

Refactor seq_file_parser.py for performance and readability #92

Merged: 8 commits into nasa:devel on Nov 17, 2022

Conversation

ThibFrgsGmz (Contributor)

Originating Project/Creator
Affected Component
Affected Architecture(s)
Related Issue(s) void
Has Unit Tests (y/n) n
Builds Without Errors (y/n) Let CI run
Unit Tests Pass (y/n) Let CI run
Documentation Included (y/n) n

Change Description

This PR aims to:

  • Remove an unnecessary else branch to reduce nesting (def parseAbsolute, L210)
  • Convert a for loop into a list comprehension and return it directly (def replaceSpacesAndCommas, L69)
  • Use f-strings instead of string interpolation
  • Replace dict() / list() calls with {} / [] literals for better performance

Rationale

Cleaner code, better performance, and improved readability.
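The four refactor patterns above can be sketched as follows. This is a hypothetical illustration only: the function and variable names below are made up and are not the actual fprime-gds code.

```python
# 1. Drop an unnecessary else after a return to reduce nesting:
def parse_flag(token):
    if token.startswith("--"):
        return token[2:]
    return token  # previously wrapped in an else branch

# 2. Replace a for loop that builds a list with a list comprehension
#    returned directly:
def replace_spaces_and_commas(items):
    return [item.replace(" ", "_").replace(",", "_") for item in items]

# 3. Prefer f-strings over older interpolation styles:
name, count = "channel", 3
message = f"{name} updated {count} times"  # vs. "%s updated %d times" % (name, count)

# 4. Prefer literals over dict()/list() calls (skips a global-name
#    lookup plus a function call):
registry = {}   # instead of dict()
pending = []    # instead of list()
```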

Testing/Review Recommendations

void

Future Work

void

github-actions bot commented Aug 20, 2022

@check-spelling-bot Report

🔴 Please review

See the 📂 files view or the 📜 action log for details.

Unrecognized words (145)
ab
AF
am
an
as
at
av
Az
be
bg
BI
br
bs
by
bz
cc
cd
ch
co
cs
dd
de
di
dl
Dn
do
DP
ds
dt
DW
ec
ed
el
em
en
ep
eq
es
ex
fa
FF
fg
fn
fp
FR
fs
fw
Ga
GB
gd
go
gt
Ha
he
hi
hr
hs
Hz
I've
IB
id
IE
if
ii
in
io
is
it
IV
ja
js
Ki
La
lg
li
lo
lt
mb
md
me
mf
mh
Mi
ml
mm
mp
mq
mr
ms
mt
mw
my
na
nd
nl
no
of
ok
ol
on
op
or
os
ot
pi
pl
pr
ps
pt
qt
rc
re
rs
rt
sf
sh
sm
so
SP
sr
ss
st
sw
SZ
tb
Te
th
TK
to
tr
ts
ui
un
up
us
UT
vs
wb
we
we're
ws
wy
Xe
xi
xs
Previously acknowledged words that are now absent: arglist beforeunload Chainable css cwd deserialization dts EINPROGRESS endtime frac functools isclass iterables itsdangerous ooo osascript PYTHONHOME replayer setblocking sss starttime taggable timeval UIs VIRTUALENV xls
To accept ✔️ these unrecognized words as correct and remove the previously acknowledged and now absent words, run the following commands

... in a clone of the git@github.com:ThibFrgsGmz/fprime-gds.git repository
on the feat/rm_unneeded_else_ branch:

update_files() {
perl -e '
my @expect_files=qw('".github/actions/spelling/expect.txt"');
@ARGV=@expect_files;
my @stale=qw('"$patch_remove"');
my $re=join "|", @stale;
my $suffix=".".time();
my $previous="";
sub maybe_unlink { unlink($_[0]) if $_[0]; }
while (<>) {
if ($ARGV ne $old_argv) { maybe_unlink($previous); $previous="$ARGV$suffix"; rename($ARGV, $previous); open(ARGV_OUT, ">$ARGV"); select(ARGV_OUT); $old_argv = $ARGV; }
next if /^(?:$re)(?:(?:\r|\n)*$| .*)/; print;
}; maybe_unlink($previous);'
perl -e '
my $new_expect_file=".github/actions/spelling/expect.txt";
use File::Path qw(make_path);
use File::Basename qw(dirname);
make_path (dirname($new_expect_file));
open FILE, q{<}, $new_expect_file; chomp(my @words = <FILE>); close FILE;
my @add=qw('"$patch_add"');
my %items; @items{@words} = @words x (1); @items{@add} = @add x (1);
@words = sort {lc($a)."-".$a cmp lc($b)."-".$b} keys %items;
open FILE, q{>}, $new_expect_file; for my $word (@words) { print FILE "$word\n" if $word =~ /\w/; };
close FILE;
system("git", "add", $new_expect_file);
'
}

comment_json=$(mktemp)
curl -L -s -S \
-H "Content-Type: application/json" \
"https://api.github.com/repos/fprime-community/fprime-gds/issues/comments/1221267709" > "$comment_json"
comment_body=$(mktemp)
jq -r ".body // empty" "$comment_json" | tr -d "\\r" > $comment_body
rm $comment_json

patch_remove=$(perl -ne 'next unless s{^</summary>(.*)</details>$}{$1}; print' < "$comment_body")

patch_add=$(perl -e '$/=undef; $_=<>; if (m{Unrecognized words[^<]*</summary>\n*```\n*([^<]*)```\n*</details>$}m) { print "$1" } elsif (m{Unrecognized words[^<]*\n\n((?:\w.*\n)+)\n}m) { print "$1" };' < "$comment_body")

update_files
rm $comment_body
git add -u
Available dictionaries could cover words not in the dictionary

This includes both expected items (735) from .github/actions/spelling/expect.txt and unrecognized words (145)

cspell:python/src/python/python-lib.txt (3873) covers 136 of them
cspell:php/php.txt (2597) covers 71 of them
cspell:python/src/python/python.txt (453) covers 68 of them
cspell:win32/src/win32.txt (53509) covers 55 of them
cspell:node/node.txt (1768) covers 51 of them

Consider adding them using (in .github/workflows/spelling.yml):

      with:
        extra_dictionaries:
          cspell:python/src/python/python-lib.txt
          cspell:php/php.txt
          cspell:python/src/python/python.txt
          cspell:win32/src/win32.txt
          cspell:node/node.txt

To stop checking additional dictionaries, add:

      with:
        check_extra_dictionaries: ''
Warnings (1)

See the 📂 files view or the 📜 action log for details.

ℹ️ limited-references: 96

See ℹ️ Event descriptions for more information.

LeStarch (Collaborator) left a comment:


Fixes review notes. I am curious why you use a list comprehension as opposed to the original list(map()) call as it is longer, but the inheritance issue has been fixed.

ThibFrgsGmz (Contributor, Author) commented Nov 11, 2022

@LeStarch I am curious why you use a list comprehension as opposed to the original list(map()) call as it is longer

Since I learned that it is slower to use dict/list keywords, I tend to always want to delete them.
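For context on the list(map()) vs. list-comprehension question, a minimal timeit sketch (illustrative only; absolute numbers depend on the Python version and machine, and map() with a builtin can come out ahead):

```python
import timeit

data = list(range(1000))

def with_map():
    return list(map(str, data))

def with_comprehension():
    return [str(x) for x in data]

# Both build the same list; which is faster varies by workload.
map_time = timeit.timeit(with_map, number=1000)
comp_time = timeit.timeit(with_comprehension, number=1000)
print(f"list(map(str, ...)):   {map_time:.4f}s")
print(f"[str(x) for x in ...]: {comp_time:.4f}s")

# The {} literal avoids the global-name lookup and call that dict() incurs:
literal_time = timeit.timeit("{}", number=1_000_000)
call_time = timeit.timeit("dict()", number=1_000_000)
print(f"{{}} literal: {literal_time:.4f}s, dict() call: {call_time:.4f}s")
```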

@LeStarch LeStarch merged commit 30fa76d into nasa:devel Nov 17, 2022
Labels: none yet. Projects: none yet.