
imp module is deprecated #4275

Merged: 13 commits merged into apache:master on Nov 15, 2019

Conversation

were (Contributor) commented Nov 7, 2019

Python has deprecated the imp module, so we can no longer use imp.load_source to import a file by its path.

A workaround is to execute the source so that the closures it defines are loaded into the symbol table (i.e., the environment).
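For illustration, here is a minimal sketch of the exec-based workaround described above (the helper name and file path are hypothetical, and the PR's actual code may differ):

```python
# Deprecated usage being removed:
#   import imp
#   mod = imp.load_source("plugin", "/path/to/plugin.py")

def load_by_exec(path):
    """Execute a Python source file and return the symbols it defines."""
    env = {}
    with open(path) as f:
        source = f.read()
    # exec is what pylint later complains about, hence the inline disable.
    exec(compile(source, path, "exec"), env)  # pylint: disable=exec-used
    return env
```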

were (Contributor Author) commented Nov 7, 2019

@junrushao1994 Can you take a look?

junrushao (Member) left a comment

Are there any other workarounds besides exec? Pylint doesn't seem to like it. Otherwise, you should disable pylint on that line.
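For reference, an exec-free way to load a file by path is the standard importlib machinery (a sketch only, not necessarily what this PR ended up using):

```python
import importlib.util

def load_source(name, path):
    """Load a module from a file path, roughly equivalent to imp.load_source."""
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module
```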

junrushao (Member) commented

It seems your PR failed in an unrelated place, so feel free to re-trigger it.

BTW, I saw that import imp is also used in topi/python/topi/cpp.py:21. Could you also fix that if it is easy?

yzhliu added the 'status: need update' (need update based on feedbacks) label on Nov 11, 2019
were (Contributor Author) commented Nov 15, 2019

In topi.cpp, a bunch of dummy modules are created so that we can avoid having a bunch of module files with only one line each.

That approach can achieve exactly the same effect while replacing imp.

On the other hand, I know it is also acceptable to have modules that just load the FFI, like tvm.arith.

I am not sure which one is the better way to get rid of imp; see the sketch below.
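An editorial sketch of the two options being weighed here, with illustrative names rather than the PR's actual code: the old cpp.py style fabricates dummy submodules at runtime, while the alternative keeps one small real module file per namespace that only loads the FFI, in the spirit of tvm.arith.

```python
import sys
import types

# Old style (roughly what imp.new_module enabled in topi/cpp.py): create
# empty submodules at runtime and register them, so no extra files are needed.
def make_dummy_module(name):
    mod = types.ModuleType(name)
    sys.modules[name] = mod
    return mod

nn = make_dummy_module("topi.cpp.nn")  # illustrative name

# New style: a real file such as topi/python/topi/cpp/nn.py whose entire body
# is a docstring plus a single FFI-loading call, similar to how tvm.arith
# is written.
```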

were (Contributor Author) commented Nov 15, 2019

@Laurawly @Huyuwei Any suggestions?

@@ -0,0 +1,9 @@
"""FFI for C++ TOPI ops and schedules"""
Member commented

Why does this file not have an ASF header, and why could it still pass our test?

@@ -0,0 +1,7 @@
"""FFI for vision TOPI ops and schedules"""
Member commented

Why does this file not have an ASF header, and why could it still pass our test?

Contributor Author commented

I do not know. When linting, it only asks me to add the ASF header to new files other than __init__.py.

Contributor Author commented

https://github.com/apache/incubator-tvm/blob/master/topi/python/topi/x86/__init__.py
This file also does not have an ASF header.
I do not know why either.

Member commented

Okay, then I am fine with it :-)

tqchen merged commit 9e6371f into apache:master on Nov 15, 2019
tqchen (Member) commented Nov 15, 2019

Thanks @were @junrushao1994

tqchen added the 'status: accepted' label and removed the 'status: need update' (need update based on feedbacks) label on Nov 15, 2019
zxy844288792 pushed a commit to zxy844288792/tvm that referenced this pull request on Nov 15, 2019
kevinthesun pushed a commit to neo-ai/tvm that referenced this pull request Nov 25, 2019
* [TOPI][OP] Support Faster-RCNN Proposal OP on CPU (apache#4297)

* Support Proposal operator on CPU.

* PyLint space issue

* PyLint space issue

* Pylint singleton-comparison issue

* [QNN][Legalize] Specialize for Platforms without any fast Int8 arithmetic units. (apache#4307)

* fix error when memory_id is VTA_MEM_ID_OUT (apache#4330)

* [CI][DOCKER] Add ONNX runtime dep (apache#4314)

* [DOCKER] Add ONNX runtime dep

* Improve ci script

* [QNN] Quantize - Fixing the sequence of lowering. (apache#4316)

* [QNN] Use Int16 upcast in Fallback Conv2D. Fix test names. (apache#4329)

* [doc][fix] fix sphinx parsing for pass infra tutorial (apache#4337)

* change ci image version (apache#4313)

* [Codegen] remove fp16 function override for cuda (apache#4331)

* add volatile override back

* [codegen] remove fp16 function override for cuda

* [CI] Set workspace to be per executor (apache#4336)

* [Build][Windows] Fix Windows build by including cctype (apache#4319)

* Fix build

* dummy change to retrigger CI

* dummy change to retrigger ci

* dummy change to retrigger ci

* Enable hipModuleGetGlobal() (apache#4321)

* [Relay][Pass] Add pass to remove unused functions in relay module (apache#4334)

* [Relay][Pass] Add pass to remove unused functions in relay module

* Add tests

* Fix lint

* Fix visit order

* Add pass argument

* Fix

* Add support for quant. mul operator in tflite frontend (apache#4283)

A test for qnn_mul has to be added when the qnn elemwise tests (apache#4282) get merged.

* Add topi.nn.fifo_buffer to TVM doc (apache#4343)

* Solve custom model of prelu (apache#4326)

* Deprecate NNVM warning msg (apache#4333)

* [Contrib] Add MKL DNN option (apache#4323)

* [Contrib] Add MKL DNN

* update

* update

* [Relay][Frontend][TF] Fix transpose when axes is not a param (apache#4327)

* [Relay][Frontend][TF] Use _infer_value_simulated when axes is not a const to Transpose

* uncomment tests

* dummy change to retrigger ci

* [RUNTIME] Add device query for AMD GcnArch (apache#4341)

* add gcnArch query

* kGcnArch query for cuda is a no-op

* [Test][Relay][Pass] Add test case for lambda lift (apache#4317)

* [Relay][Frontend][ONNX] operator support: DepthToSpace, SpaceToDepth (apache#4271)

* imp module is deprecated (apache#4275)

* [VTA] Bug fix for padded load with large inputs (apache#4293)

* bug fix for padded load with large inputs

* Update TensorLoad.scala

* Update test_vta_insn.py

* fix inconsistent tag name (apache#4134)

* [CodeGen] Add build config option disable_assert to control whether to generate assert (apache#4340)

* Bump up CUDA log version in tophub.py (apache#4347)

* Add check to ensure input file was successfully opened in NNVM deploy code demo (apache#4315)

* [COMMUNITY] Add DISCLAIMER, KEYS for ASF release (apache#4345)

* [COMMUNITY] Add DISCLAIMER, KEYS for ASF release

* Add file name spec

* [Relay][VM][Interpreter] Enable first-class constructors in VM and interpreter via eta expansion (apache#4218)

* Fix constructor pretty printing

* Make Module::HasDef name consistent with API

* Add VM constructor compilation via eta expansion

* Lint

* Fix CI

* Fix failing test

* Address comment

* Retrigger CI

* Retrigger CI

* Update dmlc_tvm_commit_id.txt