
test: Add a test for the global compile function #700

Merged: 7 commits merged from test_global_compile into master on Nov 10, 2021

Conversation

narendasan
Collaborator

Description

Adds an additional test for the top-level compilation API.

Type of change

Please delete options that are not relevant and/or add your own.

  • New feature (non-breaking change which adds functionality)

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes

Signed-off-by: Naren Dasan <[email protected]>

@github-actions github-actions bot left a comment


There are some changes that do not conform to Python style guidelines:

--- /workspace/tests/py/test_ptq_to_backend.py	(original)
+++ /workspace/tests/py/test_ptq_to_backend.py	(reformatted)
@@ -26,25 +26,27 @@
                                                              batch_size=1,
                                                              shuffle=False,
                                                              num_workers=1)
-        self.calibrator = torchtrt.ptq.DataLoaderCalibrator(self.testing_dataloader,
-                                                           cache_file='./calibration.cache',
-                                                           use_cache=False,
-                                                           algo_type=torchtrt.ptq.CalibrationAlgo.ENTROPY_CALIBRATION_2,
-                                                           device=torch.device('cuda:0'))
+        self.calibrator = torchtrt.ptq.DataLoaderCalibrator(
+            self.testing_dataloader,
+            cache_file='./calibration.cache',
+            use_cache=False,
+            algo_type=torchtrt.ptq.CalibrationAlgo.ENTROPY_CALIBRATION_2,
+            device=torch.device('cuda:0'))

        self.spec = {
            "forward":
-                torchtrt.ts.TensorRTCompileSpec(**{
-                    "inputs": [torchtrt.Input([1, 3, 32, 32])],
-                    "enabled_precisions": {torch.float, torch.half, torch.int8},
-                    "calibrator": self.calibrator,
-                    "device": {
-                        "device_type": torchtrt.DeviceType.GPU,
-                        "gpu_id": 0,
-                        "dla_core": 0,
-                        "allow_gpu_fallback": False,
-                    }
-                })
+                torchtrt.ts.TensorRTCompileSpec(
+                    **{
+                        "inputs": [torchtrt.Input([1, 3, 32, 32])],
+                        "enabled_precisions": {torch.float, torch.half, torch.int8},
+                        "calibrator": self.calibrator,
+                        "device": {
+                            "device_type": torchtrt.DeviceType.GPU,
+                            "gpu_id": 0,
+                            "dla_core": 0,
+                            "allow_gpu_fallback": False,
+                        }
+                    })
        }

    def compute_accuracy(self, testing_dataloader, model):
--- /workspace/tests/py/test_api.py	(original)
+++ /workspace/tests/py/test_api.py	(reformatted)
@@ -31,26 +31,25 @@

    def test_compile_script(self):
        trt_mod = torchtrt.ts.compile(self.scripted_model,
-                                  inputs=[self.input],
-                                  device=torchtrt.Device(gpu_id=0),
-                                  enabled_precisions={torch.float})
+                                      inputs=[self.input],
+                                      device=torchtrt.Device(gpu_id=0),
+                                      enabled_precisions={torch.float})
        same = (trt_mod(self.input) - self.scripted_model(self.input)).abs().max()
        self.assertTrue(same < 2e-2)
-

    def test_compile_global(self):
        trt_mod = torchtrt.compile(self.scripted_model,
-                                  inputs=[self.input],
-                                  device=torchtrt.Device(gpu_id=0),
-                                  enabled_precisions={torch.float})
+                                   inputs=[self.input],
+                                   device=torchtrt.Device(gpu_id=0),
+                                   enabled_precisions={torch.float})
        same = (trt_mod(self.input) - self.scripted_model(self.input)).abs().max()
        self.assertTrue(same < 2e-2)

    def test_compile_global_nn_mod(self):
        trt_mod = torchtrt.compile(self.model,
-                                  inputs=[self.input],
-                                  device=torchtrt.Device(gpu_id=0),
-                                  enabled_precisions={torch.float})
+                                   inputs=[self.input],
+                                   device=torchtrt.Device(gpu_id=0),
+                                   enabled_precisions={torch.float})
        same = (trt_mod(self.input) - self.scripted_model(self.input)).abs().max()
        self.assertTrue(same < 2e-2)

@@ -224,16 +223,16 @@
    def test_input_use_default_fp32(self):
        ts_model = torch.jit.script(self.model)
        trt_mod = torchtrt.ts.compile(ts_model,
-                                  inputs=[torchtrt.Input(self.input.shape)],
-                                  enabled_precisions={torch.float, torch.half})
+                                      inputs=[torchtrt.Input(self.input.shape)],
+                                      enabled_precisions={torch.float, torch.half})
        trt_mod(self.input)

    def test_input_respect_user_setting_fp32_weights_fp16_in(self):
        ts_model = torch.jit.script(self.model)
        trt_mod = torchtrt.ts.compile(ts_model,
-                                  inputs=[self.input.half()],
-                                  require_full_compilation=True,
-                                  enabled_precisions={torch.float, torch.half})
+                                      inputs=[self.input.half()],
+                                      require_full_compilation=True,
+                                      enabled_precisions={torch.float, torch.half})
        trt_mod(self.input.half())

    def test_input_respect_user_setting_fp32_weights_fp16_in_non_constructor(self):
@@ -242,9 +241,9 @@
        input_spec.dtype = torch.half

        trt_mod = torchtrt.ts.compile(ts_model,
-                                  inputs=[input_spec],
-                                  require_full_compilation=True,
-                                  enabled_precisions={torch.float, torch.half})
+                                      inputs=[input_spec],
+                                      require_full_compilation=True,
+                                      enabled_precisions={torch.float, torch.half})
        trt_mod(self.input.half())


@@ -258,8 +257,8 @@
        half_mod.half()

        trt_mod = torchtrt.ts.compile(half_mod,
-                                  inputs=[torchtrt.Input(self.input.shape)],
-                                  enabled_precisions={torch.float, torch.half})
+                                      inputs=[torchtrt.Input(self.input.shape)],
+                                      enabled_precisions={torch.float, torch.half})
        trt_mod(self.input.half())

    def test_input_use_default_fp16_without_fp16_enabled(self):
@@ -274,9 +273,9 @@
        half_mod.half()

        trt_mod = torchtrt.ts.compile(half_mod,
-                                  inputs=[self.input],
-                                  require_full_compilation=True,
-                                  enabled_precisions={torch.float, torch.half})
+                                      inputs=[self.input],
+                                      require_full_compilation=True,
+                                      enabled_precisions={torch.float, torch.half})
        trt_mod(self.input)

    def test_input_respect_user_setting_fp16_weights_fp32_in_non_constuctor(self):
@@ -287,9 +286,9 @@
        input_spec.dtype = torch.float

        trt_mod = torchtrt.ts.compile(half_mod,
-                                  inputs=[input_spec],
-                                  require_full_compilation=True,
-                                  enabled_precisions={torch.float, torch.half})
+                                      inputs=[input_spec],
+                                      require_full_compilation=True,
+                                      enabled_precisions={torch.float, torch.half})
        trt_mod(self.input)


@@ -369,14 +368,15 @@
        self.assertEqual(device.device_type, torchtrt.DeviceType.GPU)
        self.assertEqual(device.gpu_id, 0)

+
class TestInput(unittest.TestCase):

    def _verify_correctness(self, struct: torchtrt.Input, target: Dict) -> bool:
        internal = struct._to_internal()

-        list_eq = lambda al, bl: all([a == b for (a, b) in zip (al, bl)])
-
-        eq = lambda a, b : a == b
+        list_eq = lambda al, bl: all([a == b for (a, b) in zip(al, bl)])
+
+        eq = lambda a, b: a == b

        def field_is_correct(field, equal_fn, a1, a2):
            equal = equal_fn(a1, a2)
@@ -388,12 +388,12 @@
        opt_ = field_is_correct("opt", list_eq, internal.opt, target["opt"])
        max_ = field_is_correct("max", list_eq, internal.max, target["max"])
        is_dynamic_ = field_is_correct("is_dynamic", eq, internal.input_is_dynamic, target["input_is_dynamic"])
-        explicit_set_dtype_ = field_is_correct("explicit_dtype", eq, internal._explicit_set_dtype, target["explicit_set_dtype"])
+        explicit_set_dtype_ = field_is_correct("explicit_dtype", eq, internal._explicit_set_dtype,
+                                               target["explicit_set_dtype"])
        dtype_ = field_is_correct("dtype", eq, int(internal.dtype), int(target["dtype"]))
        format_ = field_is_correct("format", eq, int(internal.format), int(target["format"]))

-        return all([min_,opt_,max_,is_dynamic_,explicit_set_dtype_,dtype_,format_])
-
+        return all([min_, opt_, max_, is_dynamic_, explicit_set_dtype_, dtype_, format_])

    def test_infer_from_example_tensor(self):
        shape = [1, 3, 255, 255]
@@ -410,7 +410,6 @@
        example_tensor = torch.randn(shape).half()
        i = torchtrt.Input._from_tensor(example_tensor)
        self.assertTrue(self._verify_correctness(i, target))
-

    def test_static_shape(self):
        shape = [1, 3, 255, 255]
@@ -499,8 +498,11 @@
        self.assertTrue(self._verify_correctness(i, target))

        tensor_shape = lambda shape: torch.randn(shape).shape
-        i = torchtrt.Input(min_shape=tensor_shape(min_shape), opt_shape=tensor_shape(opt_shape), max_shape=tensor_shape(max_shape))
-        self.assertTrue(self._verify_correctness(i, target))
+        i = torchtrt.Input(min_shape=tensor_shape(min_shape),
+                           opt_shape=tensor_shape(opt_shape),
+                           max_shape=tensor_shape(max_shape))
+        self.assertTrue(self._verify_correctness(i, target))
+

def test_suite():
    suite = unittest.TestSuite()
--- /workspace/tests/py/test_to_backend_api.py	(original)
+++ /workspace/tests/py/test_to_backend_api.py	(reformatted)
@@ -13,24 +13,25 @@
        self.scripted_model = torch.jit.script(self.model)
        self.spec = {
            "forward":
-                torchtrt.ts.TensorRTCompileSpec(**{
-                    "inputs": [torchtrt.Input([1, 3, 300, 300])],
-                    "enabled_precisions": {torch.float},
-                    "refit": False,
-                    "debug": False,
-                    "strict_types": False,
-                    "device": {
-                        "device_type": torchtrt.DeviceType.GPU,
-                        "gpu_id": 0,
-                        "dla_core": 0,
-                        "allow_gpu_fallback": True
-                    },
-                    "capability": torchtrt.EngineCapability.default,
-                    "num_min_timing_iters": 2,
-                    "num_avg_timing_iters": 1,
-                    "max_batch_size": 0,
-                    "disable_tf32": False,
-                })
+                torchtrt.ts.TensorRTCompileSpec(
+                    **{
+                        "inputs": [torchtrt.Input([1, 3, 300, 300])],
+                        "enabled_precisions": {torch.float},
+                        "refit": False,
+                        "debug": False,
+                        "strict_types": False,
+                        "device": {
+                            "device_type": torchtrt.DeviceType.GPU,
+                            "gpu_id": 0,
+                            "dla_core": 0,
+                            "allow_gpu_fallback": True
+                        },
+                        "capability": torchtrt.EngineCapability.default,
+                        "num_min_timing_iters": 2,
+                        "num_avg_timing_iters": 1,
+                        "max_batch_size": 0,
+                        "disable_tf32": False,
+                    })
        }

    def test_to_backend_lowering(self):
Reformatting /workspace/tests/modules/hub.py
Reformatting /workspace/tests/py/test_api_dla.py
Reformatting /workspace/tests/py/test_multi_gpu.py
Reformatting /workspace/tests/py/test_qat_trt_accuracy.py
Reformatting /workspace/tests/py/test_to_backend_api.py
--- /workspace/tests/py/test_ptq_dataloader_calibrator.py	(original)
+++ /workspace/tests/py/test_ptq_dataloader_calibrator.py	(reformatted)
@@ -26,11 +26,12 @@
                                                              batch_size=1,
                                                              shuffle=False,
                                                              num_workers=1)
-        self.calibrator = torchtrt.ptq.DataLoaderCalibrator(self.testing_dataloader,
-                                                           cache_file='./calibration.cache',
-                                                           use_cache=False,
-                                                           algo_type=torchtrt.ptq.CalibrationAlgo.ENTROPY_CALIBRATION_2,
-                                                           device=torch.device('cuda:0'))
+        self.calibrator = torchtrt.ptq.DataLoaderCalibrator(
+            self.testing_dataloader,
+            cache_file='./calibration.cache',
+            use_cache=False,
+            algo_type=torchtrt.ptq.CalibrationAlgo.ENTROPY_CALIBRATION_2,
+            device=torch.device('cuda:0'))

    def compute_accuracy(self, testing_dataloader, model):
        total = 0
Reformatting /workspace/tests/py/test_trt_intercompatability.py
Reformatting /workspace/tests/py/test_ptq_trt_calibrator.py
Reformatting /workspace/tests/py/test_ptq_to_backend.py
Reformatting /workspace/tests/py/test_api.py
Reformatting /workspace/tests/py/model_test_case.py
Reformatting /workspace/tests/py/test_ptq_dataloader_calibrator.py
ERROR: Some files do not conform to style guidelines


@github-actions github-actions bot left a comment


Code conforms to C++ style guidelines

narendasan and others added 2 commits November 10, 2021 14:21

@github-actions github-actions bot left a comment


Code conforms to Python style guidelines


@github-actions github-actions bot left a comment


Code conforms to C++ style guidelines

@narendasan narendasan merged commit 8143489 into master Nov 10, 2021
@narendasan narendasan deleted the test_global_compile branch November 10, 2021 23:11
Labels
component: tests Issues re: Tests

1 participant