
[ENH] Start sketching out merge lattice and tensor class #28

Draft: wants to merge 11 commits into main
Conversation

@adam2392 (Contributor) commented Jul 17, 2023

Closes: #27

Useful links: https://godbolt.org/z/crbY5ae8n

@adam2392
Copy link
Contributor Author

Okay, so in order to work towards the MergeLattice, we first want an MVP of the Tensor class. Looking at taco's documentation (http://tensor-compiler.org/docs/tensors.html) and its Tensor class, the main functionalities look like:

  1. constructor with arbitrary template types [x]
  2. `insert`: inserting a value at a specific index in the tensor
  3. `pack`: I don't see why this would be needed, since we define the data structures at compile time, but maybe I'm wrong.
  4. querying basic metadata of the tensor, such as `get_dimension` and `get_levels`

Is there any other explicit functionality we would like? Perhaps a `convert_format` that takes a tuple of level types and uses them to construct new levels, e.g. converting (dense, dense) into (compressed, hashed) or something like that?

Error-checking

During construction of the tensor, Hameer, you mentioned we should do some error-checking. Should we work on this now, or possibly table it until we connect this with the MergeLattice?
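Just to pin down what "error-checking during construction" might mean, here is one hedged sketch. The helper name `check_dims` and the specific checks (non-zero dimensions) are assumptions; the real set of invariants is exactly what is still open.

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <stdexcept>

// Hypothetical construction-time validation: reject degenerate shapes early
// rather than failing later inside level code. What to validate is TBD.
template <std::size_t Order>
void check_dims(const std::array<std::size_t, Order>& dims) {
    for (std::size_t d : dims) {
        if (d == 0) {
            throw std::invalid_argument("tensor dimensions must be non-zero");
        }
    }
}
```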

Signed-off-by: Adam Li <[email protected]>
@adam2392 (Contributor, Author) commented Jul 27, 2023

A few questions:

  1. How do we get the data / pointer-to-the-next-level at a specific index if the level doesn't support `locate`? Do we just iterate?

To iterate, is this the right pseudocode? The `get_data` function for writing/reading would be a runtime method, right?

// E.g. 3D tensor, then write non-zero value to (i,j,k) location
// at a high level in the Tensor need to access the levels to use the append_init, append_*
// methods
//
// for idx != i:
//     append(j, ithlevel)
//     for jdx <= j:
//         append(k, jthlevel)
//         for kdx <= k:
//             append(value, kthlevel)
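For what it's worth, the append pattern in the pseudocode above can be made concrete for a single compressed level. This is only a sketch: `append_coord` and `append_edges` are assumed names modeled on taco's level-function terminology, not an existing API in this repo.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of one compressed level supporting the append pattern:
// coordinates are appended one at a time, and after finishing a parent
// position we record how many edges (children) it received.
struct CompressedLevel {
    std::vector<std::size_t> pos{0};  // pos[p+1] - pos[p] = #coords under parent p
    std::vector<std::size_t> crd;     // appended coordinates

    void append_coord(std::size_t i) { crd.push_back(i); }
    void append_edges(std::size_t nedges) { pos.push_back(pos.back() + nedges); }
};
```

So the nested loops in the pseudocode would bottom out in `append_coord` calls at each level, with `append_edges` closing off each parent position once its children are written.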
  2. Regarding enhancing container_traits.hpp, what sort of functions should the `container_traits` struct provide? We need to provide an implementation so that someone could plug in an arbitrary set of containers, right?

E.g. would it be these?

template <typename Container>
struct container_traits
{
    template <typename size_type>
    constexpr void resize(size_type count)
    {
    }

    template <typename T>
    constexpr void push_back(const T& value)
    {
    }

    template <typename size_type, typename Elem>
    constexpr Elem operator[](size_type pos)
    {
    }
};
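One hypothetical way to answer my own question: make the traits a generic forwarding layer with static members, so any container exposing `resize`/`push_back`/`operator[]` (e.g. `std::vector`) works unchanged, and exotic containers can specialize. The static-member shape and the `at` name here are assumptions, not settled API.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical forwarding traits: default implementation delegates to the
// container's own members; users specialize this for containers with a
// different interface.
template <typename Container>
struct container_traits {
    using value_type = typename Container::value_type;
    using size_type = typename Container::size_type;

    static void resize(Container& c, size_type count) { c.resize(count); }
    static void push_back(Container& c, const value_type& v) { c.push_back(v); }
    static value_type& at(Container& c, size_type pos) { return c[pos]; }
};
```

The upside of static members over the empty member-function stubs above is that the level code never needs to construct a traits object, and a specialization only has to replace the operations its container actually lacks.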

Signed-off-by: Adam Li <[email protected]>
Successfully merging this pull request may close these issues.

WIP: Initial Merge Lattice API and unit-test