diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index 35a3038de..a6a11dba4 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.11.1","generation_timestamp":"2024-11-08T17:43:27","documenter_version":"1.7.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.11.1","generation_timestamp":"2024-11-14T19:34:23","documenter_version":"1.8.0"}} \ No newline at end of file diff --git a/dev/CONTRIBUTING/index.html b/dev/CONTRIBUTING/index.html index 2d8856251..6b18a1e3a 100644 --- a/dev/CONTRIBUTING/index.html +++ b/dev/CONTRIBUTING/index.html @@ -5,4 +5,4 @@ julia +release~x64 test/runtests.jl --overwrite

The test suite takes a while to run. You can filter to only run a selection of test suites by specifying them as positional arguments, e.g.

./test/runtests.jl constructors conversions representation

This information is summarized with ./test/runtests.jl --help

Python test suite

finch-tensor-python contains a separate Array API-compatible test suite written in Python. It requires Python 3.10 or later and Poetry to be installed.

It can be run with:

git clone https://github.com/finch-tensor/finch-tensor-python.git
 cd finch-tensor-python
 poetry install --with test
-FINCH_REPO_PATH=<PATH_TO_FINCH_REPO> poetry run pytest tests/

Benchmarking

The Finch test suite includes a benchmarking script that measures Finch performance on a variety of kernels. It also includes some scripts to help compare Finch performance on the feature branch to the main branch. To run the benchmarking script, run ./benchmarks/runbenchmarks.jl. To run the comparison script, run ./benchmarks/runjudge.jl. Both scripts take a while to run and generate a report at the end.

Documentation

The /docs directory includes Finch documentation in /src, and a built website in /build. You can build the website with ./docs/make.jl. You can run doctests with ./docs/test.jl, and fix doctests with ./docs/fix.jl, though both are included as part of the test suite.

+FINCH_REPO_PATH=<PATH_TO_FINCH_REPO> poetry run pytest tests/

Benchmarking

The Finch test suite includes a benchmarking script that measures Finch performance on a variety of kernels. It also includes some scripts to help compare Finch performance on the feature branch to the main branch. To run the benchmarking script, run ./benchmarks/runbenchmarks.jl. To run the comparison script, run ./benchmarks/runjudge.jl. Both scripts take a while to run and generate a report at the end.

Documentation

The /docs directory includes Finch documentation in /src, and a built website in /build. You can build the website with ./docs/make.jl. You can run doctests with ./docs/test.jl, and fix doctests with ./docs/fix.jl, though both are included as part of the test suite.

diff --git a/dev/appendices/changelog/index.html b/dev/appendices/changelog/index.html index 108aee38b..ec3c5f11b 100644 --- a/dev/appendices/changelog/index.html +++ b/dev/appendices/changelog/index.html @@ -1,2 +1,2 @@ -TODO · Finch.jl
+TODO · Finch.jl
diff --git a/dev/appendices/directory_structure/index.html b/dev/appendices/directory_structure/index.html index e19180d63..7c1917a44 100644 --- a/dev/appendices/directory_structure/index.html +++ b/dev/appendices/directory_structure/index.html @@ -57,4 +57,4 @@ ├── [Manifest.toml] # local listing of installed dependencies (don't commit this) ├── LICENSE ├── CONTRIBUTING.md -└── README.md +└── README.md diff --git a/dev/appendices/faqs/index.html b/dev/appendices/faqs/index.html index 5160ab5e2..bad870cc1 100644 --- a/dev/appendices/faqs/index.html +++ b/dev/appendices/faqs/index.html @@ -1,2 +1,2 @@ -TODO · Finch.jl
+TODO · Finch.jl
diff --git a/dev/appendices/glossary/index.html b/dev/appendices/glossary/index.html index acb7d5e28..252c6abaa 100644 --- a/dev/appendices/glossary/index.html +++ b/dev/appendices/glossary/index.html @@ -1,2 +1,2 @@ -TODO · Finch.jl
+TODO · Finch.jl
diff --git a/dev/appendices/publications_articles/index.html b/dev/appendices/publications_articles/index.html index 2192e6262..c3d774688 100644 --- a/dev/appendices/publications_articles/index.html +++ b/dev/appendices/publications_articles/index.html @@ -1,2 +1,2 @@ -TODO · Finch.jl
+TODO · Finch.jl
diff --git a/dev/assets/documenter.js b/dev/assets/documenter.js index 82252a11d..7d68cd808 100644 --- a/dev/assets/documenter.js +++ b/dev/assets/documenter.js @@ -612,176 +612,194 @@ function worker_function(documenterSearchIndex, documenterBaseURL, filters) { }; } -// `worker = Threads.@spawn worker_function(documenterSearchIndex)`, but in JavaScript! -const filters = [ - ...new Set(documenterSearchIndex["docs"].map((x) => x.category)), -]; -const worker_str = - "(" + - worker_function.toString() + - ")(" + - JSON.stringify(documenterSearchIndex["docs"]) + - "," + - JSON.stringify(documenterBaseURL) + - "," + - JSON.stringify(filters) + - ")"; -const worker_blob = new Blob([worker_str], { type: "text/javascript" }); -const worker = new Worker(URL.createObjectURL(worker_blob)); - /////// SEARCH MAIN /////// -// Whether the worker is currently handling a search. This is a boolean -// as the worker only ever handles 1 or 0 searches at a time. -var worker_is_running = false; - -// The last search text that was sent to the worker. This is used to determine -// if the worker should be launched again when it reports back results. -var last_search_text = ""; - -// The results of the last search. This, in combination with the state of the filters -// in the DOM, is used compute the results to display on calls to update_search. -var unfiltered_results = []; - -// Which filter is currently selected -var selected_filter = ""; - -$(document).on("input", ".documenter-search-input", function (event) { - if (!worker_is_running) { - launch_search(); - } -}); - -function launch_search() { - worker_is_running = true; - last_search_text = $(".documenter-search-input").val(); - worker.postMessage(last_search_text); -} - -worker.onmessage = function (e) { - if (last_search_text !== $(".documenter-search-input").val()) { - launch_search(); - } else { - worker_is_running = false; - } - - unfiltered_results = e.data; - update_search(); -}; +function runSearchMainCode() { + // `worker = Threads.@spawn worker_function(documenterSearchIndex)`, but in JavaScript! + const filters = [ + ...new Set(documenterSearchIndex["docs"].map((x) => x.category)), + ]; + const worker_str = + "(" + + worker_function.toString() + + ")(" + + JSON.stringify(documenterSearchIndex["docs"]) + + "," + + JSON.stringify(documenterBaseURL) + + "," + + JSON.stringify(filters) + + ")"; + const worker_blob = new Blob([worker_str], { type: "text/javascript" }); + const worker = new Worker(URL.createObjectURL(worker_blob)); + + // Whether the worker is currently handling a search. This is a boolean + // as the worker only ever handles 1 or 0 searches at a time. + var worker_is_running = false; + + // The last search text that was sent to the worker. This is used to determine + // if the worker should be launched again when it reports back results. + var last_search_text = ""; + + // The results of the last search. This, in combination with the state of the filters + // in the DOM, is used compute the results to display on calls to update_search. 
+ var unfiltered_results = []; + + // Which filter is currently selected + var selected_filter = ""; + + $(document).on("input", ".documenter-search-input", function (event) { + if (!worker_is_running) { + launch_search(); + } + }); -$(document).on("click", ".search-filter", function () { - if ($(this).hasClass("search-filter-selected")) { - selected_filter = ""; - } else { - selected_filter = $(this).text().toLowerCase(); + function launch_search() { + worker_is_running = true; + last_search_text = $(".documenter-search-input").val(); + worker.postMessage(last_search_text); } - // This updates search results and toggles classes for UI: - update_search(); -}); + worker.onmessage = function (e) { + if (last_search_text !== $(".documenter-search-input").val()) { + launch_search(); + } else { + worker_is_running = false; + } -/** - * Make/Update the search component - */ -function update_search() { - let querystring = $(".documenter-search-input").val(); + unfiltered_results = e.data; + update_search(); + }; - if (querystring.trim()) { - if (selected_filter == "") { - results = unfiltered_results; + $(document).on("click", ".search-filter", function () { + if ($(this).hasClass("search-filter-selected")) { + selected_filter = ""; } else { - results = unfiltered_results.filter((result) => { - return selected_filter == result.category.toLowerCase(); - }); + selected_filter = $(this).text().toLowerCase(); } - let search_result_container = ``; - let modal_filters = make_modal_body_filters(); - let search_divider = `
`; + // This updates search results and toggles classes for UI: + update_search(); + }); - if (results.length) { - let links = []; - let count = 0; - let search_results = ""; - - for (var i = 0, n = results.length; i < n && count < 200; ++i) { - let result = results[i]; - if (result.location && !links.includes(result.location)) { - search_results += result.div; - count++; - links.push(result.location); - } - } + /** + * Make/Update the search component + */ + function update_search() { + let querystring = $(".documenter-search-input").val(); - if (count == 1) { - count_str = "1 result"; - } else if (count == 200) { - count_str = "200+ results"; + if (querystring.trim()) { + if (selected_filter == "") { + results = unfiltered_results; } else { - count_str = count + " results"; + results = unfiltered_results.filter((result) => { + return selected_filter == result.category.toLowerCase(); + }); } - let result_count = `
${count_str}
`; - search_result_container = ` + let search_result_container = ``; + let modal_filters = make_modal_body_filters(); + let search_divider = `
`; + + if (results.length) { + let links = []; + let count = 0; + let search_results = ""; + + for (var i = 0, n = results.length; i < n && count < 200; ++i) { + let result = results[i]; + if (result.location && !links.includes(result.location)) { + search_results += result.div; + count++; + links.push(result.location); + } + } + + if (count == 1) { + count_str = "1 result"; + } else if (count == 200) { + count_str = "200+ results"; + } else { + count_str = count + " results"; + } + let result_count = `
${count_str}
`; + + search_result_container = ` +
+ ${modal_filters} + ${search_divider} + ${result_count} +
+ ${search_results} +
+
+ `; + } else { + search_result_container = `
${modal_filters} ${search_divider} - ${result_count} -
- ${search_results} -
-
+
0 result(s)
+ +
No result found!
`; - } else { - search_result_container = ` -
- ${modal_filters} - ${search_divider} -
0 result(s)
-
-
No result found!
- `; - } + } - if ($(".search-modal-card-body").hasClass("is-justify-content-center")) { - $(".search-modal-card-body").removeClass("is-justify-content-center"); - } + if ($(".search-modal-card-body").hasClass("is-justify-content-center")) { + $(".search-modal-card-body").removeClass("is-justify-content-center"); + } - $(".search-modal-card-body").html(search_result_container); - } else { - if (!$(".search-modal-card-body").hasClass("is-justify-content-center")) { - $(".search-modal-card-body").addClass("is-justify-content-center"); + $(".search-modal-card-body").html(search_result_container); + } else { + if (!$(".search-modal-card-body").hasClass("is-justify-content-center")) { + $(".search-modal-card-body").addClass("is-justify-content-center"); + } + + $(".search-modal-card-body").html(` +
Type something to get started!
+ `); } + } - $(".search-modal-card-body").html(` -
Type something to get started!
- `); + /** + * Make the modal filter html + * + * @returns string + */ + function make_modal_body_filters() { + let str = filters + .map((val) => { + if (selected_filter == val.toLowerCase()) { + return `${val}`; + } else { + return `${val}`; + } + }) + .join(""); + + return ` +
+ Filters: + ${str} +
`; } } -/** - * Make the modal filter html - * - * @returns string - */ -function make_modal_body_filters() { - let str = filters - .map((val) => { - if (selected_filter == val.toLowerCase()) { - return `${val}`; - } else { - return `${val}`; - } - }) - .join(""); - - return ` -
- Filters: - ${str} -
`; +function waitUntilSearchIndexAvailable() { + // It is possible that the documenter.js script runs before the page + // has finished loading and documenterSearchIndex gets defined. + // So we need to wait until the search index actually loads before setting + // up all the search-related stuff. + if (typeof documenterSearchIndex !== "undefined") { + runSearchMainCode(); + } else { + console.warn("Search Index not available, waiting"); + setTimeout(waitUntilSearchIndexAvailable, 1000); + } } +// The actual entry point to the search code +waitUntilSearchIndexAvailable(); + }) //////////////////////////////////////////////////////////////////////////////// require(['jquery'], function($) { diff --git a/dev/getting_started/index.html b/dev/getting_started/index.html index fbbc0a711..6f5ad6555 100644 --- a/dev/getting_started/index.html +++ b/dev/getting_started/index.html @@ -1,2 +1,2 @@ -TODO · Finch.jl
+TODO · Finch.jl
diff --git a/dev/guides/array_api/index.html b/dev/guides/array_api/index.html index 74b3e02da..a6f0514a7 100644 --- a/dev/guides/array_api/index.html +++ b/dev/guides/array_api/index.html @@ -91,10 +91,10 @@ y = lazy(rand(10)) z = x + y z = z + 1 -z = compute(z)

will not actually compute z until compute(z) is called, so the execution of x + y is fused with the execution of z + 1.

source
Finch.computeFunction
compute(args..., ctx=default_scheduler()) -> Any

Compute the value of a lazy tensor. The result is the argument itself, or a tuple of arguments if multiple arguments are passed.

source
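
For instance, here is a minimal sketch of the workflow described above (the tensor contents and sizes are illustrative, and it assumes the array API's broadcasting and reductions compose with lazy as shown): passing several lazy results to compute at once returns a tuple, so their shared work can be fused.

using Finch

a = lazy(fsprand(100, 0.1))
b = lazy(fsprand(100, 0.1))
c = a .+ b            # nothing is computed yet
s = sum(c)            # still a lazy reduction
m = maximum(c)        # still a lazy reduction
s, m = compute(s, m)  # both results are computed together, sharing the work for c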

Einsum

Finch also provides a highly general @einsum macro, which supports any reduction over any simple pointwise array expression.

Finch.@einsumMacro
@einsum tns[idxs...] <<op>>= ex...

Construct an einsum expression that computes the result of applying op to the tensor tns with the indices idxs and the tensors in the expression ex. The result is stored in the variable tns.

ex may be any pointwise expression consisting of function calls and tensor references of the form tns[idxs...], where tns and idxs are symbols.

The <<op>> operator can be any binary operator that is defined on the element type of the expression ex.

The einsum will evaluate the pointwise expression tns[idxs...] <<op>>= ex... over all combinations of index values in tns and the tensors in ex.

Here are a few examples:

@einsum C[i, j] += A[i, k] * B[k, j]
+z = compute(z)

will not actually compute z until compute(z) is called, so the execution of x + y is fused with the execution of z + 1.

source
Finch.computeFunction
compute(args..., ctx=default_scheduler()) -> Any

Compute the value of a lazy tensor. The result is the argument itself, or a tuple of arguments if multiple arguments are passed.

source

Einsum

Finch also provides a highly general @einsum macro, which supports any reduction over any simple pointwise array expression.

Finch.@einsumMacro
@einsum tns[idxs...] <<op>>= ex...

Construct an einsum expression that computes the result of applying op to the tensor tns with the indices idxs and the tensors in the expression ex. The result is stored in the variable tns.

ex may be any pointwise expression consisting of function calls and tensor references of the form tns[idxs...], where tns and idxs are symbols.

The <<op>> operator can be any binary operator that is defined on the element type of the expression ex.

The einsum will evaluate the pointwise expression tns[idxs...] <<op>>= ex... over all combinations of index values in tns and the tensors in ex.

Here are a few examples:

@einsum C[i, j] += A[i, k] * B[k, j]
 @einsum C[i, j, k] += A[i, j] * B[j, k]
 @einsum D[i, k] += X[i, j] * Y[j, k]
 @einsum J[i, j] = H[i, j] * I[i, j]
 @einsum N[i, j] = K[i, k] * L[k, j] - M[i, j]
 @einsum R[i, j] <<max>>= P[i, k] + Q[k, j]
-@einsum x[i] = A[i, j] * x[j]
source
+@einsum x[i] = A[i, j] * x[j]source diff --git a/dev/guides/benchmarking_tips/index.html b/dev/guides/benchmarking_tips/index.html index cd510bf81..ce1003845 100644 --- a/dev/guides/benchmarking_tips/index.html +++ b/dev/guides/benchmarking_tips/index.html @@ -32,4 +32,4 @@ ▆███████▅▄▄▃▅▄▄▃▄▇████████████▇▆▆▆▇█▇███▇▇▇▆▇█▇█▇▇▆▅▅▄▄▆▅▅▅▄▅ █ 387 ns Histogram: log(frequency) by time 452 ns < - Memory estimate: 608 bytes, allocs estimate: 2. + Memory estimate: 608 bytes, allocs estimate: 2. diff --git a/dev/guides/calling_finch/index.html b/dev/guides/calling_finch/index.html index b41d96542..e5515601e 100644 --- a/dev/guides/calling_finch/index.html +++ b/dev/guides/calling_finch/index.html @@ -5,7 +5,7 @@ A[i] = B[i] + C[i] end return A -end

Finch programs are composed using the following syntax:

Symbols are used to represent variables, and their values are taken from the environment. Loops introduce index variables into the scope of their bodies.

Finch uses the types of the arrays and symbolic analysis to discover program optimizations. If B and C are sparse array types, the program will only run over the nonzeros of either.

Semantically, Finch programs execute every iteration. However, Finch can use sparsity information to reliably skip iterations when possible.

options are optional keyword arguments:

See also: @finch_code

source
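
For instance, here is a minimal sketch of calling @finch on concrete tensors (the formats, sizes, and use of fsprand are illustrative): because B and C are stored with sparse levels, the generated loop only visits their nonzeros.

using Finch

B = Tensor(SparseList(Element(0.0)), fsprand(10, 0.5))
C = Tensor(SparseList(Element(0.0)), fsprand(10, 0.5))
A = Tensor(SparseList(Element(0.0)))

@finch begin
    A .= 0
    for i = _
        A[i] = B[i] + C[i]
    end
end
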
Finch.@finch_codeMacro

@finch_code [options...] prgm

Return the code that would be executed in order to run a finch program prgm.

See also: @finch

source

Ahead Of Time (@finch_kernel)

While @finch is the recommended way to use Finch, it is also possible to run finch ahead-of-time. The @finch_kernel macro generates a function definition ahead-of-time, which can be evaluated and then called later.

There are several reasons one might want to do this:

  1. If we want to make tweaks to the Finch implementation, we can directly modify the source code of the resulting function.
  2. When benchmarking Finch functions, we can easily and reliably ensure the benchmarked code is inferrable.
  3. If we want to use Finch to generate code but don't want to include Finch as a dependency in our project, we can use @finch_kernel to generate the functions ahead of time and copy and paste the generated code into our project. Consider automating this workflow to keep the kernels up to date!
Finch.@finch_kernelMacro
@finch_kernel [options...] fname(args...) = prgm

Return a definition for a function named fname which executes @finch prgm on the arguments args. args should be a list of variables holding representative argument instances or types.

See also: @finch

source

As an example, the following code generates an spmv kernel definition, evaluates the definition, and then calls the kernel several times.

let
+end

Finch programs are composed using the following syntax:

Symbols are used to represent variables, and their values are taken from the environment. Loops introduce index variables into the scope of their bodies.

Finch uses the types of the arrays and symbolic analysis to discover program optimizations. If B and C are sparse array types, the program will only run over the nonzeros of either.

Semantically, Finch programs execute every iteration. However, Finch can use sparsity information to reliably skip iterations when possible.

options are optional keyword arguments:

See also: @finch_code

source
Finch.@finch_codeMacro

@finch_code [options...] prgm

Return the code that would be executed in order to run a finch program prgm.

See also: @finch

source

Ahead Of Time (@finch_kernel)

While @finch is the recommended way to use Finch, it is also possible to run finch ahead-of-time. The @finch_kernel macro generates a function definition ahead-of-time, which can be evaluated and then called later.

There are several reasons one might want to do this:

  1. If we want to make tweaks to the Finch implementation, we can directly modify the source code of the resulting function.
  2. When benchmarking Finch functions, we can easily and reliably ensure the benchmarked code is inferrable.
  3. If we want to use Finch to generate code but don't want to include Finch as a dependency in our project, we can use @finch_kernel to generate the functions ahead of time and copy and paste the generated code into our project. Consider automating this workflow to keep the kernels up to date!
Finch.@finch_kernelMacro
@finch_kernel [options...] fname(args...) = prgm

Return a definition for a function named fname which executes @finch prgm on the arguments args. args should be a list of variables holding representative argument instances or types.

See also: @finch

source

As an example, the following code generates an spmv kernel definition, evaluates the definition, and then calls the kernel several times.

let
     A = Tensor(Dense(SparseList(Element(0.0))))
     x = Tensor(Dense(Element(0.0)))
     y = Tensor(Dense(Element(0.0)))
@@ -28,4 +28,4 @@
     end
 end
 
-main()
+main() diff --git a/dev/guides/concordization/index.html b/dev/guides/concordization/index.html index fb1a86904..a271c436e 100644 --- a/dev/guides/concordization/index.html +++ b/dev/guides/concordization/index.html @@ -1,2 +1,2 @@ -TODO · Finch.jl
+TODO · Finch.jl
diff --git a/dev/guides/debugging_tips/index.html b/dev/guides/debugging_tips/index.html index afb90fea1..af17a2d06 100644 --- a/dev/guides/debugging_tips/index.html +++ b/dev/guides/debugging_tips/index.html @@ -1,2 +1,2 @@ -TODO · Finch.jl
+TODO · Finch.jl
diff --git a/dev/guides/dimensionalization/index.html b/dev/guides/dimensionalization/index.html index 4296d9f6a..6b81d08df 100644 --- a/dev/guides/dimensionalization/index.html +++ b/dev/guides/dimensionalization/index.html @@ -14,4 +14,4 @@ y .= 0 for i = 1:3 y[~i] += x[i] -end

does not set the dimension of y, and y does not participate in dimensionalization.

In summary, the rules of index dimensionalization are as follows:

The rules of declaration dimensionalization are as follows:

Finch.FinchNotation.DimensionlessType
Dimensionless()

A singleton type representing the lack of a dimension. This is used in place of a dimension when we want to avoid dimensionality checks. In the @finch macro, you can write Dimensionless() with an underscore as for i = _, allowing Finch to pick up the loop bounds from the tensors automatically.

source
+end

does not set the dimension of y, and y does not participate in dimensionalization.

In summary, the rules of index dimensionalization are as follows:

The rules of declaration dimensionalization are as follows:

Finch.FinchNotation.DimensionlessType
Dimensionless()

A singleton type representing the lack of a dimension. This is used in place of a dimension when we want to avoid dimensionality checks. In the @finch macro, you can write Dimensionless() with an underscore as for i = _, allowing Finch to pick up the loop bounds from the tensors automatically.

source
diff --git a/dev/guides/fileio/index.html b/dev/guides/fileio/index.html index 481d95d65..ed460b53d 100644 --- a/dev/guides/fileio/index.html +++ b/dev/guides/fileio/index.html @@ -1,4 +1,4 @@ -FileIO · Finch.jl

Finch Tensor File Input/Output

All of the file formats supported by Finch are listed below. Each format has a corresponding read and write function, and can be selected automatically based on the file extension with the following functions:

Finch.freadFunction
fread(filename::AbstractString)

Read the Finch tensor from a file using a file format determined by the file extension. The following file extensions are supported:

source
Finch.fwriteFunction
fwrite(filename::AbstractString, tns::Finch.Tensor)

Write the Finch tensor to a file using a file format determined by the file extension. The following file extensions are supported:

source
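
For example, a hypothetical round trip (the file name is illustrative; as the sections below note, TensorMarket must be loaded for the .ttx extension, or HDF5 for .bsp.h5):

using Finch, TensorMarket

x = Tensor(SparseList(Element(0.0)), fsprand(10, 0.3))
fwrite("x.ttx", x)   # the extension selects the TensorMarket writer
y = fread("x.ttx")   # reads back into a Finch tensor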

Binsparse Format (.bsp)

Finch supports the most recent revision of the Binsparse binary sparse tensor format, including the v2.0 tensor extension. This is a good option for those who want an efficient way to transfer sparse tensors between supporting libraries and languages. The Binsparse format represents the tensor format as a JSON string in the underlying data container, which can be either HDF5 or a combination of NPY or JSON files. Binsparse arrays are stored 0-indexed.

Finch.bspwriteFunction
bspwrite(::AbstractString, tns)
+FileIO · Finch.jl

Finch Tensor File Input/Output

All of the file formats supported by Finch are listed below. Each format has a corresponding read and write function, and can be selected automatically based on the file extension with the following functions:

Finch.freadFunction
fread(filename::AbstractString)

Read the Finch tensor from a file using a file format determined by the file extension. The following file extensions are supported:

source
Finch.fwriteFunction
fwrite(filename::AbstractString, tns::Finch.Tensor)

Write the Finch tensor to a file using a file format determined by the file extension. The following file extensions are supported:

source

Binsparse Format (.bsp)

Finch supports the most recent revision of the Binsparse binary sparse tensor format, including the v2.0 tensor extension. This is a good option for those who want an efficient way to transfer sparse tensors between supporting libraries and languages. The Binsparse format represents the tensor format as a JSON string in the underlying data container, which can be either HDF5 or a combination of NPY or JSON files. Binsparse arrays are stored 0-indexed.

Finch.bspwriteFunction
bspwrite(::AbstractString, tns)
 bspwrite(::HDF5.File, tns)
-bspwrite(::NPYPath, tns)

Write the Finch tensor to a file using Binsparse file format.

Supported file extensions are:

  • .bsp.h5: HDF5 file format (HDF5 must be loaded)
  • .bspnpy: NumPy and JSON directory format (NPZ must be loaded)
Warning

The Binsparse spec is under development. Additionally, this function may not be fully conformant. Please file bug reports if you see anything amiss.

source
Finch.bspreadFunction

bspread(::AbstractString)
bspread(::HDF5.File)
bspread(::NPYPath)

Read the Binsparse file into a Finch tensor.

Supported file extensions are:

  • .bsp.h5: HDF5 file format (HDF5 must be loaded)
  • .bspnpy: NumPy and JSON directory format (NPZ must be loaded)
Warning

The Binsparse spec is under development. Additionally, this function may not be fully conformant. Please file bug reports if you see anything amiss.

source
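
A hypothetical sketch of a Binsparse round trip (the file name and format are illustrative; HDF5 must be loaded for the .bsp.h5 container):

using Finch, HDF5

A = Tensor(Dense(SparseList(Element(0.0))), fsprand(5, 5, 0.4))
bspwrite("A.bsp.h5", A)   # writes the tensor and its format descriptor to HDF5
B = bspread("A.bsp.h5")   # reads it back into a Finch tensor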

TensorMarket (.mtx, .ttx)

Finch supports the MatrixMarket and TensorMarket formats, which prioritize readability and archivability, storing matrices and tensors in plaintext.

Finch.fttreadFunction
fttread(filename, infoonly=false, retcoord=false)

Read the TensorMarket file into a Finch tensor. The tensor will be dense or COO depending on the format of the file.

TensorMarket must be loaded for this function to be available.

See also: ttread

source

FROSTT (.tns)

Finch supports the FROSTT format for legacy codes that still use it.

Finch.ftnswriteFunction
ftnswrite(filename, tns)

Write a sparse Finch tensor to a FROSTT .tns file.

TensorMarket must be loaded for this function to be available.

Danger

This file format does not record the size or eltype of the tensor, and is provided for archival purposes only.

See also: tnswrite

source
Finch.ftnsreadFunction
ftnsread(filename)

Read the contents of the FROSTT .tns file 'filename' into a Finch COO Tensor.

TensorMarket must be loaded for this function to be available.

Danger

This file format does not record the size or eltype of the tensor, and is provided for archival purposes only.

See also: tnsread

source
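
A hypothetical archival round trip (TensorMarket must be loaded; as the warnings above note, the .tns format records neither the size nor the eltype of the tensor):

using Finch, TensorMarket

A = Tensor(Dense(SparseList(Element(0.0))), fsprand(4, 4, 0.5))
ftnswrite("A.tns", A)   # write the nonzeros in FROSTT text format
B = ftnsread("A.tns")   # read them back into a COO tensor
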
+bspwrite(::NPYPath, tns)

Write the Finch tensor to a file using Binsparse file format.

Supported file extensions are:

  • .bsp.h5: HDF5 file format (HDF5 must be loaded)
  • .bspnpy: NumPy and JSON directory format (NPZ must be loaded)
Warning

The Binsparse spec is under development. Additionally, this function may not be fully conformant. Please file bug reports if you see anything amiss.

source
Finch.bspreadFunction

bspread(::AbstractString)
bspread(::HDF5.File)
bspread(::NPYPath)

Read the Binsparse file into a Finch tensor.

Supported file extensions are:

  • .bsp.h5: HDF5 file format (HDF5 must be loaded)
  • .bspnpy: NumPy and JSON directory format (NPZ must be loaded)
Warning

The Binsparse spec is under development. Additionally, this function may not be fully conformant. Please file bug reports if you see anything amiss.

source

TensorMarket (.mtx, .ttx)

Finch supports the MatrixMarket and TensorMarket formats, which prioritize readability and archivability, storing matrices and tensors in plaintext.

Finch.fttreadFunction
fttread(filename, infoonly=false, retcoord=false)

Read the TensorMarket file into a Finch tensor. The tensor will be dense or COO depending on the format of the file.

TensorMarket must be loaded for this function to be available.

See also: ttread

source

FROSTT (.tns)

Finch supports the FROSTT format for legacy codes that still use it.

Finch.ftnswriteFunction
ftnswrite(filename, tns)

Write a sparse Finch tensor to a FROSTT .tns file.

TensorMarket must be loaded for this function to be available.

Danger

This file format does not record the size or eltype of the tensor, and is provided for archival purposes only.

See also: tnswrite

source
Finch.ftnsreadFunction
ftnsread(filename)

Read the contents of the FROSTT .tns file 'filename' into a Finch COO Tensor.

TensorMarket must be loaded for this function to be available.

Danger

This file format does not record the size or eltype of the tensor, and is provided for archival purposes only.

See also: tnsread

source
diff --git a/dev/guides/finch_language/index.html b/dev/guides/finch_language/index.html index aec874968..a0f7cfc85 100644 --- a/dev/guides/finch_language/index.html +++ b/dev/guides/finch_language/index.html @@ -1,5 +1,5 @@ -The Finch Language · Finch.jl

Finch Notation

Finch programs are written in Julia, but they are not Julia programs. Instead, they are an abstract description of a tensor computation.

Finch programs are blocks of tensor operations, joined by control flow. Finch is an imperative language. The AST is separated into statements and expressions, where statements can modify the state of the program but expressions cannot.

The core Finch expressions are:

And the core Finch statements are:

  • declare e.g. tns .= init
  • assign e.g. lhs[idxs...] <<op>>= rhs
  • loop e.g. for i = _; ... end
  • define e.g. let var = val; ... end
  • sieve e.g. if cond; ... end
  • block e.g. begin ... end
Finch.FinchNotation.indexConstant
index(name)

Finch AST expression for an index named name. Each index must be quantified by a corresponding loop which iterates over all values of the index.

source
Finch.FinchNotation.accessConstant
access(tns, mode, idx...)

Finch AST expression representing the value of tensor tns at the indices idx.... The mode differentiates between reads or updates and whether the access is in-place.

source
Finch.FinchNotation.defineConstant
define(lhs, rhs, body)

Finch AST statement that defines lhs as having the value rhs in body. A new scope is introduced to evaluate body.

source
Finch.FinchNotation.assignConstant
assign(lhs, op, rhs)

Finch AST statement that updates the value of lhs to op(lhs, rhs). Overwriting is accomplished with the function overwrite(lhs, rhs) = rhs.

source
Finch.FinchNotation.loopConstant
loop(idx, ext, body)

Finch AST statement that runs body for each value of idx in ext. Tensors in body must have ranges that agree with ext. A new scope is introduced to evaluate body.

source
Finch.FinchNotation.sieveConstant
sieve(cond, body)

Finch AST statement that only executes body if cond is true. A new scope is introduced to evaluate body.

source

Scoping

Finch programs are scoped. Scopes contain variable definitions and tensor declarations. Loops and sieves introduce new scopes. The following program has four scopes, each of which is numbered to the left of the statements it contains.

@finch begin
+The Finch Language · Finch.jl

Finch Notation

Finch programs are written in Julia, but they are not Julia programs. Instead, they are an abstract description of a tensor computation.

Finch programs are blocks of tensor operations, joined by control flow. Finch is an imperative language. The AST is separated into statements and expressions, where statements can modify the state of the program but expressions cannot.

The core Finch expressions are:

And the core Finch statements are:

  • declare e.g. tns .= init
  • assign e.g. lhs[idxs...] <<op>>= rhs
  • loop e.g. for i = _; ... end
  • define e.g. let var = val; ... end
  • sieve e.g. if cond; ... end
  • block e.g. begin ... end
Finch.FinchNotation.indexConstant
index(name)

Finch AST expression for an index named name. Each index must be quantified by a corresponding loop which iterates over all values of the index.

source
Finch.FinchNotation.accessConstant
access(tns, mode, idx...)

Finch AST expression representing the value of tensor tns at the indices idx.... The mode differentiates between reads or updates and whether the access is in-place.

source
Finch.FinchNotation.defineConstant
define(lhs, rhs, body)

Finch AST statement that defines lhs as having the value rhs in body. A new scope is introduced to evaluate body.

source
Finch.FinchNotation.assignConstant
assign(lhs, op, rhs)

Finch AST statement that updates the value of lhs to op(lhs, rhs). Overwriting is accomplished with the function overwrite(lhs, rhs) = rhs.

source
Finch.FinchNotation.loopConstant
loop(idx, ext, body)

Finch AST statement that runs body for each value of idx in ext. Tensors in body must have ranges that agree with ext. A new scope is introduced to evaluate body.

source
Finch.FinchNotation.sieveConstant
sieve(cond, body)

Finch AST statement that only executes body if cond is true. A new scope is introduced to evaluate body.

source

Scoping

Finch programs are scoped. Scopes contain variable definitions and tensor declarations. Loops and sieves introduce new scopes. The following program has four scopes, each of which is numbered to the left of the statements it contains.

@finch begin
 1   y .= 0
 1   for j = _
 1   2   t .= 0
@@ -10,6 +10,6 @@
 1   2   4   y[i] += A[i, j] * t[]
 1   2   end
 1   end
-end

Variables refer to their defined values in the innermost containing scope. If variables are undefined, they are assumed to have global scope (they may come from the surrounding program).

Tensor Lifecycle

Tensors have two modes: Read and Update. Tensors in read mode may be read, but not updated. Tensors in update mode may be updated, but not read. A tensor declaration initializes and possibly resizes the tensor, setting it to update mode. Also, Finch will automatically change the mode of tensors as they are used. However, tensors may only change their mode within scopes that contain their declaration. If a tensor has not been declared, it is assumed to have global scope.

Tensor declaration is different than variable definition. Declaring a tensor initializes the memory (usually to zero) and sets the tensor to update mode. Defining a tensor simply gives a name to that memory. A tensor may be declared multiple times, but it may only be defined once.

Tensors are assumed to be in read mode when they are defined. Tensors must enter and exit scope in read mode. Finch inserts freeze and thaw statements to ensure that tensors are in the correct mode. Freezing a tensor prevents further updates and allows reads. Thawing a tensor allows further updates and prevents reads.

Tensor lifecycle statements consist of:

Dimensionalization

Finch loops have dimensions. Accessing a tensor with an unmodified loop index "hints" that the loop should have the same dimension as the corresponding axis of the tensor. Finch will automatically dimensionalize loops that are hinted by tensor accesses. One may refer to the automatically determined dimension using a variable named _ or :.

Similarly, tensor declarations also set the dimensions of a tensor. Accessing a tensor with an unmodified loop index "hints" that the tensor axis should have the same dimension as the corresponding loop. Finch will automatically dimensionalize declarations based on all updates up to the first read.

Array Combinators

Finch includes several array combinators that modify the behavior of arrays. For example, the OffsetArray type wraps an existing array, but shifts its indices. The PermissiveArray type wraps an existing array, but allows out-of-bounds reads and writes. When an array is accessed out of bounds, it produces Missing.

Array combinators introduce some complexity to the tensor lifecycle, as wrappers may contain multiple or different arrays that could potentially be in different modes. Any array combinators used in a tensor access must reference a single global variable which holds the root array. The root array is the single array that gets declared, and changes modes from read to update, or vice versa.

Fancy Indexing

Finch supports arbitrary indexing of arrays, but certain indexing operations have first class support through array combinators. Before dimensionalization, the following transformations are performed:

    A[i + c] =>        OffsetArray(A, c)[i]
+end

Variables refer to their defined values in the innermost containing scope. If variables are undefined, they are assumed to have global scope (they may come from the surrounding program).

Tensor Lifecycle

Tensors have two modes: Read and Update. Tensors in read mode may be read, but not updated. Tensors in update mode may be updated, but not read. A tensor declaration initializes and possibly resizes the tensor, setting it to update mode. Also, Finch will automatically change the mode of tensors as they are used. However, tensors may only change their mode within scopes that contain their declaration. If a tensor has not been declared, it is assumed to have global scope.

Tensor declaration is different than variable definition. Declaring a tensor initializes the memory (usually to zero) and sets the tensor to update mode. Defining a tensor simply gives a name to that memory. A tensor may be declared multiple times, but it may only be defined once.

Tensors are assumed to be in read mode when they are defined. Tensors must enter and exit scope in read mode. Finch inserts freeze and thaw statements to ensure that tensors are in the correct mode. Freezing a tensor prevents further updates and allows reads. Thawing a tensor allows further updates and prevents reads.

Tensor lifecycle statements consist of:

Dimensionalization

Finch loops have dimensions. Accessing a tensor with an unmodified loop index "hints" that the loop should have the same dimension as the corresponding axis of the tensor. Finch will automatically dimensionalize loops that are hinted by tensor accesses. One may refer to the automatically determined dimension using a variable named _ or :.

Similarly, tensor declarations also set the dimensions of a tensor. Accessing a tensor with an unmodified loop index "hints" that the tensor axis should have the same dimension as the corresponding loop. Finch will automatically dimensionalize declarations based on all updates up to the first read.

Array Combinators

Finch includes several array combinators that modify the behavior of arrays. For example, the OffsetArray type wraps an existing array, but shifts its indices. The PermissiveArray type wraps an existing array, but allows out-of-bounds reads and writes. When an array is accessed out of bounds, it produces Missing.

Array combinators introduce some complexity to the tensor lifecycle, as wrappers may contain multiple or different arrays that could potentially be in different modes. Any array combinators used in a tensor access must reference a single global variable which holds the root array. The root array is the single array that gets declared, and changes modes from read to update, or vice versa.

Fancy Indexing

Finch supports arbitrary indexing of arrays, but certain indexing operations have first class support through array combinators. Before dimensionalization, the following transformations are performed:

    A[i + c] =>        OffsetArray(A, c)[i]
     A[i + j] =>      ToeplitzArray(A, 1)[i, j]
-       A[~i] => PermissiveArray(A, true)[i]

Note that these transformations may change the behavior of dimensionalization, since they often result in unmodified loop indices (the index i will participate in dimensionalization, but an index expression like i + 1 will not).

+ A[~i] => PermissiveArray(A, true)[i]

Note that these transformations may change the behavior of dimensionalization, since they often result in unmodified loop indices (the index i will participate in dimensionalization, but an index expression like i + 1 will not).

diff --git a/dev/guides/index_sugar/index.html b/dev/guides/index_sugar/index.html index 9eeadd058..5d220ad02 100644 --- a/dev/guides/index_sugar/index.html +++ b/dev/guides/index_sugar/index.html @@ -1,7 +1,7 @@ -Index Sugar · Finch.jl

Index Sugar and Tensor Modifiers

In Finch, expressions like x[i + 1] are compiled using tensor modifiers, like offset(x, 1)[i]. The user can construct tensor modifiers directly, e.g. offset(x, 1), or implicitly using the syntax x[i + 1]. Recognizable index expressions are converted to tensor modifiers before dimensionalization, so that the modified tensor will participate in dimensionalization.

While tensor modifiers may change the behavior of a tensor, they reference their parent tensor as the root tensor. Modified tensors are not understood as distinct from their roots. For example, all accesses to the root tensor must obey lifecycle and dimensionalization rules. Additionally, root tensors which are themselves modifiers are unwrapped at the beginning of the program, so that modifiers are not obscured and the new root tensor is not a modifier.

The following table lists the recognized index expressions and their equivalent tensor expressions, where i is an index, a, b are constants, p is an iteration protocol, and x is an expression:

Original Expression => Transformed Expression
           A[i + a] => offset(A, a)[i]
           A[i + x] => toeplitz(A, 1)[i, x]
        A[(a:b)(i)] => window(A, a:b)[i]
           A[a * i] => scale(A, (a,))[i]
           A[i * x] => products(A, 1)[i, x]
              A[~i] => permissive(A)[i]
            A[p(i)] => protocolize(A, p)[i]
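
As a small illustration of the A[~i] row above (mirroring the example from the dimensionalization guide, with illustrative sizes), the permissive access exempts y from dimension checks, so the loop bounds come from x alone:

using Finch

x = Tensor(Dense(Element(0.0)), [1.0, 2.0, 3.0])
y = Tensor(Dense(Element(0.0)), zeros(5))

@finch begin
    y .= 0
    for i = _
        y[~i] += x[i]   # y[~i] is lowered to permissive(y)[i]
    end
end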

Each of these tensor modifiers is described below:

Finch.offsetFunction
offset(tns, delta...)

Create an OffsetArray such that offset(tns, delta...)[i...] == tns[i .+ delta...]. The dimensions declared by an OffsetArray are shifted, so that size(offset(tns, delta...)) == size(tns) .+ delta.

source
Finch.toeplitzFunction
toeplitz(tns, dim)

Create a ToeplitzArray such that

    Toeplitz(tns, dim)[i...] == tns[i[1:dim-1]..., i[dim] + i[dim + 1], i[dim + 2:end]...]

The ToeplitzArray can be thought of as adding a dimension that shifts another dimension of the original tensor.

source
Finch.windowFunction
window(tns, dims)

Create a WindowedArray which represents a view into another tensor

    window(tns, dims)[i...] == tns[dim[1][i], dim[2][i], ...]

The windowed array restricts the new dimension to the dimension of valid indices of each dim. The dims may also be nothing to represent a full view of the underlying dimension.

source
Finch.scaleFunction
scale(tns, delta...)

Create a ScaleArray such that scale(tns, delta...)[i...] == tns[i .* delta...]. The dimensions declared by a ScaleArray are scaled, so that size(scale(tns, delta...)) == size(tns) .* delta. This is only supported on tensors with real-valued dimensions.

source
Finch.productsFunction
products(tns, dim)

Create a ProductArray such that

    products(tns, dim)[i...] == tns[i[1:dim-1]..., i[dim] * i[dim + 1], i[dim + 2:end]...]

This is like toeplitz but with times instead of plus.

source
Finch.permissiveFunction
permissive(tns, dims...)

Create a PermissiveArray where permissive(tns, dims...)[i...] is missing if i[n] is not in the bounds of tns when dims[n] is true. This wrapper allows all permissive dimensions to be exempt from dimension checks, and is useful when we need to access an array out of bounds, or for padding. More formally,

    permissive(tns, dims...)[i...] =
+Index Sugar · Finch.jl

Index Sugar and Tensor Modifiers

In Finch, expressions like x[i + 1] are compiled using tensor modifiers, like offset(x, 1)[i]. The user can construct tensor modifiers directly, e.g. offset(x, 1), or implicitly using the syntax x[i + 1]. Recognizable index expressions are converted to tensor modifiers before dimensionalization, so that the modified tensor will participate in dimensionalization.

While tensor modifiers may change the behavior of a tensor, they reference their parent tensor as the root tensor. Modified tensors are not understood as distinct from their roots. For example, all accesses to the root tensor must obey lifecycle and dimensionalization rules. Additionally, root tensors which are themselves modifiers are unwrapped at the beginning of the program, so that modifiers are not obscured and the new root tensor is not a modifier.

The following table lists the recognized index expressions and their equivalent tensor expressions, where i is an index, a, b are constants, p is an iteration protocol, and x is an expression:

Original Expression => Transformed Expression
           A[i + a] => offset(A, a)[i]
           A[i + x] => toeplitz(A, 1)[i, x]
        A[(a:b)(i)] => window(A, a:b)[i]
           A[a * i] => scale(A, (a,))[i]
           A[i * x] => products(A, 1)[i, x]
              A[~i] => permissive(A)[i]
            A[p(i)] => protocolize(A, p)[i]

Each of these tensor modifiers is described below:

Finch.offsetFunction
offset(tns, delta...)

Create an OffsetArray such that offset(tns, delta...)[i...] == tns[i .+ delta...]. The dimensions declared by an OffsetArray are shifted, so that size(offset(tns, delta...)) == size(tns) .+ delta.

source
Finch.toeplitzFunction
toeplitz(tns, dim)

Create a ToeplitzArray such that

    Toeplitz(tns, dim)[i...] == tns[i[1:dim-1]..., i[dim] + i[dim + 1], i[dim + 2:end]...]

The ToeplitzArray can be thought of as adding a dimension that shifts another dimension of the original tensor.

source
Finch.windowFunction
window(tns, dims)

Create a WindowedArray which represents a view into another tensor

    window(tns, dims)[i...] == tns[dim[1][i], dim[2][i], ...]

The windowed array restricts the new dimension to the dimension of valid indices of each dim. The dims may also be nothing to represent a full view of the underlying dimension.

source
Finch.scaleFunction
scale(tns, delta...)

Create a ScaleArray such that scale(tns, delta...)[i...] == tns[i .* delta...]. The dimensions declared by a ScaleArray are scaled, so that size(scale(tns, delta...)) == size(tns) .* delta. This is only supported on tensors with real-valued dimensions.

source
Finch.productsFunction
products(tns, dim)

Create a ProductArray such that

    products(tns, dim)[i...] == tns[i[1:dim-1]..., i[dim] * i[dim + 1], i[dim + 2:end]...]

This is like toeplitz but with times instead of plus.

source
Finch.permissiveFunction
permissive(tns, dims...)

Create a PermissiveArray where permissive(tns, dims...)[i...] is missing if i[n] is not in the bounds of tns when dims[n] is true. This wrapper allows all permissive dimensions to be exempt from dimension checks, and is useful when we need to access an array out of bounds, or for padding. More formally,

    permissive(tns, dims...)[i...] =
         if any(n -> dims[n] && !(i[n] in axes(tns)[n]))
             missing
         else
             tns[i...]
-        end
source
Finch.protocolizeFunction
protocolize(tns, protos...)

Create a ProtocolizedArray that accesses dimension n with protocol protos[n], if protos[n] is not nothing. See the documentation for Iteration Protocols for more information. For example, to gallop along the inner dimension of a matrix A, we write A[gallop(i), j], which becomes protocolize(A, gallop, nothing)[i, j].

source
+ end
source
Finch.protocolizeFunction
protocolize(tns, protos...)

Create a ProtocolizedArray that accesses dimension n with protocol protos[n], if protos[n] is not nothing. See the documentation for Iteration Protocols for more information. For example, to gallop along the inner dimension of a matrix A, we write A[gallop(i), j], which becomes protocolize(A, gallop, nothing)[i, j].

source
diff --git a/dev/guides/interoperability/index.html b/dev/guides/interoperability/index.html index 013efd351..9355207cb 100644 --- a/dev/guides/interoperability/index.html +++ b/dev/guides/interoperability/index.html @@ -56,4 +56,4 @@ ├─ [:, 2]: SparseList (0.0) [1:CIndex{Int64}(4)] └─ [:, 3]: SparseList (0.0) [1:CIndex{Int64}(4)] ├─ [CIndex{Int64}(1)]: 4.4 - └─ [CIndex{Int64}(3)]: 5.5

We can also convert between representations by copying to or from CIndex fibers.

+ └─ [CIndex{Int64}(3)]: 5.5

We can also convert between representations by copying to or from CIndex fibers.

diff --git a/dev/guides/iteration_protocols/index.html b/dev/guides/iteration_protocols/index.html index a60d16f9d..124c53ccb 100644 --- a/dev/guides/iteration_protocols/index.html +++ b/dev/guides/iteration_protocols/index.html @@ -1,2 +1,2 @@ -Iteration Protocols · Finch.jl

Iteration Protocols

Finch is a flexible tensor compiler with many ways to iterate over the same data. For example, consider the case where we are intersecting two sparse vectors x[i] and y[i]. By default, we would iterate over all of the nonzeros of each vector. However, if we want to skip over the nonzeros in y based on the nonzeros in x, we could declare the tensor x as the leader tensor with an x[gallop(i)] protocol. When x leads the iteration, the generated code uses the nonzeros of x as an outer loop and the nonzeros of y as an inner loop. If we know that the nonzero data structure of y supports efficient random access, we might ask to iterate over y with a y[follow(i)] protocol, where we look up each value of y[i] only when x[i] is nonzero.
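
For instance, a minimal sketch of the intersection described above (the formats and sizes are illustrative): x is declared the leader with gallop, while y keeps the default walk protocol; on formats with efficient random access, y[i] could instead be written y[follow(i)].

using Finch

x = Tensor(SparseList(Element(0.0)), fsprand(1000, 0.01))
y = Tensor(SparseList(Element(0.0)), fsprand(1000, 0.01))
s = Scalar(0.0)

@finch begin
    s .= 0
    for i = _
        s[] += x[gallop(i)] * y[i]   # x leads the intersection
    end
end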

Finch supports several iteration protocols, documented below. Note that not all formats support all protocols; consult the documentation for each format to determine which protocols are supported.

Finch.FinchNotation.followFunction
follow(i)

The follow protocol ignores the structure of the tensor. By itself, the follow protocol iterates over each value of the tensor in order, looking it up with random access. The follow protocol may specialize on e.g. the zero value of the tensor, but does not specialize on the structure of the tensor. This enables efficient random access and avoids large code sizes.

source
Finch.FinchNotation.walkFunction
walk(i)

The walk protocol usually iterates over each pattern element of a tensor in order. Note that the walk protocol "imposes" the structure of its argument on the kernel, so that we specialize the kernel to the structure of the tensor.

source
Finch.FinchNotation.gallopFunction
gallop(i)

The gallop protocol iterates over each pattern element of a tensor, leading the iteration and superseding the priority of other tensors. Mutual leading is possible, where we fast-forward to the largest step between either leader.

source
Finch.FinchNotation.extrudeFunction
extrude(i)

The extrude protocol declares that the tensor update happens in order and only once, so that reduction loops occur below the extrude loop. It is not usually necessary to declare an extrude protocol, but it is used internally to reason about tensor format requirements.

source
Finch.FinchNotation.laminateFunction
laminate(i)

The laminate protocol declares that the tensor update may happen out of order and multiple times. It is not usually necessary to declare a laminate protocol, but it is used internally to reason about tensor format requirements.

source
+Iteration Protocols · Finch.jl

Iteration Protocols

Finch is a flexible tensor compiler with many ways to iterate over the same data. For example, consider the case where we are intersecting two sparse vectors x[i] and y[i]. By default, we would iterate over all of the nonzeros of each vector. However, if we want to skip over the nonzeros in y based on the nonzeros in x, we could declare the tensor x as the leader tensor with an x[gallop(i)] protocol. When x leads the iteration, the generated code uses the nonzeros of x as an outer loop and the nonzeros of y as an inner loop. If we know that the nonzero data structure of y supports efficient random access, we might ask to iterate over y with a y[follow(i)] protocol, where we look up each value of y[i] only when x[i] is nonzero.

Finch supports several iteration protocols, documented below. Note that not all formats support all protocols; consult the documentation for each format to determine which protocols are supported.

Finch.FinchNotation.followFunction
follow(i)

The follow protocol ignores the structure of the tensor. By itself, the follow protocol iterates over each value of the tensor in order, looking it up with random access. The follow protocol may specialize on e.g. the zero value of the tensor, but does not specialize on the structure of the tensor. This enables efficient random access and avoids large code sizes.

source
Finch.FinchNotation.walkFunction
walk(i)

The walk protocol usually iterates over each pattern element of a tensor in order. Note that the walk protocol "imposes" the structure of its argument on the kernel, so that we specialize the kernel to the structure of the tensor.

source
Finch.FinchNotation.gallopFunction
gallop(i)

The gallop protocol iterates over each pattern element of a tensor, leading the iteration and superseding the priority of other tensors. Mutual leading is possible, where we fast-forward to the largest step between either leader.

source
Finch.FinchNotation.extrudeFunction
extrude(i)

The extrude protocol declares that the tensor update happens in order and only once, so that reduction loops occur below the extrude loop. It is not usually necessary to declare an extrude protocol, but it is used internally to reason about tensor format requirements.

source
Finch.FinchNotation.laminateFunction
laminate(i)

The laminate protocol declares that the tensor update may happen out of order and multiple times. It is not usually necessary to declare a laminate protocol, but it is used internally to reason about tensor format requirements.

source
diff --git a/dev/guides/mask_sugar/index.html b/dev/guides/mask_sugar/index.html index 377ac8eea..cd831058c 100644 --- a/dev/guides/mask_sugar/index.html +++ b/dev/guides/mask_sugar/index.html @@ -8,4 +8,4 @@ end end

to compile to something like

    for i = 1:n
         s[] += A[i, i]
-    end

There are several mask tensors and syntaxes available, summarized in the following table where i, j are indices:

Expression => Transformed Expression
     i < j => UpTriMask()[i, j - 1]
    i <= j => UpTriMask()[i, j]
     i > j => LoTriMask()[i, j + 1]
    i >= j => LoTriMask()[i, j]
    i == j => DiagMask()[i, j]
    i != j => !(DiagMask()[i, j])

Note that either i or j may be expressions, so long as the expression is constant with respect to the loop over the index.
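
For instance, a minimal sketch (the matrix is illustrative) that sums the strict upper triangle of A; the comparison i < j is rewritten to an UpTriMask access as in the table above, so the generated loop can skip the lower triangle:

using Finch

A = Tensor(Dense(Dense(Element(0.0))), [1.0 2.0; 3.0 4.0])
s = Scalar(0.0)

@finch begin
    s .= 0
    for j = _, i = _
        if i < j
            s[] += A[i, j]
        end
    end
end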

The mask tensors are described below:

Finch.uptrimaskConstant
uptrimask

A mask for an upper triangular tensor, uptrimask[i, j] = i <= j. Note that this specializes each column for the cases where i <= j and i > j.

source
Finch.lotrimaskConstant
lotrimask

A mask for a lower triangular tensor, lotrimask[i, j] = i >= j. Note that this specializes each column for the cases where i < j and i >= j.

source
Finch.diagmaskConstant
diagmask

A mask for a diagonal tensor, diagmask[i, j] = i == j. Note that this specializes each column for the cases where i < j, i == j, and i > j.

source
Finch.bandmaskConstant
bandmask

A mask for a banded tensor, bandmask[i, j, k] = j <= i <= k. Note that this specializes each column for the cases where i < j, j <= i <= k, and k < i.

source
Finch.chunkmaskFunction
chunkmask(b)

A mask for a chunked tensor, chunkmask[i, j] = b * (j - 1) < i <= b * j. Note that this specializes each column for the cases where i < b * (j - 1), b * (j - 1) < i <= b * j, and b * j < i.
source
+ end

There are several mask tensors and syntaxes available, summarized in the following table where i, j are indices:

ExpressionTransformed Expression
i < jUpTriMask()[i, j - 1]
i <= jUpTriMask()[i, j]
i > jLoTriMask()[i, j + 1]
i >= jLoTriMask()[i, j]
i == jDiagMask()[i, j]
i != j!(DiagMask()[i, j])

Note that either i or j may be expressions, so long as the expression is constant with respect to the loop over the index.

The mask tensors are described below:

Finch.uptrimaskConstant
uptrimask

A mask for an upper triangular tensor, uptrimask[i, j] = i <= j. Note that this specializes each column for the cases where i <= j and i > j.

source
Finch.lotrimaskConstant
lotrimask

A mask for a lower triangular tensor, lotrimask[i, j] = i >= j. Note that this specializes each column for the cases where i < j and i >= j.

source
Finch.diagmaskConstant
diagmask

A mask for a diagonal tensor, diagmask[i, j] = i == j. Note that this specializes each column for the cases where i < j, i == j, and i > j.

source
Finch.bandmaskConstant
bandmask

A mask for a banded tensor, bandmask[i, j, k] = j <= i <= k. Note that this specializes each column for the cases where i < j, j <= i <= k, and k < i.

source
Finch.chunkmaskFunction
chunkmask(b)

A mask for a chunked tensor, chunkmask[i, j] = b * (j - 1) < i <= b * j. Note that this specializes each column for the cases where i <= b * (j - 1), b * (j - 1) < i <= b * j, and b * j < i.
source
diff --git a/dev/guides/optimization_tips/index.html b/dev/guides/optimization_tips/index.html index 4e6133f07..8cc98f3dd 100644 --- a/dev/guides/optimization_tips/index.html +++ b/dev/guides/optimization_tips/index.html @@ -213,4 +213,4 @@ C.val = C_val (C = C,) end -

Type Stability

Julia code runs fastest when the compiler can infer the types of all intermediate values. Finch does not check that the generated code is type-stable. In situations where tensors have nonuniform index or element types, or the computation itself might involve multiple types, one should check that the output of @finch_kernel code is type-stable with @code_warntype.
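For instance, one might inspect a generated sparse matrix-vector multiply kernel as follows (a sketch; the kernel name spmv and the formats are illustrative):

using Finch

A = Tensor(Dense(SparseList(Element(0.0))))
x = Tensor(Dense(Element(0.0)))
y = Tensor(Dense(Element(0.0)))

# Generate and define the kernel, then check the definition for type instabilities.
eval(@finch_kernel function spmv(y, A, x)
    y .= 0
    for j = _, i = _
        y[i] += A[i, j] * x[j]
    end
end)

@code_warntype spmv(y, A, x)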

+

Type Stability

Julia code runs fastest when the compiler can infer the types of all intermediate values. Finch does not check that the generated code is type-stable. In situations where tensors have nonuniform index or element types, or the computation itself might involve multiple types, one should check that the output of @finch_kernel code is type-stable with @code_warntype.

diff --git a/dev/guides/parallelization/index.html b/dev/guides/parallelization/index.html index dc1ab79ec..2c5ed0c3a 100644 --- a/dev/guides/parallelization/index.html +++ b/dev/guides/parallelization/index.html @@ -4,7 +4,7 @@ └─ Dense [1:3] ├─ [1]: 1.0 ├─ [2]: 2.0 - └─ [3]: 3.0source
Finch.MutexLevelType
MutexLevel{Val, Lvl}()

The Mutex level protects the level directly below it with atomics.

Each position in the level below the Mutex level is protected by a lock.

julia> Tensor(Dense(Mutex(Element(0.0))), [1, 2, 3])
+   └─ [3]: 3.0
source
Finch.MutexLevelType
MutexLevel{Val, Lvl}()

The Mutex level protects the level directly below it with atomics.

Each position in the level below the Mutex level is protected by a lock.

julia> Tensor(Dense(Mutex(Element(0.0))), [1, 2, 3])
 3-Tensor
 └─ Dense [1:3]
    ├─ [1]: Mutex ->
@@ -12,7 +12,7 @@
    ├─ [2]: Mutex ->
    │  └─ 2.0
    └─ [3]: Mutex ->
-      └─ 3.0
source
Finch.SeparateLevelType
SeparateLevel{Lvl, [Val]}()

A subfiber of a Separate level is a separate tensor of type Lvl, in its own memory space.

Each sublevel is stored in a vector of type Val with eltype(Val) = Lvl.

julia> Tensor(Dense(Separate(Element(0.0))), [1, 2, 3])
+      └─ 3.0
source
Finch.SeparateLevelType
SeparateLevel{Lvl, [Val]}()

A subfiber of a Separate level is a separate tensor of type Lvl, in its own memory space.

Each sublevel is stored in a vector of type Val with eltype(Val) = Lvl.

julia> Tensor(Dense(Separate(Element(0.0))), [1, 2, 3])
 3-Tensor
 └─ Dense [1:3]
    ├─ [1]: Pointer ->
@@ -20,4 +20,4 @@
    ├─ [2]: Pointer ->
    │  └─ 2.0
    └─ [3]: Pointer ->
-      └─ 3.0
source

Parallel Loops

A loop can be run in parallel with a parallel dimension. A dimension can be wrapped in the parallel() modifier to indicate that it should run in parallel.
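For example (a minimal sketch; sizes and names are illustrative), a loop whose iterations write to disjoint outputs can be parallelized directly:

using Finch

x = Tensor(Dense(Element(0.0)), rand(1_000))
y = Tensor(Dense(Element(0.0)), 1_000)

# Each iteration writes only its own y[i], so the iterations are independent.
@finch begin
    y .= 0
    for i = parallel(_)
        y[i] = x[i] + 1
    end
end

Writes that might collide across iterations instead need an output format that supports concurrent updates, such as the Mutex level described above.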

Finch.parallelFunction
parallel(ext, device=CPU(nthreads()))

A dimension ext that is parallelized over device. The ext field is usually _, or dimensionless, but can be any standard dimension argument.

source
Finch.CPUType
CPU(n)

A device that represents a CPU with n threads.

source
Finch.SerialType
Serial()

A device that represents a serial CPU execution.

source
+ └─ 3.0source

Parallel Loops

A loop can be run in parallel with a parallel dimension. A dimension can be wrapped in the parallel() modifier to indicate that it should run in parallel.

Finch.parallelFunction
parallel(ext, device=CPU(nthreads()))

A dimension ext that is parallelized over device. The ext field is usually _, or dimensionless, but can be any standard dimension argument.

source
Finch.CPUType
CPU(n)

A device that represents a CPU with n threads.

source
Finch.SerialType
Serial()

A device that represents a serial CPU execution.

source
diff --git a/dev/guides/sparse_utils/index.html b/dev/guides/sparse_utils/index.html index 9e234ef75..d126ad54f 100644 --- a/dev/guides/sparse_utils/index.html +++ b/dev/guides/sparse_utils/index.html @@ -1,5 +1,5 @@ -Sparse and Structured Utilities · Finch.jl

Sparse Array Utilities

Sparse Constructors

In addition to the Tensor constructor, Finch provides a number of convenience constructors for common tensor types. For example, the spzeros and sprand functions have fspzeros and fsprand counterparts that return Finch tensors. We can also construct a sparse COO Tensor from a list of indices and values using the fsparse function.

Finch.fsparseFunction
fsparse(I::Tuple, V,[ M::Tuple, combine]; fill_value=zero(eltype(V)))

Create a sparse COO tensor S such that size(S) == M and S[(i[q] for i = I)...] = V[q]. The combine function is used to combine duplicates. If M is not specified, it is set to map(maximum, I). If the combine function is not supplied, combine defaults to + unless the elements of V are Booleans in which case combine defaults to |. All elements of I must satisfy 1 <= I[n][q] <= M[n]. Numerical zeros are retained as structural nonzeros; to drop numerical zeros, use dropzeros!.

See also: sparse

Examples

julia> I = ( [1, 2, 3], [1, 2, 3], [1, 2, 3]);

julia> V = [1.0; 2.0; 3.0];

julia> fsparse(I, V)
SparseCOO (0.0) [1:3×1:3×1:3]
│ │ │
└─└─└─[1, 1, 1] [2, 2, 2] [3, 3, 3]
      1.0      2.0      3.0

source
Finch.fsparse!Function
fsparse!(I..., V,[ M::Tuple])

Like fsparse, but the coordinates must be sorted and unique, and memory is reused.

source
Finch.fsprandFunction
fsprand([rng],[type], M..., p, [rfn])

Create a random sparse tensor of size M in COO format. There are two cases: if p is a floating point number, the probability of any element being nonzero is independently given by p (and hence the expected density of nonzeros is also p); if p is an integer, exactly p nonzeros are distributed uniformly at random throughout the tensor (and hence the density of nonzeros is exactly p / prod(M)). Nonzero values are sampled from the distribution specified by rfn and have the type type. The uniform distribution is used in case rfn is not specified. The optional rng argument specifies a random number generator.

See also: sprand (https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.sprand)

Examples

julia> fsprand(Bool, 3, 3, 0.5)
+Sparse and Structured Utilities · Finch.jl

Sparse Array Utilities

Sparse Constructors

In addition to the Tensor constructor, Finch provides a number of convenience constructors for common tensor types. For example, the spzeros and sprand functions have fspzeros and fsprand counterparts that return Finch tensors. We can also construct a sparse COO Tensor from a list of indices and values using the fsparse function.

Finch.fsparseFunction
fsparse(I::Tuple, V,[ M::Tuple, combine]; fill_value=zero(eltype(V)))

Create a sparse COO tensor S such that size(S) == M and S[(i[q] for i = I)...] = V[q]. The combine function is used to combine duplicates. If M is not specified, it is set to map(maximum, I). If the combine function is not supplied, combine defaults to + unless the elements of V are Booleans in which case combine defaults to |. All elements of I must satisfy 1 <= I[n][q] <= M[n]. Numerical zeros are retained as structural nonzeros; to drop numerical zeros, use dropzeros!.

See also: sparse

Examples

julia> I = ( [1, 2, 3], [1, 2, 3], [1, 2, 3]);

julia> V = [1.0; 2.0; 3.0];

julia> fsparse(I, V)
SparseCOO (0.0) [1:3×1:3×1:3]
│ │ │
└─└─└─[1, 1, 1] [2, 2, 2] [3, 3, 3]
      1.0      2.0      3.0

source
Finch.fsparse!Function
fsparse!(I..., V,[ M::Tuple])

Like fsparse, but the coordinates must be sorted and unique, and memory is reused.

source
Finch.fsprandFunction
fsprand([rng],[type], M..., p, [rfn])

Create a random sparse tensor of size M in COO format. There are two cases: if p is a floating point number, the probability of any element being nonzero is independently given by p (and hence the expected density of nonzeros is also p); if p is an integer, exactly p nonzeros are distributed uniformly at random throughout the tensor (and hence the density of nonzeros is exactly p / prod(M)). Nonzero values are sampled from the distribution specified by rfn and have the type type. The uniform distribution is used in case rfn is not specified. The optional rng argument specifies a random number generator.

See also: sprand (https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.sprand)

Examples

julia> fsprand(Bool, 3, 3, 0.5)
 SparseCOO (false) [1:3,1:3]
 ├─├─[1, 1]: true
 ├─├─[3, 1]: true
@@ -11,13 +11,13 @@
 SparseCOO (0.0) [1:2,1:2,1:2]
 ├─├─├─[2, 2, 1]: 0.6478553157718558
 ├─├─├─[1, 1, 2]: 0.996665291437684
-├─├─├─[2, 1, 2]: 0.7491940599574348
source
Finch.fspzerosFunction
fspzeros([type], M...)

Create a zero tensor of size M, with elements of type type. The tensor is in COO format.

See also: spzeros (https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.spzeros)

Examples

julia> fspzeros(Bool, 3, 3)
+├─├─├─[2, 1, 2]: 0.7491940599574348
source
Finch.fspzerosFunction
fspzeros([type], M...)

Create a zero tensor of size M, with elements of type type. The tensor is in COO format.

See also: spzeros (https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.spzeros)

Examples

julia> fspzeros(Bool, 3, 3)
 3×3-Tensor
 └─ SparseCOO{2} (false) [:,1:3]
 
 julia> fspzeros(Float64, 2, 2, 2)
 2×2×2-Tensor
-└─ SparseCOO{3} (0.0) [:,:,1:2]
source
Finch.ffindnzFunction
ffindnz(arr)

Return the nonzero elements of arr, as Finch understands arr. Returns (I..., V), where I are the coordinate vectors, one for each mode of arr, and V is a vector of corresponding nonzero values, which can be passed to fsparse.

See also: findnz (https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.findnz)

source

Fill Values

Finch tensors support an arbitrary "background" value for sparse arrays. While most arrays use 0 as the background value, this is not always the case. For example, a sparse array of Int might use typemin(Int) as the background value. The default function returns the background value of a tensor. If you ever want to change the background value of an existing array, you can use the set_fill_value! function. The countstored function returns the number of stored elements in a tensor, and calling pattern! on a tensor returns a tensor which is true wherever the original tensor stores a value. Note that countstored doesn't always return the number of non-zero elements in a tensor, as it counts the number of stored elements, and stored elements may include the background value. You can call dropfills! to remove explicitly stored background values from a tensor.

julia> A = fsparse([1, 1, 2, 3], [2, 4, 5, 6], [1.0, 2.0, 3.0])
+└─ SparseCOO{3} (0.0) [:,:,1:2]
source
Finch.ffindnzFunction
ffindnz(arr)

Return the nonzero elements of arr, as Finch understands arr. Returns (I..., V), where I are the coordinate vectors, one for each mode of arr, and V is a vector of corresponding nonzero values, which can be passed to fsparse.

See also: findnz (https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.findnz)

source

Fill Values

Finch tensors support an arbitrary "background" value for sparse arrays. While most arrays use 0 as the background value, this is not always the case. For example, a sparse array of Int might use typemin(Int) as the background value. The default function returns the background value of a tensor. If you ever want to change the background value of an existing array, you can use the set_fill_value! function. The countstored function returns the number of stored elements in a tensor, and calling pattern! on a tensor returns a tensor which is true wherever the original tensor stores a value. Note that countstored doesn't always return the number of non-zero elements in a tensor, as it counts the number of stored elements, and stored elements may include the background value. You can call dropfills! to remove explicitly stored background values from a tensor.

julia> A = fsparse([1, 1, 2, 3], [2, 4, 5, 6], [1.0, 2.0, 3.0])
 3×6-Tensor
 └─ SparseCOO{2} (0.0) [:,1:6]
    ├─ [1, 2]: 1.0
@@ -90,7 +90,7 @@
    ├─ [3]: 3.0
    ├─ ⋮
    ├─ [7]: 5.0
-   └─ [9]: 6.0
source
Finch.pattern!Function
pattern!(fbr)

Return the pattern of fbr. That is, return a tensor which is true wherever fbr is structurally unequal to its fill_value. May reuse memory and render the original tensor unusable when modified.

julia> A = Tensor(SparseList(Element(0.0), 10), [2.0, 0.0, 3.0, 0.0, 4.0, 0.0, 5.0, 0.0, 6.0, 0.0])
+   └─ [9]: 6.0
source
Finch.pattern!Function
pattern!(fbr)

Return the pattern of fbr. That is, return a tensor which is true wherever fbr is structurally unequal to its fill_value. May reuse memory and render the original tensor unusable when modified.

julia> A = Tensor(SparseList(Element(0.0), 10), [2.0, 0.0, 3.0, 0.0, 4.0, 0.0, 5.0, 0.0, 6.0, 0.0])
 10-Tensor
 └─ SparseList (0.0) [1:10]
    ├─ [1]: 2.0
@@ -106,7 +106,7 @@
    ├─ [3]: true
    ├─ ⋮
    ├─ [7]: true
-   └─ [9]: true
source
Finch.countstoredFunction
countstored(arr)

Return the number of stored elements in arr. If there are explicitly stored fill elements, they are counted too.

See also: SparseArrays.nnz (https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.nnz) and Base.summarysize (https://docs.julialang.org/en/v1/base/base/#Base.summarysize)

source
Finch.dropfillsFunction
dropfills(src)

Drop the fill values from src and return a new tensor with the same shape and format.

source

How to tell whether an entry is "fill"

In the sparse world, a semantic distinction is sometimes made between "explicitly stored" values and "implicit" or "fill" values (usually zero). However, the formats in the Finch compiler represent a diverse set of structures beyond sparsity, and it is often unclear whether any of the values in the tensor are "explicit" (consider a mask matrix, which can be represented with a constant number of bits). Thus, Finch makes no semantic distinction between values which are stored explicitly or not. If users wish to make this distinction, they should instead store a tensor of tuples of the form (value, is_fill). For example,

julia> A = fsparse([1, 1, 2, 3], [2, 4, 5, 6], [(1.0, false), (0.0, true), (3.0, false)]; fill_value=(0.0, true))
+   └─ [9]: true
source
Finch.countstoredFunction
countstored(arr)

Return the number of stored elements in arr. If there are explicitly stored fill elements, they are counted too.

See also: SparseArrays.nnz (https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.nnz) and Base.summarysize (https://docs.julialang.org/en/v1/base/base/#Base.summarysize)

source
Finch.dropfillsFunction
dropfills(src)

Drop the fill values from src and return a new tensor with the same shape and format.

source
Finch.dropfills!Function
dropfills!(dst, src)

Copy only the non-fill values from src into dst.

source

How to tell whether an entry is "fill"

In the sparse world, a semantic distinction is sometimes made between "explicitly stored" values and "implicit" or "fill" values (usually zero). However, the formats in the Finch compiler represent a diverse set of structures beyond sparsity, and it is often unclear whether any of the values in the tensor are "explicit" (consider a mask matrix, which can be represented with a constant number of bits). Thus, Finch makes no semantic distinction between values which are stored explicitly or not. If users wish to make this distinction, they should instead store a tensor of tuples of the form (value, is_fill). For example,

julia> A = fsparse([1, 1, 2, 3], [2, 4, 5, 6], [(1.0, false), (0.0, true), (3.0, false)]; fill_value=(0.0, true))
 3×6-Tensor
 └─ SparseCOO{2} ((0.0, true)) [:,1:6]
    ├─ [1, 2]: (1.0, false)
@@ -129,4 +129,4 @@
 
 julia> sum(map(first, B))
 4.0
-
+
diff --git a/dev/guides/tensor_formats/index.html b/dev/guides/tensor_formats/index.html index f5e38b151..3706dc061 100644 --- a/dev/guides/tensor_formats/index.html +++ b/dev/guides/tensor_formats/index.html @@ -21,11 +21,11 @@ ├─ [:, 2]: SparseList (0.0) [1:4] └─ [:, 3]: SparseList (0.0) [1:4] ├─ [1]: 4.4 - └─ [3]: 5.5

Storage Tree Level Formats

This section describes the formatted storage for Finch tensors, the first argument to the Tensor constructor. Level storage types hold all of the tensor data and can be nested hierarchically.

Finch represents tensors hierarchically in a tree, where each node in the tree is a vector of subtensors and the leaves are the elements. Thus, a matrix is analogous to a vector of vectors, and a 3-tensor is analogous to a vector of vectors of vectors. The vectors at each level of the tensor all have the same structure, which can be selected by the user.

In a Finch tensor tree, the child of each node is selected by an array index. All of the children at the same level will use the same format and share the same storage. Finch is column major, so in an expression A[i_1, ..., i_N], the rightmost dimension i_N corresponds to the root level of the tree, and the leftmost dimension i_1 corresponds to the leaf level.

Our example could be visualized as follows:

CSC Format Index Tree

Types of Level Storage

Finch supports a variety of storage formats for each level of the tensor tree, each with advantages and disadvantages. Some storage formats support in-order access, while others support random access. Some storage formats must be written to in column-major order, while others support out-of-order writes. The capabilities of each level are summarized in the following tables along with some general descriptions.

Level Format NameGroupData CharacteristicColumn-Major ReadsRandom ReadsColumn-Major Bulk UpdateRandom Bulk UpdateRandom UpdatesStatus
DenseCoreDense
SparseTreeCoreSparse⚙️
SparseRunListTreeCoreSparse Runs⚙️
ElementCoreLeaf
PatternCoreLeaf
SparseListAdvancedSparse
SparseRunListAdvancedSparse Runs
SparseBlockListAdvancedSparse Blocks
SparsePointAdvancedSingle Sparse
SparseIntervalAdvancedSingle Sparse Run
SparseBandAdvancedSingle Sparse Block⚙️
RunListAdvancedDense Runs⚙️
SparseBytemapAdvancedSparse
SparseDictAdvancedSparse✅️
MutexLevelModifierNo Data⚙️
SeparateLevelModifierNo Data⚙️
SparseCOOLegacySparse✅️

The "Level Format Name" is the name of the level datatype. Other columns have descriptions below.

Status

SymbolMeaning
✅Indicates the level is ready for serious use.
⚙️Indicates the level is experimental and under development.
🕸️Indicates the level is deprecated, and may be removed in a future release.

Groups

Core Group

Contains the basic, minimal set of levels one should use to build and manipulate tensors. These levels can be efficiently read and written to in any order.

Advanced Group

Contains levels which are more specialized, and geared towards bulk updates. These levels may be more efficient in certain cases, but are also more restrictive about access orders and intended for more advanced usage.

Modifier Group

Contains levels which are also more specialized, but not towards a sparsity pattern. These levels modify other levels in a variety of ways, but don't store novel sparsity patterns. Typically, they modify how levels are stored or attach data to levels to support the utilization of various hardware features.

Legacy Group

Contains levels which are not recommended for new code, but are included for compatibility with older code.

Data Characteristics

Level TypeDescription
DenseLevels which store every subtensor.
LeafLevels which store only scalars, used for the leaf level of the tree.
SparseLevels which store only non-fill values, used for levels with few nonzeros.
Sparse RunsLevels which store runs of repeated non-fill values.
Sparse BlocksLevels which store Blocks of repeated non-fill values.
Dense RunsLevels which store runs of repeated values, and no compile-time zero annihilation.
No DataLevels which don't store data but which alter the storage pattern or attach additional meta-data.

Note that the Single sparse levels store a single instance of each nonzero, run, or block. These are useful with a parent level to represent IDs.

Access Characteristics

Operation TypeDescription
Column-Major ReadsIndicates efficient reading of data in column-major order.
Random ReadsIndicates efficient reading of data in random-access order.
Column-Major Bulk UpdateIndicates efficient writing of data in column-major order, the total time roughly linear to the size of the tensor.
Random Bulk UpdateIndicates efficient writing of data in random-access order, the total time roughly linear to the size of the tensor.
Random UpdateIndicates efficient writing of data in random-access order, the total time roughly linear to the number of updates.

Examples of Popular Formats in Finch

Finch levels can be used to construct a variety of popular sparse formats. A few examples follow:

Format TypeSyntax
Sparse VectorTensor(SparseList(Element(0.0)), args...)
CSC MatrixTensor(Dense(SparseList(Element(0.0))), args...)
CSF 3-TensorTensor(Dense(SparseList(SparseList(Element(0.0)))), args...)
DCSC (Hypersparse) MatrixTensor(SparseList(SparseList(Element(0.0))), args...)
COO MatrixTensor(SparseCOO{2}(Element(0.0)), args...)
COO 3-TensorTensor(SparseCOO{3}(Element(0.0)), args...)
Run-Length-Encoded ImageTensor(Dense(RunList(Element(0.0))), args...)
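As a quick sketch of the constructor calls above (the sizes are arbitrary), empty tensors in a few of these formats can be allocated and filled later:

using Finch

csc = Tensor(Dense(SparseList(Element(0.0))), 4, 3)                  # 4×3 CSC matrix
csf = Tensor(Dense(SparseList(SparseList(Element(0.0)))), 4, 3, 2)   # 4×3×2 CSF tensor
coo = Tensor(SparseCOO{2}(Element(0.0)), 4, 3)                       # 4×3 COO matrix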

Tensor Constructors

Finch.TensorType
Tensor{Lvl} <: AbstractFiber{Lvl}

The multidimensional array type used by Finch. Tensor is a thin wrapper around the hierarchical level storage of type Lvl.

source
Finch.TensorMethod
Tensor(lvl)

Construct a Tensor using the tensor level storage lvl. No initialization of storage is performed, it is assumed that position 1 of lvl corresponds to a valid tensor, and lvl will be wrapped as-is. Call a different constructor to initialize the storage.

source
Finch.TensorMethod
Tensor(lvl, [undef], dims...)

Construct a Tensor of size dims, and initialize to undef, potentially allocating memory. Here undef is the UndefInitializer singleton type. dims... may be a variable number of dimensions or a tuple of dimensions, but it must correspond to the number of dimensions in lvl.

source
Finch.TensorMethod
Tensor(lvl, arr)

Construct a Tensor and initialize it to the contents of arr. To explicitly copy into a tensor, use copyto!.

source
Finch.TensorMethod
Tensor(lvl, arr)

Construct a Tensor and initialize it to the contents of arr. To explicitly copy into a tensor, use copyto!.

source
Finch.TensorMethod
Tensor(arr, [init = zero(eltype(arr))])

Copy an array-like object arr into a corresponding, similar Tensor data structure. Uses init as an initial value. May reuse memory when possible. To explicitly copy into a tensor, use copyto!.

Examples

julia> println(summary(Tensor(sparse([1 0; 0 1]))))
+      └─ [3]: 5.5

Storage Tree Level Formats

This section describes the formatted storage for Finch tensors, the first argument to the Tensor constructor. Level storage types hold all of the tensor data and can be nested hierarchically.

Finch represents tensors hierarchically in a tree, where each node in the tree is a vector of subtensors and the leaves are the elements. Thus, a matrix is analogous to a vector of vectors, and a 3-tensor is analogous to a vector of vectors of vectors. The vectors at each level of the tensor all have the same structure, which can be selected by the user.

In a Finch tensor tree, the child of each node is selected by an array index. All of the children at the same level will use the same format and share the same storage. Finch is column major, so in an expression A[i_1, ..., i_N], the rightmost dimension i_N corresponds to the root level of the tree, and the leftmost dimension i_1 corresponds to the leaf level.

Our example could be visualized as follows:

CSC Format Index Tree

Types of Level Storage

Finch supports a variety of storage formats for each level of the tensor tree, each with advantages and disadvantages. Some storage formats support in-order access, while others support random access. Some storage formats must be written to in column-major order, while others support out-of-order writes. The capabilities of each level are summarized in the following tables along with some general descriptions.

Level Format NameGroupData CharacteristicColumn-Major ReadsRandom ReadsColumn-Major Bulk UpdateRandom Bulk UpdateRandom UpdatesStatus
DenseCoreDense
SparseTreeCoreSparse⚙️
SparseRunListTreeCoreSparse Runs⚙️
ElementCoreLeaf
PatternCoreLeaf
SparseListAdvancedSparse
SparseRunListAdvancedSparse Runs
SparseBlockListAdvancedSparse Blocks
SparsePointAdvancedSingle Sparse
SparseIntervalAdvancedSingle Sparse Run
SparseBandAdvancedSingle Sparse Block⚙️
RunListAdvancedDense Runs⚙️
SparseBytemapAdvancedSparse
SparseDictAdvancedSparse✅️
MutexLevelModifierNo Data⚙️
SeparateLevelModifierNo Data⚙️
SparseCOOLegacySparse✅️

The "Level Format Name" is the name of the level datatype. Other columns have descriptions below.

Status

SymbolMeaning
✅Indicates the level is ready for serious use.
⚙️Indicates the level is experimental and under development.
🕸️Indicates the level is deprecated, and may be removed in a future release.

Groups

Core Group

Contains the basic, minimal set of levels one should use to build and manipulate tensors. These levels can be efficiently read and written to in any order.

Advanced Group

Contains levels which are more specialized, and geared towards bulk updates. These levels may be more efficient in certain cases, but are also more restrictive about access orders and intended for more advanced usage.

Modifier Group

Contains levels which are also more specialized, but not towards a sparsity pattern. These levels modify other levels in a variety of ways, but don't store novel sparsity patterns. Typically, they modify how levels are stored or attach data to levels to support the utilization of various hardware features.

Legacy Group

Contains levels which are not recommended for new code, but are included for compatibility with older code.

Data Characteristics

Level TypeDescription
DenseLevels which store every subtensor.
LeafLevels which store only scalars, used for the leaf level of the tree.
SparseLevels which store only non-fill values, used for levels with few nonzeros.
Sparse RunsLevels which store runs of repeated non-fill values.
Sparse BlocksLevels which store Blocks of repeated non-fill values.
Dense RunsLevels which store runs of repeated values, and no compile-time zero annihilation.
No DataLevels which don't store data but which alter the storage pattern or attach additional meta-data.

Note that the Single sparse levels store a single instance of each nonzero, run, or block. These are useful with a parent level to represent IDs.

Access Characteristics

Operation TypeDescription
Column-Major ReadsIndicates efficient reading of data in column-major order.
Random ReadsIndicates efficient reading of data in random-access order.
Column-Major Bulk UpdateIndicates efficient writing of data in column-major order, the total time roughly linear to the size of the tensor.
Random Bulk UpdateIndicates efficient writing of data in random-access order, the total time roughly linear to the size of the tensor.
Random UpdateIndicates efficient writing of data in random-access order, the total time roughly linear to the number of updates.

Examples of Popular Formats in Finch

Finch levels can be used to construct a variety of popular sparse formats. A few examples follow:

Format TypeSyntax
Sparse VectorTensor(SparseList(Element(0.0)), args...)
CSC MatrixTensor(Dense(SparseList(Element(0.0))), args...)
CSF 3-TensorTensor(Dense(SparseList(SparseList(Element(0.0)))), args...)
DCSC (Hypersparse) MatrixTensor(SparseList(SparseList(Element(0.0))), args...)
COO MatrixTensor(SparseCOO{2}(Element(0.0)), args...)
COO 3-TensorTensor(SparseCOO{3}(Element(0.0)), args...)
Run-Length-Encoded ImageTensor(Dense(RunList(Element(0.0))), args...)

Tensor Constructors

Finch.TensorType
Tensor{Lvl} <: AbstractFiber{Lvl}

The multidimensional array type used by Finch. Tensor is a thin wrapper around the hierarchical level storage of type Lvl.

source
Finch.TensorMethod
Tensor(lvl)

Construct a Tensor using the tensor level storage lvl. No initialization of storage is performed, it is assumed that position 1 of lvl corresponds to a valid tensor, and lvl will be wrapped as-is. Call a different constructor to initialize the storage.

source
Finch.TensorMethod
Tensor(lvl, [undef], dims...)

Construct a Tensor of size dims, and initialize to undef, potentially allocating memory. Here undef is the UndefInitializer singleton type. dims... may be a variable number of dimensions or a tuple of dimensions, but it must correspond to the number of dimensions in lvl.

source
Finch.TensorMethod
Tensor(lvl, arr)

Construct a Tensor and initialize it to the contents of arr. To explicitly copy into a tensor, use copyto!.

source
Finch.TensorMethod
Tensor(lvl, arr)

Construct a Tensor and initialize it to the contents of arr. To explicitly copy into a tensor, use copyto!.

source
Finch.TensorMethod
Tensor(arr, [init = zero(eltype(arr))])

Copy an array-like object arr into a corresponding, similar Tensor data structure. Uses init as an initial value. May reuse memory when possible. To explicitly copy into a tensor, use copyto!.

Examples

julia> println(summary(Tensor(sparse([1 0; 0 1]))))
 2×2 Tensor(Dense(SparseList(Element(0))))
 
 julia> println(summary(Tensor(ones(3, 2, 4))))
-3×2×4 Tensor(Dense(Dense(Dense(Element(0.0)))))
source

Level Constructors

Core Levels

Finch.DenseLevelType
DenseLevel{[Ti=Int]}(lvl, [dim])

A subfiber of a dense level is an array which stores every slice A[:, ..., :, i] as a distinct subfiber in lvl. Optionally, dim is the size of the last dimension. Ti is the type of the indices used to index the level.

julia> ndims(Tensor(Dense(Element(0.0))))
+3×2×4 Tensor(Dense(Dense(Dense(Element(0.0)))))
source

Level Constructors

Core Levels

Finch.DenseLevelType
DenseLevel{[Ti=Int]}(lvl, [dim])

A subfiber of a dense level is an array which stores every slice A[:, ..., :, i] as a distinct subfiber in lvl. Optionally, dim is the size of the last dimension. Ti is the type of the indices used to index the level.

julia> ndims(Tensor(Dense(Element(0.0))))
 1
 
 julia> ndims(Tensor(Dense(Dense(Element(0.0)))))
@@ -39,17 +39,17 @@
    │  └─ [2]: 3.0
    └─ [:, 2]: Dense [1:2]
       ├─ [1]: 2.0
-      └─ [2]: 4.0
source
Finch.ElementLevelType
ElementLevel{Vf, [Tv=typeof(Vf)], [Tp=Int], [Val]}()

A subfiber of an element level is a scalar of type Tv, initialized to Vf. Vf may optionally be given as the first argument.

The data is stored in a vector of type Val with eltype(Val) = Tv. The type Tp is the index type used to access Val.

julia> Tensor(Dense(Element(0.0)), [1, 2, 3])
+      └─ [2]: 4.0
source
Finch.ElementLevelType
ElementLevel{Vf, [Tv=typeof(Vf)], [Tp=Int], [Val]}()

A subfiber of an element level is a scalar of type Tv, initialized to Vf. Vf may optionally be given as the first argument.

The data is stored in a vector of type Val with eltype(Val) = Tv. The type Tp is the index type used to access Val.

julia> Tensor(Dense(Element(0.0)), [1, 2, 3])
 3-Tensor
 └─ Dense [1:3]
    ├─ [1]: 1.0
    ├─ [2]: 2.0
-   └─ [3]: 3.0
source
Finch.PatternLevelType
PatternLevel{[Tp=Int]}()

A subfiber of a pattern level is the Boolean value true, but its fill_value is false. PatternLevels are used to create tensors that represent which values are stored by other fibers. See pattern! for usage examples.

julia> Tensor(Dense(Pattern()), 3)
+   └─ [3]: 3.0
source
Finch.PatternLevelType
PatternLevel{[Tp=Int]}()

A subfiber of a pattern level is the Boolean value true, but its fill_value is false. PatternLevels are used to create tensors that represent which values are stored by other fibers. See pattern! for usage examples.

julia> Tensor(Dense(Pattern()), 3)
 3-Tensor
 └─ Dense [1:3]
    ├─ [1]: true
    ├─ [2]: true
-   └─ [3]: true
source

Advanced Levels

Finch.SparseListLevelType
SparseListLevel{[Ti=Int], [Ptr, Idx]}(lvl, [dim])

A subfiber of a sparse level does not need to represent slices A[:, ..., :, i] which are entirely fill_value. Instead, only potentially non-fill slices are stored as subfibers in lvl. A sorted list is used to record which slices are stored. Optionally, dim is the size of the last dimension.

Ti is the type of the last tensor index, and Tp is the type used for positions in the level. The types Ptr and Idx are the types of the arrays used to store positions and indices.

julia> Tensor(Dense(SparseList(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
+   └─ [3]: true
source

Advanced Levels

Finch.SparseListLevelType
SparseListLevel{[Ti=Int], [Ptr, Idx]}(lvl, [dim])

A subfiber of a sparse level does not need to represent slices A[:, ..., :, i] which are entirely fill_value. Instead, only potentially non-fill slices are stored as subfibers in lvl. A sorted list is used to record which slices are stored. Optionally, dim is the size of the last dimension.

Ti is the type of the last tensor index, and Tp is the type used for positions in the level. The types Ptr and Idx are the types of the arrays used to store positions and indices.

julia> Tensor(Dense(SparseList(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
 3×3-Tensor
 └─ Dense [:,1:3]
    ├─ [:, 1]: SparseList (0.0) [1:3]
@@ -69,7 +69,7 @@
    └─ [:, 3]: SparseList (0.0) [1:3]
       ├─ [1]: 20.0
       └─ [3]: 40.0
-
source
Finch.RunListLevelType
RunListLevel{[Ti=Int], [Ptr, Right]}(lvl, [dim], [merge = true])

The RunListLevel represents runs of equivalent slices A[:, ..., :, i]. A sorted list is used to record the right endpoint of each run. Optionally, dim is the size of the last dimension.

Ti is the type of the last tensor index, and Tp is the type used for positions in the level. The types Ptr and Right are the types of the arrays used to store positions and endpoints.

The merge keyword argument is used to specify whether the level should merge duplicate consecutive runs.

julia> Tensor(Dense(RunListLevel(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
+
source
Finch.RunListLevelType
RunListLevel{[Ti=Int], [Ptr, Right]}(lvl, [dim], [merge = true])

The RunListLevel represents runs of equivalent slices A[:, ..., :, i]. A sorted list is used to record the right endpoint of each run. Optionally, dim is the size of the last dimension.

Ti is the type of the last tensor index, and Tp is the type used for positions in the level. The types Ptr and Right are the types of the arrays used to store positions and endpoints.

The merge keyword argument is used to specify whether the level should merge duplicate consecutive runs.

julia> Tensor(Dense(RunListLevel(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
 3×3-Tensor
 └─ Dense [:,1:3]
    ├─ [:, 1]: RunList (0.0) [1:3]
@@ -81,7 +81,7 @@
    └─ [:, 3]: RunList (0.0) [1:3]
       ├─ [1:1]: 20.0
       ├─ [2:2]: 0.0
-      └─ [3:3]: 40.0
source
Finch.SparseRunListLevelType
SparseRunListLevel{[Ti=Int], [Ptr, Left, Right]}(lvl, [dim]; [merge = true])

The SparseRunListLevel represents runs of equivalent slices A[:, ..., :, i] which are not entirely fill_value. A sorted list is used to record the left and right endpoints of each run. Optionally, dim is the size of the last dimension.

Ti is the type of the last tensor index, and Tp is the type used for positions in the level. The types Ptr, Left, and Right are the types of the arrays used to store positions and endpoints.

The merge keyword argument is used to specify whether the level should merge duplicate consecutive runs.

julia> Tensor(Dense(SparseRunListLevel(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
+      └─ [3:3]: 40.0
source
Finch.SparseRunListLevelType
SparseRunListLevel{[Ti=Int], [Ptr, Left, Right]}(lvl, [dim]; [merge = true])

The SparseRunListLevel represents runs of equivalent slices A[:, ..., :, i] which are not entirely fill_value. A sorted list is used to record the left and right endpoints of each run. Optionally, dim is the size of the last dimension.

Ti is the type of the last tensor index, and Tp is the type used for positions in the level. The types Ptr, Left, and Right are the types of the arrays used to store positions and endpoints.

The merge keyword argument is used to specify whether the level should merge duplicate consecutive runs.

julia> Tensor(Dense(SparseRunListLevel(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
 3×3-Tensor
 └─ Dense [:,1:3]
    ├─ [:, 1]: SparseRunList (0.0) [1:3]
@@ -90,7 +90,7 @@
    ├─ [:, 2]: SparseRunList (0.0) [1:3]
    └─ [:, 3]: SparseRunList (0.0) [1:3]
       ├─ [1:1]: 20.0
-      └─ [3:3]: 40.0
source
Finch.SparseBlockListLevelType

SparseBlockListLevel{[Ti=Int], [Ptr, Idx, Ofs]}(lvl, [dim])

Like the SparseListLevel, but contiguous subfibers are stored together in blocks.

julia> Tensor(Dense(SparseBlockList(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
Dense [:,1:3]
├─[:,1]: SparseList (0.0) [1:3]
│ ├─[1]: 10.0
│ ├─[2]: 30.0
├─[:,2]: SparseList (0.0) [1:3]
├─[:,3]: SparseList (0.0) [1:3]
│ ├─[1]: 20.0
│ ├─[3]: 40.0

julia> Tensor(SparseBlockList(SparseBlockList(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
SparseList (0.0) [:,1:3]
├─[:,1]: SparseList (0.0) [1:3]
│ ├─[1]: 10.0
│ ├─[2]: 30.0
├─[:,3]: SparseList (0.0) [1:3]
│ ├─[1]: 20.0
│ ├─[3]: 40.0

source
Finch.SparseBandLevelType

SparseBandLevel{[Ti=Int], [Ptr, Idx, Ofs]}(lvl, [dim])

Like the SparseBlockListLevel, but stores only a single block, and fills in zeros.

julia> Tensor(Dense(SparseBand(Element(0.0))), [10 0 20; 30 40 0; 0 0 50])
Dense [:,1:3]
├─[:,1]: SparseList (0.0) [1:3]
│ ├─[1]: 10.0
│ ├─[2]: 30.0
├─[:,2]: SparseList (0.0) [1:3]
├─[:,3]: SparseList (0.0) [1:3]
│ ├─[1]: 20.0
│ ├─[3]: 40.0

source
Finch.SparsePointLevelType
SparsePointLevel{[Ti=Int], [Ptr, Idx]}(lvl, [dim])

A subfiber of a SparsePoint level does not need to represent slices A[:, ..., :, i] which are entirely fill_value. Instead, only potentially non-fill slices are stored as subfibers in lvl. A main difference compared to SparseList level is that SparsePoint level only stores a 'single' non-fill slice. It emits an error if the program tries to write multiple (>=2) coordinates into SparsePoint.

Ti is the type of the last tensor index. The types Ptr and Idx are the types of the arrays used to store positions and indices.

julia> Tensor(Dense(SparsePoint(Element(0.0))), [10 0 0; 0 20 0; 0 0 30])
+      └─ [3:3]: 40.0
source
Finch.SparseBlockListLevelType

SparseBlockListLevel{[Ti=Int], [Ptr, Idx, Ofs]}(lvl, [dim])

Like the SparseListLevel, but contiguous subfibers are stored together in blocks.

julia> Tensor(Dense(SparseBlockList(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
Dense [:,1:3]
├─[:,1]: SparseList (0.0) [1:3]
│ ├─[1]: 10.0
│ ├─[2]: 30.0
├─[:,2]: SparseList (0.0) [1:3]
├─[:,3]: SparseList (0.0) [1:3]
│ ├─[1]: 20.0
│ ├─[3]: 40.0

julia> Tensor(SparseBlockList(SparseBlockList(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
SparseList (0.0) [:,1:3]
├─[:,1]: SparseList (0.0) [1:3]
│ ├─[1]: 10.0
│ ├─[2]: 30.0
├─[:,3]: SparseList (0.0) [1:3]
│ ├─[1]: 20.0
│ ├─[3]: 40.0

source
Finch.SparseBandLevelType

SparseBandLevel{[Ti=Int], [Ptr, Idx, Ofs]}(lvl, [dim])

Like the SparseBlockListLevel, but stores only a single block, and fills in zeros.

julia> Tensor(Dense(SparseBand(Element(0.0))), [10 0 20; 30 40 0; 0 0 50])
Dense [:,1:3]
├─[:,1]: SparseList (0.0) [1:3]
│ ├─[1]: 10.0
│ ├─[2]: 30.0
├─[:,2]: SparseList (0.0) [1:3]
├─[:,3]: SparseList (0.0) [1:3]
│ ├─[1]: 20.0
│ ├─[3]: 40.0

source
Finch.SparsePointLevelType
SparsePointLevel{[Ti=Int], [Ptr, Idx]}(lvl, [dim])

A subfiber of a SparsePoint level does not need to represent slices A[:, ..., :, i] which are entirely fill_value. Instead, only potentially non-fill slices are stored as subfibers in lvl. A main difference compared to SparseList level is that SparsePoint level only stores a 'single' non-fill slice. It emits an error if the program tries to write multiple (>=2) coordinates into SparsePoint.

Ti is the type of the last tensor index. The types Ptr and Idx are the types of the arrays used to store positions and indices.

julia> Tensor(Dense(SparsePoint(Element(0.0))), [10 0 0; 0 20 0; 0 0 30])
 3×3-Tensor
 └─ Dense [:,1:3]
    ├─ [:, 1]: SparsePoint (0.0) [1:3]
@@ -107,7 +107,7 @@
       ├─ [1]: 0.0
       ├─ [2]: 30.0
       └─ [3]: 30.0
-
source
Finch.SparseIntervalLevelType
SparseIntervalLevel{[Ti=Int], [Ptr, Left, Right]}(lvl, [dim])

The SparseIntervalLevel represents runs of equivalent slices A[:, ..., :, i] which are not entirely fill_value. A main difference compared to the SparseRunList level is that the SparseInterval level only stores a single non-fill run. It emits an error if the program tries to write multiple (>=2) runs into SparseInterval.

Ti is the type of the last tensor index. The types Ptr, Left, and Right are the types of the arrays used to store positions and endpoints.

julia> Tensor(SparseInterval(Element(0)), [0, 10, 0])
+
source
Finch.SparseIntervalLevelType
SparseIntervalLevel{[Ti=Int], [Ptr, Left, Right]}(lvl, [dim])

The SparseIntervalLevel represents runs of equivalent slices A[:, ..., :, i] which are not entirely fill_value. A main difference compared to the SparseRunList level is that the SparseInterval level only stores a single non-fill run. It emits an error if the program tries to write multiple (>=2) runs into SparseInterval.

Ti is the type of the last tensor index. The types Ptr, Left, and Right are the types of the arrays used to store positions and endpoints.

julia> Tensor(SparseInterval(Element(0)), [0, 10, 0])
 3-Tensor
 └─ SparseInterval (0) [1:3]
    └─ [2:2]: 10
@@ -120,7 +120,7 @@
 10-Tensor
 └─ SparseInterval (0) [1:10]
    └─ [3:6]: 1
-
source
Finch.SparseByteMapLevelType
SparseByteMapLevel{[Ti=Int], [Ptr, Tbl]}(lvl, [dims])

Like the SparseListLevel, but a dense bitmap is used to encode which slices are stored. This allows the ByteMap level to support random access.

Ti is the type of the last tensor index, and Tp is the type used for positions in the level.

julia> Tensor(Dense(SparseByteMap(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
+
source
Finch.SparseByteMapLevelType
SparseByteMapLevel{[Ti=Int], [Ptr, Tbl]}(lvl, [dims])

Like the SparseListLevel, but a dense bitmap is used to encode which slices are stored. This allows the ByteMap level to support random access.

Ti is the type of the last tensor index, and Tp is the type used for positions in the level.

julia> Tensor(Dense(SparseByteMap(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
 3×3-Tensor
 └─ Dense [:,1:3]
    ├─ [:, 1]: SparseByteMap (0.0) [1:3]
@@ -137,7 +137,7 @@
    ├─ [:, 1]: SparseByteMap (0.0) [1:3]
    │  ├─ [1]: 10.0
    │  └─ [2]: 30.0
-   └─ [:, 3]: SparseByteMap (0.0) [1:3]
source
Finch.SparseDictLevelType
SparseDictLevel{[Ti=Int], [Tp=Int], [Ptr, Idx, Val, Tbl, Pool=Dict]}(lvl, [dim])

A subfiber of a sparse level does not need to represent slices A[:, ..., :, i] which are entirely fill_value. Instead, only potentially non-fill slices are stored as subfibers in lvl. A datastructure specified by Tbl is used to record which slices are stored. Optionally, dim is the size of the last dimension.

Ti is the type of the last fiber index, and Tp is the type used for positions in the level. The types Ptr and Idx are the types of the arrays used to store positions and indices.

julia> Tensor(Dense(SparseDict(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
+   └─ [:, 3]: SparseByteMap (0.0) [1:3]
source
Finch.SparseDictLevelType
SparseDictLevel{[Ti=Int], [Tp=Int], [Ptr, Idx, Val, Tbl, Pool=Dict]}(lvl, [dim])

A subfiber of a sparse level does not need to represent slices A[:, ..., :, i] which are entirely fill_value. Instead, only potentially non-fill slices are stored as subfibers in lvl. A datastructure specified by Tbl is used to record which slices are stored. Optionally, dim is the size of the last dimension.

Ti is the type of the last fiber index, and Tp is the type used for positions in the level. The types Ptr and Idx are the types of the arrays used to store positions and indices.

julia> Tensor(Dense(SparseDict(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
 3×3-Tensor
 └─ Dense [:,1:3]
    ├─ [:, 1]: SparseDict (0.0) [1:3]
@@ -157,7 +157,7 @@
    └─ [:, 3]: SparseDict (0.0) [1:3]
       ├─ [1]: 20.0
       └─ [3]: 40.0
-
source

Legacy Levels

Finch.SparseCOOLevelType
SparseCOOLevel{[N], [TI=Tuple{Int...}], [Ptr, Tbl]}(lvl, [dims])

A subfiber of a sparse level does not need to represent slices which are entirely fill_value. Instead, only potentially non-fill slices are stored as subfibers in lvl. The sparse coo level corresponds to N indices in the subfiber, so fibers in the sublevel are the slices A[:, ..., :, i_1, ..., i_n]. A set of N lists (one for each index) are used to record which slices are stored. The coordinates (sets of N indices) are sorted in column major order. Optionally, dims are the sizes of the last dimensions.

TI is the type of the last N tensor indices, and Tp is the type used for positions in the level.

The type Tbl is an NTuple type where each entry k is a subtype AbstractVector{TI[k]}.

The type Ptr is the type for the pointer array.

julia> Tensor(Dense(SparseCOO{1}(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
+
source

Legacy Levels

Finch.SparseCOOLevelType
SparseCOOLevel{[N], [TI=Tuple{Int...}], [Ptr, Tbl]}(lvl, [dims])

A subfiber of a sparse level does not need to represent slices which are entirely fill_value. Instead, only potentially non-fill slices are stored as subfibers in lvl. The sparse coo level corresponds to N indices in the subfiber, so fibers in the sublevel are the slices A[:, ..., :, i_1, ..., i_n]. A set of N lists (one for each index) are used to record which slices are stored. The coordinates (sets of N indices) are sorted in column major order. Optionally, dims are the sizes of the last dimensions.

TI is the type of the last N tensor indices, and Tp is the type used for positions in the level.

The type Tbl is an NTuple type where each entry k is a subtype AbstractVector{TI[k]}.

The type Ptr is the type for the pointer array.

julia> Tensor(Dense(SparseCOO{1}(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
 3×3-Tensor
 └─ Dense [:,1:3]
    ├─ [:, 1]: SparseCOO{1} (0.0) [1:3]
@@ -174,4 +174,4 @@
    ├─ [1, 1]: 10.0
    ├─ [2, 1]: 30.0
    ├─ [1, 3]: 20.0
-   └─ [3, 3]: 40.0
source
+ └─ [3, 3]: 40.0source diff --git a/dev/guides/user-defined_functions/index.html b/dev/guides/user-defined_functions/index.html index c1857d0c2..156af0603 100644 --- a/dev/guides/user-defined_functions/index.html +++ b/dev/guides/user-defined_functions/index.html @@ -14,18 +14,18 @@ julia> x = Scalar(0.0); @finch for i=_; x[] <<choose(1.1)>>= a[i] end; julia> x[] -0.0source
Finch.minbyFunction
minby(a, b)

Return the min of a or b, comparing them by a[1] and b[1], and breaking ties to the left. Useful for implementing argmin operations:

julia> a = [7.7, 3.3, 9.9, 3.3, 9.9]; x = Scalar(Inf => 0);
+0.0
source
Finch.minbyFunction
minby(a, b)

Return the min of a or b, comparing them by a[1] and b[1], and breaking ties to the left. Useful for implementing argmin operations:

julia> a = [7.7, 3.3, 9.9, 3.3, 9.9]; x = Scalar(Inf => 0);
 
 julia> @finch for i=_; x[] <<minby>>= a[i] => i end;
 
 julia> x[]
-3.3 => 2
source
Finch.maxbyFunction
maxby(a, b)

Return the max of a or b, comparing them by a[1] and b[1], and breaking ties to the left. Useful for implementing argmax operations:

julia> a = [7.7, 3.3, 9.9, 3.3, 9.9]; x = Scalar(-Inf => 0);
+3.3 => 2
source
Finch.maxbyFunction
maxby(a, b)

Return the max of a or b, comparing them by a[1] and b[1], and breaking ties to the left. Useful for implementing argmax operations:

julia> a = [7.7, 3.3, 9.9, 3.3, 9.9]; x = Scalar(-Inf => 0);
 
 julia> @finch for i=_; x[] <<maxby>>= a[i] => i end;
 
 julia> x[]
-9.9 => 3
source

Properties

The full list of properties recognized by Finch is as follows (use these to declare the properties of your own functions):

Finch.isassociativeFunction
isassociative(algebra, f)

Return true when f(a..., f(b...), c...) = f(a..., b..., c...) in algebra.

source
Finch.iscommutativeFunction
iscommutative(algebra, f)

Return true when for all permutations p, f(a...) = f(a[p]...) in algebra.

source
Finch.isdistributiveFunction
isdistributive(algebra, f, g)

Return true when f(a, g(b, c)) = g(f(a, b), f(a, c)) in algebra.

source
Finch.isidempotentFunction
isidempotent(algebra, f)

Return true when f(a, b) = f(f(a, b), b) in algebra.

source
Finch.isidentityFunction
isidentity(algebra, f, x)

Return true when f(a..., x, b...) = f(a..., b...) in algebra.

source
Finch.isannihilatorFunction
isannihilator(algebra, f, x)

Return true when f(a..., x, b...) = x in algebra.

source
Finch.isinverseFunction
isinverse(algebra, f, g)

Return true when f(a, g(a)) is the identity under f in algebra.

source
Finch.isinvolutionFunction
isinvolution(algebra, f)

Return true when f(f(a)) = a in algebra.

source
Finch.return_typeFunction
return_type(algebra, f, arg_types...)

Give the return type of f when applied to arguments of types arg_types... in algebra. Used to determine output types of functions in the high-level interface. This function falls back to Base.promote_op.

source
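For instance, a hypothetical "probabilistic or" operator por (not part of Finch) could declare its properties so the compiler can exploit them, mirroring the declaration style shown later in this guide for a custom algebra:

using Finch

por(a, b) = a + b - a * b   # hypothetical user-defined operator

Finch.isassociative(::Finch.AbstractAlgebra, ::typeof(por)) = true
Finch.iscommutative(::Finch.AbstractAlgebra, ::typeof(por)) = true
Finch.isidentity(::Finch.AbstractAlgebra, ::typeof(por), x) = x == 0
Finch.isannihilator(::Finch.AbstractAlgebra, ::typeof(por), x) = x == 1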

Finch Kernel Caching

Finch code is cached when you first run it. Thus, if you run a Finch function once, then make changes to the Finch compiler (like defining new properties), the cached code will be used and the changes will not be reflected.

It's best to design your code so that modifications to the Finch compiler occur before any Finch functions are called. However, if you really need to modify a precompiled Finch kernel, you can call Finch.refresh() to invalidate the code cache.

Finch.refreshFunction
Finch.refresh()

Finch caches the code for kernels as soon as they are run. If you modify the Finch compiler after running a kernel, you'll need to invalidate the Finch caches to reflect these changes by calling Finch.refresh(). This function should only be called at global scope, and never during precompilation.

source
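A sketch of the intended workflow (variable names are illustrative):

using Finch

x = Tensor(Dense(Element(0.0)), rand(10))
s = Scalar(0.0)

# The kernel is compiled and cached the first time it runs.
@finch for i = _
    s[] += x[i]
end

# If the Finch compiler is modified afterwards (for example, new properties
# or rewrite rules are defined at global scope), invalidate the cache:
Finch.refresh()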

(Advanced) On World-Age and Generated Functions

Julia uses a "world age" to describe the set of defined functions at a point in time. Generated functions run in the same world age in which they were defined, so they can't call functions defined after the generated function. This means that if Finch used normal generated functions, users can't define their own functions without first redefining all of Finch's generated functions.

Finch uses special generators that run in the current world age, but do not update with subsequent compiler function invalidations. If two packages modify the behavior of Finch in different ways, and call those Finch functions during precompilation, the resulting behavior is undefined.

There are several packages that take similar, but different, approaches to allow user participation in staged Julia programming (not to mention Base eval or @generated): StagedFunctions.jl, GeneralizedGenerated.jl, RuntimeGeneratedFunctions.jl, or Zygote.jl.

Our approach is most similar to that of StagedFunctions.jl or Zygote.jl. We chose our approach to be simple and flexible while keeping the kernel call overhead low.

(Advanced) Separate Algebras

If you want to define non-standard properties or custom rewrite rules for some functions in a separate context, you can represent these changes with your own algebra type. We express this by subtyping AbstractAlgebra and defining properties as follows:

struct MyAlgebra <: AbstractAlgebra end
+9.9 => 3
source

Properties

The full list of properties recognized by Finch is as follows (use these to declare the properties of your own functions):

Finch.isassociativeFunction
isassociative(algebra, f)

Return true when f(a..., f(b...), c...) = f(a..., b..., c...) in algebra.

source
Finch.iscommutativeFunction
iscommutative(algebra, f)

Return true when for all permutations p, f(a...) = f(a[p]...) in algebra.

source
Finch.isdistributiveFunction
isdistributive(algebra, f, g)

Return true when f(a, g(b, c)) = g(f(a, b), f(a, c)) in algebra.

source
Finch.isidempotentFunction
isidempotent(algebra, f)

Return true when f(a, b) = f(f(a, b), b) in algebra.

source
Finch.isidentityFunction
isidentity(algebra, f, x)

Return true when f(a..., x, b...) = f(a..., b...) in algebra.

source
Finch.isannihilatorFunction
isannihilator(algebra, f, x)

Return true when f(a..., x, b...) = x in algebra.

source
Finch.isinverseFunction
isinverse(algebra, f, g)

Return true when f(a, g(a)) is the identity under f in algebra.

source
Finch.isinvolutionFunction
isinvolution(algebra, f)

Return true when f(f(a)) = a in algebra.

source
Finch.return_typeFunction
return_type(algebra, f, arg_types...)

Give the return type of f when applied to arguments of types arg_types... in algebra. Used to determine output types of functions in the high-level interface. This function falls back to Base.promote_op.

source

Finch Kernel Caching

Finch code is cached when you first run it. Thus, if you run a Finch function once, then make changes to the Finch compiler (like defining new properties), the cached code will be used and the changes will not be reflected.

It's best to design your code so that modifications to the Finch compiler occur before any Finch functions are called. However, if you really need to modify a precompiled Finch kernel, you can call Finch.refresh() to invalidate the code cache.

Finch.refreshFunction
Finch.refresh()

Finch caches the code for kernels as soon as they are run. If you modify the Finch compiler after running a kernel, you'll need to invalidate the Finch caches to reflect these changes by calling Finch.refresh(). This function should only be called at global scope, and never during precompilation.

source

(Advanced) On World-Age and Generated Functions

Julia uses a "world age" to describe the set of defined functions at a point in time. Generated functions run in the same world age in which they were defined, so they can't call functions defined after the generated function. This means that if Finch used normal generated functions, users can't define their own functions without first redefining all of Finch's generated functions.

Finch uses special generators that run in the current world age, but do not update with subsequent compiler function invalidations. If two packages modify the behavior of Finch in different ways, and call those Finch functions during precompilation, the resulting behavior is undefined.

There are several packages that take similar, but different, approaches to allow user participation in staged Julia programming (not to mention Base eval or @generated): StagedFunctions.jl, GeneralizedGenerated.jl, RuntimeGeneratedFunctions.jl, or Zygote.jl.

Our approach is most similar to that of StagedFunctions.jl or Zygote.jl. We chose our approach to be simple and flexible while keeping the kernel call overhead low.

(Advanced) Separate Algebras

If you want to define non-standard properties or custom rewrite rules for some functions in a separate context, you can represent these changes with your own algebra type. We express this by subtyping AbstractAlgebra and defining properties as follows:

struct MyAlgebra <: AbstractAlgebra end
 
 Finch.isassociative(::MyAlgebra, ::typeof(gcd)) = true
 Finch.iscommutative(::MyAlgebra, ::typeof(gcd)) = true
-Finch.isannihilator(::MyAlgebra, ::typeof(gcd), x) = x == 1

We pass the algebra to Finch as an optional first argument:

@finch MyAlgebra() (w .= 1; for i=_; w[i] = gcd(u[i], v[i]) end; return w)

Rewriting

Define custom rewrite rules by overloading the get_simplify_rules function on your algebra. Unless you want to write the full rule set from scratch, be sure to append your new rules to the old rules, which can be obtained by calling get_simplify_rules with another algebra. Rules can be specified directly on Finch IR using RewriteTools.jl.
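A sketch of that pattern follows; it assumes the default rule set can be obtained from Finch.DefaultAlgebra(), and leaves the new RewriteTools.jl rules as a placeholder:

using Finch

struct MyAlgebra <: AbstractAlgebra end   # as defined earlier in this guide

function Finch.get_simplify_rules(alg::MyAlgebra, shash)
    my_rules = []   # add RewriteTools.@rule rewrites over Finch IR here
    return vcat(Finch.get_simplify_rules(Finch.DefaultAlgebra(), shash), my_rules)
end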

Finch.get_simplify_rulesFunction
get_simplify_rules(alg, shash)

Return the program rule set for Finch. One can dispatch on the alg trait to specialize the rule set for different algebras. Defaults to a collection of straightforward rules that use the algebra to check properties of functions like associativity, commutativity, etc. shash is an object that can be called to return a static hash value. This rule set simplifies, normalizes, and propagates constants, and is the basis for how Finch understands sparsity.

source
Finch.get_prove_rulesFunction
get_prove_rules(alg, shash)

Return the bound rule set for Finch. One can dispatch on the alg trait to specialize the rule set for different algebras. shash is an object that can be called to return a static hash value. This rule set is used to analyze loop bounds in Finch.

source
+Finch.isannihilator(::MyAlgebra, ::typeof(gcd), x) = x == 1

We pass the algebra to Finch as an optional first argument:

@finch MyAlgebra() (w .= 1; for i=_; w[i] = gcd(u[i], v[i]) end; return w)

Rewriting

Define custom rewrite rules by overloading the get_simplify_rules function on your algebra. Unless you want to write the full rule set from scratch, be sure to append your new rules to the old rules, which can be obtained by calling get_simplify_rules with another algebra. Rules can be specified directly on Finch IR using RewriteTools.jl.

Finch.get_simplify_rulesFunction
get_simplify_rules(alg, shash)

Return the program rule set for Finch. One can dispatch on the alg trait to specialize the rule set for different algebras. Defaults to a collection of straightforward rules that use the algebra to check properties of functions like associativity, commutativity, etc. shash is an object that can be called to return a static hash value. This rule set simplifies, normalizes, and propagates constants, and is the basis for how Finch understands sparsity.

source
Finch.get_prove_rulesFunction
get_prove_rules(alg, shash)

Return the bound rule set for Finch. One can dispatch on the alg trait to specialize the rule set for different algebras. shash is an object that can be called to return a static hash value. This rule set is used to analyze loop bounds in Finch.

source
diff --git a/dev/index.html b/dev/index.html index 64800e377..b95b146f9 100644 --- a/dev/index.html +++ b/dev/index.html @@ -33,4 +33,4 @@ result = () s.val = s_val result -end

We're working on adding more documentation; for now, take a look at the examples!

+end

We're working on adding more documentation; for now, take a look at the examples!

diff --git a/dev/reference/internals/compiler_interface/index.html b/dev/reference/internals/compiler_interface/index.html index 4bba9b6db..0ded23e45 100644 --- a/dev/reference/internals/compiler_interface/index.html +++ b/dev/reference/internals/compiler_interface/index.html @@ -1,2 +1,2 @@ -Compiler Interfaces · Finch.jl

Compiler Internals

Finch has several compiler modules with separate interfaces.

SymbolicContexts

SymbolicContexts are used to represent the symbolic information of a program. They are used to reason about loop bounds and other symbolic facts about the program, and are defined on an algebra.

Finch.StaticHashType
StaticHash

A hash function which is static, i.e. the hashes are the same when objects are hashed in the same order. The hash is used to memoize the results of simplification and proof rules.

source
Finch.get_static_hashFunction
get_static_hash(ctx)

Return an object which can be called as a hash function. The hashes are the same when objects are hashed in the same order.

source
Finch.proveFunction
prove(ctx, root; verbose = false)

use the rules in ctx to attempt to prove that the program root is true. Return false if the program cannot be shown to be true.

source
Finch.simplifyFunction

simplify(ctx, node)

simplify the program node using the rules in ctx

source

ScopeContexts

ScopeContexts are used to represent the scope of a program. They are used to reason about values bound to variables and also the modes of tensor variables.

Finch.get_bindingFunction
get_binding(ctx, var)

Get the binding of a variable in the context.

source
get_binding(ctx, var, val)

Get the binding of a variable in the context, or return a default value.

source

JuliaContexts

JuliaContexts are used to represent the execution environment of a program, including variables and tasks. They are used to generate code.

Finch.NamespaceType
Namespace

A namespace for managing variable names and aesthetic fresh variable generation.

source
Finch.JuliaContextType
JuliaContext

A context for compiling Julia code, managing side effects, parallelism, and variable names in the generated code of the executing environment.

source
Finch.push_preamble!Function
push_preamble!(ctx, thunk)

Push the thunk onto the preamble in the currently executing context. The preamble will be evaluated before the code returned by the given function in the context.

source
Finch.push_epilogue!Function
push_epilogue!(ctx, thunk)

Push the thunk onto the epilogue in the currently executing context. The epilogue will be evaluated after the code returned by the given function in the context.

source
Finch.freshenFunction
freshen(ctx, tags...)

Return a fresh variable in the current context named after Symbol(tags...)

source
Finch.containFunction
contain(f, ctx)

Call f on a subcontext of ctx and return the result. Variable bindings, preambles, and epilogues defined in the subcontext will not escape the call to contain.

source
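As a hedged sketch of how these utilities fit together during lowering (emit_clamped_access is a hypothetical helper, and ctx is assumed to be a JuliaContext or other compiler context supplied during lowering):

function emit_clamped_access(ctx, arr, i, hi)
    tmp = Finch.freshen(ctx, :clamped_idx)   # fresh, readable variable name
    Finch.contain(ctx) do ctx_2
        # code pushed to the preamble runs before the expression we return
        Finch.push_preamble!(ctx_2, :($tmp = min($i, $hi)))
        :($arr[$tmp])
    end
end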

AbstractCompiler

The AbstractCompiler interface requires all of the functionality of the above contexts, as well as the following two methods:

Finch.get_resultFunction
get_result(ctx)

Return a variable which evaluates to the result of the program which should be returned to the user.

source
+Compiler Interfaces · Finch.jl

Compiler Internals

Finch has several compiler modules with separate interfaces.

SymbolicContexts

SymbolicContexts are used to represent the symbolic information of a program. They are used to reason about loop bounds and other symbolic facts about the program, and are defined on an algebra.

Finch.StaticHashType
StaticHash

A hash function which is static, i.e. the hashes are the same when objects are hashed in the same order. The hash is used to memoize the results of simplification and proof rules.

source
Finch.get_static_hashFunction
get_static_hash(ctx)

Return an object which can be called as a hash function. The hashes are the same when objects are hashed in the same order.

source
Finch.proveFunction
prove(ctx, root; verbose = false)

use the rules in ctx to attempt to prove that the program root is true. Return false if the program cannot be shown to be true.

source
Finch.simplifyFunction

simplify(ctx, node)

simplify the program node using the rules in ctx

source

ScopeContexts

ScopeContexts are used to represent the scope of a program. They are used to reason about values bound to variables and also the modes of tensor variables.

Finch.get_bindingFunction
get_binding(ctx, var)

Get the binding of a variable in the context.

source
get_binding(ctx, var, val)

Get the binding of a variable in the context, or return a default value.

source

JuliaContexts

JuliaContexts are used to represent the execution environment of a program, including variables and tasks. They are used to generate code.

Finch.NamespaceType
Namespace

A namespace for managing variable names and aesthetic fresh variable generation.

source
Finch.JuliaContextType
JuliaContext

A context for compiling Julia code, managing side effects, parallelism, and variable names in the generated code of the executing environment.

source
Finch.push_preamble!Function
push_preamble!(ctx, thunk)

Push the thunk onto the preamble in the currently executing context. The preamble will be evaluated before the code returned by the given function in the context.

source
Finch.push_epilogue!Function
push_epilogue!(ctx, thunk)

Push the thunk onto the epilogue in the currently executing context. The epilogue will be evaluated after the code returned by the given function in the context.

source
Finch.freshenFunction
freshen(ctx, tags...)

Return a fresh variable in the current context named after Symbol(tags...)

source
Finch.containFunction
contain(f, ctx)

Call f on a subcontext of ctx and return the result. Variable bindings, preambles, and epilogues defined in the subcontext will not escape the call to contain.

source

AbstractCompiler

The AbstractCompiler interface requires all of the functionality of the above contexts, as well as the following two methods:

Finch.get_resultFunction
get_result(ctx)

Return a variable which evaluates to the result of the program which should be returned to the user.

source
diff --git a/dev/reference/internals/finch_logic/index.html b/dev/reference/internals/finch_logic/index.html index 60720e868..492abcd03 100644 --- a/dev/reference/internals/finch_logic/index.html +++ b/dev/reference/internals/finch_logic/index.html @@ -1,2 +1,2 @@ -Finch Logic · Finch.jl

Finch Logic (High-Level IR)

Finch Logic is an internal high-level intermediate representation (IR) that allows us to fuse and optimize successive calls to array operations such as map, reduce, and broadcast. It is reminiscent of database query notation, representing a sequence of tensor expressions bound to variables. Values in the program are tensors with named indices. The order of indices is semantically meaningful.
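As a hedged illustration, the high-level lazy/compute interface builds Finch Logic and fuses it when computed (the tensor formats and sizes here are illustrative):

A = Tensor(Dense(SparseList(Element(0.0))), fsprand(1000, 1000, 0.01))
B = Tensor(Dense(SparseList(Element(0.0))), fsprand(1000, 1000, 0.01))

C = lazy(A) .+ lazy(B)      # recorded as Finch Logic, not executed yet
D = sum(C; dims=2)          # still lazy
result = compute(D)         # the fused plan is optimized and executed here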

The nodes are as follows:

Finch.FinchLogic.mapjoinConstant
mapjoin(op, args...)

Logical AST expression for mapping the function op across args.... The order of fields in the mapjoin is unique(vcat(map(getfields, args)...))

source
Finch.FinchLogic.aggregateConstant
aggregate(op, init, arg, idxs...)

Logical AST statement that reduces arg using op, starting with init. idxs are the dimensions to reduce. May happen in any order.

source
Finch.FinchLogic.reorderConstant
reorder(arg, idxs...)

Logical AST statement that reorders the dimensions of arg to be idxs.... Dimensions known to be length 1 may be dropped. Dimensions that do not exist in arg may be added.

source
Finch.FinchLogic.producesConstant
produces(args...)

Logical AST statement that returns args... from the current plan. Halts execution of the program.

source

Finch Logic Internals

Finch.FinchLogic.LogicNodeType
LogicNode

A Finch Logic IR node. Finch uses a variant of Concrete Field Notation as an intermediate representation.

The LogicNode struct represents many different Finch IR nodes. The nodes are differentiated by a FinchLogic.LogicNodeKind enum.

source
Finch.FinchLogic.logic_leafFunction
logic_leaf(x)

Return a terminal finch node wrapper around x. A convenience function to determine whether x should be understood by default as an immediate or a value.

source

Executing FinchLogic

+Finch Logic · Finch.jl

Finch Logic (High-Level IR)

Finch Logic is an internal high-level intermediate representation (IR) that allows us to fuse and optimize successive calls to array operations such as map, reduce, and broadcast. It is reminiscent of database query notation, representing a sequence of tensor expressions bound to variables. Values in the program are tensors with named indices. The order of indices is semantically meaningful.

The nodes are as follows:

Finch.FinchLogic.mapjoinConstant
mapjoin(op, args...)

Logical AST expression for mapping the function op across args.... The order of fields in the mapjoin is unique(vcat(map(getfields, args)...))

source
Finch.FinchLogic.aggregateConstant
aggregate(op, init, arg, idxs...)

Logical AST statement that reduces arg using op, starting with init. idxs are the dimensions to reduce. May happen in any order.

source
Finch.FinchLogic.reorderConstant
reorder(arg, idxs...)

Logical AST statement that reorders the dimensions of arg to be idxs.... Dimensions known to be length 1 may be dropped. Dimensions that do not exist in arg may be added.

source
Finch.FinchLogic.producesConstant
produces(args...)

Logical AST statement that returns args... from the current plan. Halts execution of the program.

source

Finch Logic Internals

Finch.FinchLogic.LogicNodeType
LogicNode

A Finch Logic IR node. Finch uses a variant of Concrete Field Notation as an intermediate representation.

The LogicNode struct represents many different Finch IR nodes. The nodes are differentiated by a FinchLogic.LogicNodeKind enum.

source
Finch.FinchLogic.logic_leafFunction
logic_leaf(x)

Return a terminal finch node wrapper around x. A convenience function to determine whether x should be understood by default as an immediate or a value.

source

Executing FinchLogic

diff --git a/dev/reference/internals/finch_notation/index.html b/dev/reference/internals/finch_notation/index.html index f0fd991e4..858cbbd1f 100644 --- a/dev/reference/internals/finch_notation/index.html +++ b/dev/reference/internals/finch_notation/index.html @@ -1,2 +1,2 @@ -Finch Notation · Finch.jl

Finch Notation Internals

Finch IR is a tree structure that represents a finch program. Different types of nodes are delineated by a FinchKind enum, for type stability. There are a few useful functions to be aware of:

Finch.FinchNotation.FinchNodeType
FinchNode

A Finch IR node, used to represent an imperative, physical Finch program.

The FinchNode struct represents many different Finch IR nodes. The nodes are differentiated by a FinchNotation.FinchNodeKind enum.

source
Finch.FinchNotation.finch_leafFunction
finch_leaf(x)

Return a terminal finch node wrapper around x. A convenience function to determine whether x should be understood by default as a literal, value, or virtual.

source
Finch.FinchNotation.isstatefulFunction
isstateful(node)

Returns true if the node is a finch statement, and false if the node is an index expression. Typically, statements specify control flow and expressions describe values.

source
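As a hedged sketch of these helpers (the printed form of the resulting node is omitted):

using Finch.FinchNotation: finch_leaf, isstateful

node = finch_leaf(42)   # wrap a plain Julia constant as a terminal Finch IR node
isstateful(node)        # false: a constant is an index expression, not a statement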
+Finch Notation · Finch.jl

Finch Notation Internals

Finch IR is a tree structure that represents a finch program. Different types of nodes are delineated by a FinchKind enum, for type stability. There are a few useful functions to be aware of:

Finch.FinchNotation.FinchNodeType
FinchNode

A Finch IR node, used to represent an imperative, physical Finch program.

The FinchNode struct represents many different Finch IR nodes. The nodes are differentiated by a FinchNotation.FinchNodeKind enum.

source
Finch.FinchNotation.finch_leafFunction
finch_leaf(x)

Return a terminal finch node wrapper around x. A convenience function to determine whether x should be understood by default as a literal, value, or virtual.

source
Finch.FinchNotation.isstatefulFunction
isstateful(node)

Returns true if the node is a finch statement, and false if the node is an index expression. Typically, statements specify control flow and expressions describe values.

source
diff --git a/dev/reference/internals/looplets_coiteration/index.html b/dev/reference/internals/looplets_coiteration/index.html index 56fc559ed..e97672da8 100644 --- a/dev/reference/internals/looplets_coiteration/index.html +++ b/dev/reference/internals/looplets_coiteration/index.html @@ -1,2 +1,2 @@ -TODO · Finch.jl
+TODO · Finch.jl
diff --git a/dev/reference/internals/tensor_interface/index.html b/dev/reference/internals/tensor_interface/index.html index 52c1fb87a..61b18fca4 100644 --- a/dev/reference/internals/tensor_interface/index.html +++ b/dev/reference/internals/tensor_interface/index.html @@ -1,13 +1,13 @@ -Tensor Interface · Finch.jl

Tensor Interface

The AbstractTensor interface (defined in src/abstract_tensor.jl) is the interface through which Finch understands tensors. It is a high-level interface which allows tensors to interact with the rest of the Finch system. The interface is designed to be extensible, allowing users to define their own tensor types and behaviors. For a minimal example, read the definitions in /ext/SparseArraysExt.jl and in /src/interface/abstractarray.jl. Once the methods that tell Finch how to generate code for an array are defined, the AbstractTensor interface will also use Finch to generate code for several Julia AbstractArray methods, such as getindex, setindex!, map, and reduce. An important note: getindex and setindex! are not a source of truth for Finch tensors. Search the codebase for ::AbstractTensor for a full list of methods that are implemented for AbstractTensor. Note that most AbstractTensor subtypes implement labelled_show and labelled_children methods instead of show(::IO, ::MIME"text/plain", t::AbstractTensor) for pretty-printed display.

Tensor Methods

Finch.declare!Function
declare!(ctx, tns, init)

Declare the read-only virtual tensor tns in the context ctx with a starting value of init and return it. Afterwards the tensor is update-only.

source
Finch.freeze!Function
freeze!(ctx, tns)

Freeze the update-only virtual tensor tns in the context ctx and return it. This may involve trimming any excess overallocated memory. Afterwards, the tensor is read-only.

source
Finch.thaw!Function
thaw!(ctx, tns)

Thaw the read-only virtual tensor tns in the context ctx and return it. Afterwards, the tensor is update-only.

source
Finch.unfurlFunction
unfurl(ctx, tns, ext, proto)

Return an array object (usually a looplet nest) for lowering the outermost dimension of virtual tensor tns. ext is the extent of the looplet. proto is the protocol that should be used for this index, but one doesn't need to unfurl all the indices at once.

source
Finch.instantiateFunction
instantiate(ctx, tns, mode)

Process the tensor tns in the context ctx, just after it has been unfurled, declared, or thawed. This is the earliest opportunity to process tns.

source
Finch.fill_valueFunction
fill_value(arr)

Return the initializer for arr. For SparseArrays, this is 0. Often, the "fill" value becomes the "background" value of a tensor.

source
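For example, a small sketch (the input vector is illustrative):

A = Tensor(SparseList(Element(0.0)), [0.0, 2.0, 0.0, 3.0])
fill_value(A)   # 0.0, the background value that unstored entries take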
Finch.virtual_sizeFunction
virtual_size(ctx, tns)

Return a tuple of the dimensions of tns in the context ctx. This is a function similar in spirit to Base.axes.

source
Finch.virtual_resize!Function
virtual_resize!(ctx, tns, dims...)

Resize tns in the context ctx. This is a function similar in spirit to Base.resize!.

source
Finch.movetoFunction
moveto(arr, device)

If the array is not on the given device, it creates a new version of this array on that device and copies the data into it, according to the device trait.

source
Finch.virtual_movetoFunction
virtual_moveto(device, arr)

If the virtual array is not on the given device, copy the array to that device. This function may modify underlying data arrays, but cannot change the virtual itself. This function is used to move data to the device before a kernel is launched.

source
Finch.labelled_childrenFunction
labelled_children(node)

Return the children of node in a LabelledTree. You may label the children by returning a LabelledTree(key, value), which will be shown as key: value.

source
Finch.is_injectiveFunction
is_injective(ctx, tns)

Returns a vector of booleans, one for each dimension of the tensor, indicating whether the access is injective in that dimension. A dimension is injective if each index in that dimension maps to a different memory space in the underlying array.

source
Finch.is_atomicFunction
is_atomic(ctx, tns)
+Tensor Interface · Finch.jl

Tensor Interface

The AbstractTensor interface (defined in src/abstract_tensor.jl) is the interface through which Finch understands tensors. It is a high-level interface which allows tensors to interact with the rest of the Finch system. The interface is designed to be extensible, allowing users to define their own tensor types and behaviors. For a minimal example, read the definitions in /ext/SparseArraysExt.jl and in /src/interface/abstractarray.jl. Once the methods that tell Finch how to generate code for an array are defined, the AbstractTensor interface will also use Finch to generate code for several Julia AbstractArray methods, such as getindex, setindex!, map, and reduce. An important note: getindex and setindex! are not a source of truth for Finch tensors. Search the codebase for ::AbstractTensor for a full list of methods that are implemented for AbstractTensor. Note that most AbstractTensor subtypes implement labelled_show and labelled_children methods instead of show(::IO, ::MIME"text/plain", t::AbstractTensor) for pretty-printed display.

Tensor Methods

Finch.declare!Function
declare!(ctx, tns, init)

Declare the read-only virtual tensor tns in the context ctx with a starting value of init and return it. Afterwards the tensor is update-only.

source
Finch.freeze!Function
freeze!(ctx, tns)

Freeze the update-only virtual tensor tns in the context ctx and return it. This may involve trimming any excess overallocated memory. Afterwards, the tensor is read-only.

source
Finch.thaw!Function
thaw!(ctx, tns)

Thaw the read-only virtual tensor tns in the context ctx and return it. Afterwards, the tensor is update-only.

source
Finch.unfurlFunction
unfurl(ctx, tns, ext, proto)

Return an array object (usually a looplet nest) for lowering the outermost dimension of virtual tensor tns. ext is the extent of the looplet. proto is the protocol that should be used for this index, but one doesn't need to unfurl all the indices at once.

source
Finch.instantiateFunction
instantiate(ctx, tns, mode)

Process the tensor tns in the context ctx, just after it has been unfurled, declared, or thawed. This is the earliest opportunity to process tns.

source
Finch.fill_valueFunction
fill_value(arr)

Return the initializer for arr. For SparseArrays, this is 0. Often, the "fill" value becomes the "background" value of a tensor.

source
Finch.virtual_sizeFunction
virtual_size(ctx, tns)

Return a tuple of the dimensions of tns in the context ctx. This is a function similar in spirit to Base.axes.

source
Finch.virtual_resize!Function
virtual_resize!(ctx, tns, dims...)

Resize tns in the context ctx. This is a function similar in spirit to Base.resize!.

source
Finch.movetoFunction
moveto(arr, device)

If the array is not on the given device, it creates a new version of this array on that device and copies the data into it, according to the device trait.

source
Finch.virtual_movetoFunction
virtual_moveto(device, arr)

If the virtual array is not on the given device, copy the array to that device. This function may modify underlying data arrays, but cannot change the virtual itself. This function is used to move data to the device before a kernel is launched.

source
Finch.labelled_childrenFunction
labelled_children(node)

Return the children of node in a LabelledTree. You may label the children by returning a LabelledTree(key, value), which will be shown as key: value.

source
Finch.is_injectiveFunction
is_injective(ctx, tns)

Returns a vector of booleans, one for each dimension of the tensor, indicating whether the access is injective in that dimension. A dimension is injective if each index in that dimension maps to a different memory space in the underlying array.

source
Finch.is_atomicFunction
is_atomic(ctx, tns)
 
 Returns a tuple (atomicities, overall) where atomicities is a vector, indicating which indices have an atomic that guards them,
-and overall is a boolean that indicates whether the last level has an atomic guarding it.
source
Finch.is_concurrentFunction
is_concurrent(ctx, tns)
+and overall is a boolean that indicates whether the last level has an atomic guarding it.
source
Finch.is_concurrentFunction
is_concurrent(ctx, tns)
 
 Returns a vector of booleans, one for each dimension of the tensor, indicating
 whether the index can be written to without any execution state. So if a matrix returns [true, false],
 then we can write to A[i, j] and A[i_2, j] without any shared execution state between the two, but
-we can't write to A[i, j] and A[i, j_2] without carrying over execution state.
source

Level Interface

julia> A = [0.0 0.0 4.4; 1.1 0.0 0.0; 2.2 0.0 5.5; 3.3 0.0 0.0]
+we can't write to A[i, j] and A[i, j_2] without carrying over execution state.
source

Level Interface

julia> A = [0.0 0.0 4.4; 1.1 0.0 0.0; 2.2 0.0 5.5; 3.3 0.0 0.0]
 4×3 Matrix{Float64}:
  0.0  0.0  4.4
  1.1  0.0  0.0
@@ -88,4 +88,4 @@
    ├─ [3, 1]: 2.2
    ├─ ⋮
    ├─ [1, 3]: 4.4
-   └─ [3, 3]: 5.5

COO Format Index Tree

The COO format is compact and straightforward, but doesn't support random access. For random access, one should use the SparseDict or SparseByteMap format. A full listing of supported formats follows a rough description of the internals that all levels share, relating to types and storage.
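For example, a hedged sketch of random access with a dictionary-backed level (the input matrix is illustrative):

A = Tensor(Dense(SparseDict(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
A[1, 3]   # 20.0, a stored entry
A[2, 2]   # 0.0, an unstored entry falls back to the fill value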

Types and Storage of Level

All levels have a postype, typically denoted as Tp in the constructors, used for internal pointer types but accessible by the function:

Finch.postypeFunction
postype(lvl)

Return a position type with the same flavor as those used to store the positions of the fibers contained in lvl. The name position descends from the pos or position or pointer arrays found in many definitions of CSR or CSC. In Finch, positions should be data used to access either a subfiber or some other similar auxiliary data. Thus, we often end up iterating over positions.

source

Additionally, many levels have a Vp or Vi in their constructors; these stand for vectors with element type Tp or Ti. More generally, levels are parameterized by the types that they use for storage. By default, all levels use Vector, but a user could change any or all of the storage types of a tensor so that the tensor would be stored on a GPU or CPU or some combination thereof, or even just via a vector with a different allocation mechanism. The storage type should behave like AbstractArray and needs to implement the usual abstract array functions and Base.resize!. See the tests for an example.

When levels are constructed in short form as in the examples above, the index, position, and storage types are inferred from the level below. All the levels at the bottom of a Tensor (Element, Pattern, Repeater) specify an index type, position type, and storage type even if they don't need them. These are used by levels that take these as parameters.

Level Methods

Tensor levels are implemented using the following methods:

Finch.declare_level!Function
declare_level!(ctx, lvl, pos, init)

Initialize and thaw all fibers within lvl, assuming positions 1:pos were previously assembled and frozen. The resulting level has no assembled positions.

source
Finch.assemble_level!Function
assemble_level!(ctx, lvl, pos, new_pos)

Assemble positions pos+1:new_pos in lvl, assuming positions 1:pos were previously assembled.

source
Finch.reassemble_level!Function
reassemble_level!(lvl, ctx, pos_start, pos_end)

Set the previously assembled positions from pos_start to pos_end to level_fill_value(lvl). Not available on all level types, as this presumes updating.

source
Finch.freeze_level!Function
freeze_level!(ctx, lvl, pos, init)

Given the last reference position, pos, freeze all fibers within lvl assuming that we have potentially updated 1:pos.

source
Finch.level_ndimsFunction
level_ndims(::Type{Lvl})

The result of level_ndims(Lvl) defines ndims for all subfibers in a level of type Lvl.

source
Finch.level_sizeFunction
level_size(lvl)

The result of level_size(lvl) defines the size of all subfibers in the level lvl.

source
Finch.level_axesFunction
level_axes(lvl)

The result of level_axes(lvl) defines the axes of all subfibers in the level lvl.

source
Finch.level_eltypeFunction
level_eltype(::Type{Lvl})

The result of level_eltype(Lvl) defines eltype for all subfibers in a level of type Lvl.

source

Combinator Interface

Tensor Combinators allow us to modify the behavior of tensors. The AbstractCombinator interface (defined in src/tensors/abstract_combinator.jl) is the interface through which Finch understands tensor combinators. The interface requires the combinator to overload all of the tensor methods, as well as the methods used by Looplets when lowering ranges, etc. For a minimal example, read the definitions in /src/tensors/combinators/offset.jl.

+ └─ [3, 3]: 5.5

COO Format Index Tree

The COO format is compact and straightforward, but doesn't support random access. For random access, one should use the SparseDict or SparseByteMap format. A full listing of supported formats follows a rough description of the internals that all levels share, relating to types and storage.

Types and Storage of Level

All levels have a postype, typically denoted as Tp in the constructors, used for internal pointer types but accessible by the function:

Finch.postypeFunction
postype(lvl)

Return a position type with the same flavor as those used to store the positions of the fibers contained in lvl. The name position descends from the pos or position or pointer arrays found in many definitions of CSR or CSC. In Finch, positions should be data used to access either a subfiber or some other similar auxiliary data. Thus, we often end up iterating over positions.

source

Additionally, many levels have a Vp or Vi in their constructors; these stand for vectors with element type Tp or Ti. More generally, levels are parameterized by the types that they use for storage. By default, all levels use Vector, but a user could change any or all of the storage types of a tensor so that the tensor would be stored on a GPU or CPU or some combination thereof, or even just via a vector with a different allocation mechanism. The storage type should behave like AbstractArray and needs to implement the usual abstract array functions and Base.resize!. See the tests for an example.

When levels are constructed in short form as in the examples above, the index, position, and storage types are inferred from the level below. All the levels at the bottom of a Tensor (Element, Pattern, Repeater) specify an index type, position type, and storage type even if they don't need them. These are used by levels that take these as parameters.

Level Methods

Tensor levels are implemented using the following methods:

Finch.declare_level!Function
declare_level!(ctx, lvl, pos, init)

Initialize and thaw all fibers within lvl, assuming positions 1:pos were previously assembled and frozen. The resulting level has no assembled positions.

source
Finch.assemble_level!Function
assemble_level!(ctx, lvl, pos, new_pos)

Assemble positions pos+1:new_pos in lvl, assuming positions 1:pos were previously assembled.

source
Finch.reassemble_level!Function
reassemble_level!(lvl, ctx, pos_start, pos_end)

Set the previously assembled positions from pos_start to pos_end to level_fill_value(lvl). Not available on all level types, as this presumes updating.

source
Finch.freeze_level!Function
freeze_level!(ctx, lvl, pos, init)

Given the last reference position, pos, freeze all fibers within lvl assuming that we have potentially updated 1:pos.

source
Finch.level_ndimsFunction
level_ndims(::Type{Lvl})

The result of level_ndims(Lvl) defines ndims for all subfibers in a level of type Lvl.

source
Finch.level_sizeFunction
level_size(lvl)

The result of level_size(lvl) defines the size of all subfibers in the level lvl.

source
Finch.level_axesFunction
level_axes(lvl)

The result of level_axes(lvl) defines the axes of all subfibers in the level lvl.

source
Finch.level_eltypeFunction
level_eltype(::Type{Lvl})

The result of level_eltype(Lvl) defines eltype for all subfibers in a level of type Lvl.

source
Finch.level_fill_valueFunction
level_fill_value(::Type{Lvl})

The result of level_fill_value(Lvl) defines fill_value for all subfibers in a level of type Lvl.

source

Combinator Interface

Tensor Combinators allow us to modify the behavior of tensors. The AbstractCombinator interface (defined in src/tensors/abstract_combinator.jl) is the interface through which Finch understands tensor combinators. The interface requires the combinator to overload all of the tensor methods, as well as the methods used by Looplets when lowering ranges, etc. For a minimal example, read the definitions in /src/tensors/combinators/offset.jl.

diff --git a/dev/reference/internals/virtualization/index.html b/dev/reference/internals/virtualization/index.html index 1819ed52b..97293dc1a 100644 --- a/dev/reference/internals/virtualization/index.html +++ b/dev/reference/internals/virtualization/index.html @@ -156,7 +156,7 @@ s.val = s_val result end -

The "virtual" IR Node

Users can also create their own virtual nodes to represent their custom types. While most calls to virtualize result in a Finch IR Node, some objects, such as tensors and dimensions, are virtualized to a virtual object, which holds the custom virtual type. These types may contain constants and other virtuals, as well as reference variables in the scope of the executing context. Any aspect of virtuals visible to Finch should be considered immutable, but virtuals may reference mutable variables in the scope of the executing context.

Finch.virtualizeFunction
virtualize(ctx, ex, T, [tag])

Return the virtual program corresponding to the Julia expression ex of type T in the JuliaContext ctx. Implementers may support the optional tag argument, which is used to name the resulting virtual variable.

source
Finch.FinchNotation.virtualConstant
virtual(val)

Finch AST expression for an object val which has special meaning to the compiler. This type is typically used for tensors, as it allows users to specify the tensor's shape and data type.

source

Virtual Methods

Many methods have analogues we can call on the virtual version of the object. For example, we can call size on an array, and virtual_size on a virtual array. The virtual methods are used to generate code, so if they are pure they may return an expression which computes the results, and if they have side effects they may accept a context argument into which they can emit their side-effecting code.

In addition to the special compiler methods which are prefixed virtual_, there is also a function virtual_call, which is used to evaluate function calls on Finch IR when it would result in a virtual object. The behavior should mirror the concrete behavior of the corresponding function.

Finch.virtual_callFunction
virtual_call(ctx, f, a...)

Given the virtual arguments a..., and a literal function f, return a virtual object representing the result of the function call. If the function is not foldable, return nothing. This function is used so that we can call e.g. tensor constructors in finch code.

source

Working with Finch IR

Calling print on a finch program or program instance will print the structure of the program as one would call constructors to build it. For example,

julia> prgm_inst = Finch.@finch_program_instance for i = _
+

The "virtual" IR Node

Users can also create their own virtual nodes to represent their custom types. While most calls to virtualize result in a Finch IR Node, some objects, such as tensors and dimensions, are virtualized to a virtual object, which holds the custom virtual type. These types may contain constants and other virtuals, as well as reference variables in the scope of the executing context. Any aspect of virtuals visible to Finch should be considered immutable, but virtuals may reference mutable variables in the scope of the executing context.

Finch.virtualizeFunction
virtualize(ctx, ex, T, [tag])

Return the virtual program corresponding to the Julia expression ex of type T in the JuliaContext ctx. Implementers may support the optional tag argument, which is used to name the resulting virtual variable.

source
Finch.FinchNotation.virtualConstant
virtual(val)

Finch AST expression for an object val which has special meaning to the compiler. This type is typically used for tensors, as it allows users to specify the tensor's shape and data type.

source

Virtual Methods

Many methods have analogues we can call on the virtual version of the object. For example, we can call size on an array, and virtual_size on a virtual array. The virtual methods are used to generate code, so if they are pure they may return an expression which computes the results, and if they have side effects they may accept a context argument into which they can emit their side-effecting code.

In addition to the special compiler methods which are prefixed virtual_, there is also a function virtual_call, which is used to evaluate function calls on Finch IR when it would result in a virtual object. The behavior should mirror the concrete behavior of the corresponding function.

Finch.virtual_callFunction
virtual_call(ctx, f, a...)

Given the virtual arguments a..., and a literal function f, return a virtual object representing the result of the function call. If the function is not foldable, return nothing. This function is used so that we can call e.g. tensor constructors in finch code.

source

Working with Finch IR

Calling print on a finch program or program instance will print the structure of the program as one would call constructors to build it. For example,

julia> prgm_inst = Finch.@finch_program_instance for i = _
             s[] += A[i]
         end;
 
@@ -191,4 +191,4 @@
 
 julia> idx
 Finch program: i
-
+ diff --git a/dev/reference/listing/index.html b/dev/reference/listing/index.html index 55fcb0d1e..1f1719130 100644 --- a/dev/reference/listing/index.html +++ b/dev/reference/listing/index.html @@ -1,10 +1,10 @@ -Documentation Listing · Finch.jl

Documentation Listing

Finch.bandmaskConstant
bandmask

A mask for a banded tensor, bandmask[i, j, k] = j <= i <= k. Note that this specializes each column for the cases where i < j, j <= i <= k, and k < i.

source
Finch.diagmaskConstant
diagmask

A mask for a diagonal tensor, diagmask[i, j] = i == j. Note that this specializes each column for the cases where i < j, i == j, and i > j.

source
Finch.lotrimaskConstant
lotrimask

A mask for a lower triangular tensor, lotrimask[i, j] = i >= j. Note that this specializes each column for the cases where i < j and i >= j.

source
Finch.uptrimaskConstant
uptrimask

A mask for an upper triangular tensor, uptrimask[i, j] = i <= j. Note that this specializes each column for the cases where i <= j and i > j.

source
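For example, a hedged sketch that sums the upper triangle of a dense matrix using uptrimask (the matrix here is illustrative):

A = Tensor(Dense(Dense(Element(0.0))), rand(5, 5))
s = Scalar(0.0)
@finch begin
    s .= 0
    for j = _, i = _
        if uptrimask[i, j]
            s[] += A[i, j]
        end
    end
end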
Core.ArrayMethod
Array(arr::Union{Tensor, SwizzleArray})

Construct an array from a tensor or swizzle. May reuse memory, will usually densify the tensor.

source
Finch.AtomicElementLevelType
AtomicElementLevel{Vf, [Tv=typeof(Vf)], [Tp=Int], [Val]}()

Like an ElementLevel, but updates to the level are performed atomically.

julia> Tensor(Dense(AtomicElement(0.0)), [1, 2, 3])
+Documentation Listing · Finch.jl

Documentation Listing

Finch.bandmaskConstant
bandmask

A mask for a banded tensor, bandmask[i, j, k] = j <= i <= k. Note that this specializes each column for the cases where i < j, j <= i <= k, and k < i.

source
Finch.diagmaskConstant
diagmask

A mask for a diagonal tensor, diagmask[i, j] = i == j. Note that this specializes each column for the cases where i < j, i == j, and i > j.

source
Finch.lotrimaskConstant
lotrimask

A mask for a lower triangular tensor, lotrimask[i, j] = i >= j. Note that this specializes each column for the cases where i < j and i >= j.

source
Finch.uptrimaskConstant
uptrimask

A mask for an upper triangular tensor, uptrimask[i, j] = i <= j. Note that this specializes each column for the cases where i <= j and i > j.

source
Core.ArrayMethod
Array(arr::Union{Tensor, SwizzleArray})

Construct an array from a tensor or swizzle. May reuse memory, will usually densify the tensor.

source
Finch.AtomicElementLevelType
AtomicElementLevel{Vf, [Tv=typeof(Vf)], [Tp=Int], [Val]}()

Like an ElementLevel, but updates to the level are performed atomically.

julia> Tensor(Dense(AtomicElement(0.0)), [1, 2, 3])
 3-Tensor
 └─ Dense [1:3]
    ├─ [1]: 1.0
    ├─ [2]: 2.0
-   └─ [3]: 3.0
source
Finch.CPUType
CPU(n)

A device that represents a CPU with n threads.

source
Finch.DefaultLogicOptimizerType
DefaultLogicOptimizer(ctx)

The default optimizer for finch logic programs. Optimizes to a structure suitable for the LogicCompiler or LogicInterpreter, then calls ctx on the resulting program.

source
Finch.DenseDataType
DenseData(lvl)

Represents a tensor A where each A[:, ..., :, i] is represented by lvl.

source
Finch.DenseLevelType
DenseLevel{[Ti=Int]}(lvl, [dim])

A subfiber of a dense level is an array which stores every slice A[:, ..., :, i] as a distinct subfiber in lvl. Optionally, dim is the size of the last dimension. Ti is the type of the indices used to index the level.

julia> ndims(Tensor(Dense(Element(0.0))))
+   └─ [3]: 3.0
source
Finch.CPUType
CPU(n)

A device that represents a CPU with n threads.

source
Finch.DefaultLogicOptimizerType
DefaultLogicOptimizer(ctx)

The default optimizer for finch logic programs. Optimizes to a structure suitable for the LogicCompiler or LogicInterpreter, then calls ctx on the resulting program.

source
Finch.DenseDataType
DenseData(lvl)

Represents a tensor A where each A[:, ..., :, i] is represented by lvl.

source
Finch.DenseLevelType
DenseLevel{[Ti=Int]}(lvl, [dim])

A subfiber of a dense level is an array which stores every slice A[:, ..., :, i] as a distinct subfiber in lvl. Optionally, dim is the size of the last dimension. Ti is the type of the indices used to index the level.

julia> ndims(Tensor(Dense(Element(0.0))))
 1
 
 julia> ndims(Tensor(Dense(Dense(Element(0.0)))))
@@ -18,12 +18,12 @@
    │  └─ [2]: 3.0
    └─ [:, 2]: Dense [1:2]
       ├─ [1]: 2.0
-      └─ [2]: 4.0
source
Finch.ElementDataType
ElementData(fill_value, eltype)

Represents a scalar element of type eltype and fill value fill_value.

source
Finch.ElementLevelType
ElementLevel{Vf, [Tv=typeof(Vf)], [Tp=Int], [Val]}()

A subfiber of an element level is a scalar of type Tv, initialized to Vf. Vf may optionally be given as the first argument.

The data is stored in a vector of type Val with eltype(Val) = Tv. The type Tp is the index type used to access Val.

julia> Tensor(Dense(Element(0.0)), [1, 2, 3])
+      └─ [2]: 4.0
source
Finch.ElementDataType
ElementData(fill_value, eltype)

Represents a scalar element of type eltype and fill value fill_value.

source
Finch.ElementLevelType
ElementLevel{Vf, [Tv=typeof(Vf)], [Tp=Int], [Val]}()

A subfiber of an element level is a scalar of type Tv, initialized to Vf. Vf may optionally be given as the first argument.

The data is stored in a vector of type Val with eltype(Val) = Tv. The type Tp is the index type used to access Val.

julia> Tensor(Dense(Element(0.0)), [1, 2, 3])
 3-Tensor
 └─ Dense [1:3]
    ├─ [1]: 1.0
    ├─ [2]: 2.0
-   └─ [3]: 3.0
source
Finch.ExtrudeDataType
ExtrudeData(lvl)

Represents a tensor A where A[:, ..., :, 1] is the only slice, and is represented by lvl.

source
Finch.HollowDataType
HollowData(lvl)

Represents a tensor which is represented by lvl but is sometimes entirely fill_value(lvl).

source
Finch.InfinitesimalType
Infinitesimal(s)

The Infinitesimal type represents an infinitesimal number. The sign field is used to represent positive, negative, or zero in this number system.

julia> tiny()
+0

julia> positive_tiny()
+ϵ

julia> negative_tiny()
-ϵ

julia> positive_tiny() + negative_tiny()
+0

julia> positive_tiny() * 2
+ϵ

julia> positive_tiny() * negative_tiny()
-ϵ

source
Finch.JuliaContextType
JuliaContext

A context for compiling Julia code, managing side effects, parallelism, and variable names in the generated code of the executing environment.

source
Finch.LimitType
Limit{T}(x, s)

The Limit type represents endpoints of closed and open intervals. The val field is the value of the endpoint. The sign field is used to represent the openness/closedness of the interval endpoint, using an Infinitesimal.

julia> limit(1.0)
1.0+0

julia> plus_eps(1.0)
1.0+ϵ

julia> minus_eps(1.0)
1.0-ϵ

julia> plus_eps(1.0) + minus_eps(1.0)
2.0+0.0

julia> plus_eps(1.0) * 2
2.0+2.0ϵ

julia> plus_eps(1.0) * minus_eps(1.0)
1.0-1.0ϵ

julia> plus_eps(-1.0) * minus_eps(1.0)
-1.0+2.0ϵ

julia> 1.0 < plus_eps(1.0)
true

julia> 1.0 < minus_eps(1.0)
false

source
Finch.LogicCompilerType
LogicCompiler

The LogicCompiler is a simple compiler for finch logic programs. It is only capable of executing programs of the form:

REORDER := reorder(relabel(ALIAS, FIELD...), FIELD...)
ACCESS := reorder(relabel(ALIAS, idxs1::FIELD...), idxs2::FIELD...) where issubsequence(idxs1, idxs2)
POINTWISE := ACCESS | mapjoin(IMMEDIATE, POINTWISE...) | reorder(IMMEDIATE, FIELD...) | IMMEDIATE
MAPREDUCE := POINTWISE | aggregate(IMMEDIATE, IMMEDIATE, POINTWISE, FIELD...)
TABLE := table(IMMEDIATE | DEFERRED, FIELD...)
COMPUTEQUERY := query(ALIAS, reformat(IMMEDIATE, arg::(REORDER | MAPREDUCE)))
INPUTQUERY := query(ALIAS, TABLE)
STEP := COMPUTEQUERY | INPUTQUERY | produces(ALIAS...)
ROOT := PLAN(STEP...)

source
Finch.LogicExecutorType
LogicExecutor(ctx, verbose=false)

Executes a logic program by compiling it with the given compiler ctx. Compiled codes are cached, and are only compiled once for each program with the same structure.

source
Finch.LogicInterpreterType
LogicInterpreter(scope = Dict(), verbose = false, mode = :fast)

The LogicInterpreter is a simple interpreter for finch logic programs. The interpreter is only capable of executing programs of the form:

REORDER := reorder(relabel(ALIAS, FIELD...), FIELD...)
ACCESS := reorder(relabel(ALIAS, idxs1::FIELD...), idxs2::FIELD...) where issubsequence(idxs1, idxs2)
POINTWISE := ACCESS | mapjoin(IMMEDIATE, POINTWISE...) | reorder(IMMEDIATE, FIELD...) | IMMEDIATE
MAPREDUCE := POINTWISE | aggregate(IMMEDIATE, IMMEDIATE, POINTWISE, FIELD...)
TABLE := table(IMMEDIATE, FIELD...)
COMPUTEQUERY := query(ALIAS, reformat(IMMEDIATE, arg::(REORDER | MAPREDUCE)))
INPUTQUERY := query(ALIAS, TABLE)
STEP := COMPUTEQUERY | INPUTQUERY | produces(ALIAS...)
ROOT := PLAN(STEP...)

source
Finch.MutexLevelType
MutexLevel{Val, Lvl}()

A MutexLevel protects the level directly below it with atomics.

Each position in the level below the Mutex level is protected by a lock.

julia> Tensor(Dense(Mutex(Element(0.0))), [1, 2, 3])
+   └─ [3]: 3.0
source
Finch.ExtrudeDataType
ExtrudeData(lvl)

Represents a tensor A where A[:, ..., :, 1] is the only slice, and is represented by lvl.

source
Finch.HollowDataType
HollowData(lvl)

Represents a tensor which is represented by lvl but is sometimes entirely fill_value(lvl).

source
Finch.InfinitesimalType
Infinitesimal(s)

The Infinitesimal type represents an infinitesimal number. The sign field is used to represent positive, negative, or zero in this number system.

julia> tiny()
+0

julia> positive_tiny()
+ϵ

julia> negative_tiny()
-ϵ

julia> positive_tiny() + negative_tiny()
+0

julia> positive_tiny() * 2
+ϵ

julia> positive_tiny() * negative_tiny()
-ϵ

source
Finch.JuliaContextType
JuliaContext

A context for compiling Julia code, managing side effects, parallelism, and variable names in the generated code of the executing environment.

source
Finch.LimitType
Limit{T}(x, s)

The Limit type represents endpoints of closed and open intervals. The val field is the value of the endpoint. The sign field is used to represent the openness/closedness of the interval endpoint, using an Infinitesimal.

julia> limit(1.0)
1.0+0

julia> plus_eps(1.0)
1.0+ϵ

julia> minus_eps(1.0)
1.0-ϵ

julia> plus_eps(1.0) + minus_eps(1.0)
2.0+0.0

julia> plus_eps(1.0) * 2
2.0+2.0ϵ

julia> plus_eps(1.0) * minus_eps(1.0)
1.0-1.0ϵ

julia> plus_eps(-1.0) * minus_eps(1.0)
-1.0+2.0ϵ

julia> 1.0 < plus_eps(1.0)
true

julia> 1.0 < minus_eps(1.0)
false

source
Finch.LogicCompilerType
LogicCompiler

The LogicCompiler is a simple compiler for finch logic programs. It is only capable of executing programs of the form:

REORDER := reorder(relabel(ALIAS, FIELD...), FIELD...)
ACCESS := reorder(relabel(ALIAS, idxs1::FIELD...), idxs2::FIELD...) where issubsequence(idxs1, idxs2)
POINTWISE := ACCESS | mapjoin(IMMEDIATE, POINTWISE...) | reorder(IMMEDIATE, FIELD...) | IMMEDIATE
MAPREDUCE := POINTWISE | aggregate(IMMEDIATE, IMMEDIATE, POINTWISE, FIELD...)
TABLE := table(IMMEDIATE | DEFERRED, FIELD...)
COMPUTEQUERY := query(ALIAS, reformat(IMMEDIATE, arg::(REORDER | MAPREDUCE)))
INPUTQUERY := query(ALIAS, TABLE)
STEP := COMPUTEQUERY | INPUTQUERY | produces(ALIAS...)
ROOT := PLAN(STEP...)

source
Finch.LogicExecutorType
LogicExecutor(ctx, verbose=false)

Executes a logic program by compiling it with the given compiler ctx. Compiled codes are cached, and are only compiled once for each program with the same structure.

source
Finch.LogicInterpreterType
LogicInterpreter(scope = Dict(), verbose = false, mode = :fast)

The LogicInterpreter is a simple interpreter for finch logic programs. The interpreter is only capable of executing programs of the form:

REORDER := reorder(relabel(ALIAS, FIELD...), FIELD...)
ACCESS := reorder(relabel(ALIAS, idxs1::FIELD...), idxs2::FIELD...) where issubsequence(idxs1, idxs2)
POINTWISE := ACCESS | mapjoin(IMMEDIATE, POINTWISE...) | reorder(IMMEDIATE, FIELD...) | IMMEDIATE
MAPREDUCE := POINTWISE | aggregate(IMMEDIATE, IMMEDIATE, POINTWISE, FIELD...)
TABLE := table(IMMEDIATE, FIELD...)
COMPUTEQUERY := query(ALIAS, reformat(IMMEDIATE, arg::(REORDER | MAPREDUCE)))
INPUTQUERY := query(ALIAS, TABLE)
STEP := COMPUTEQUERY | INPUTQUERY | produces(ALIAS...)
ROOT := PLAN(STEP...)

source
Finch.MutexLevelType
MutexLevel{Val, Lvl}()

A MutexLevel protects the level directly below it with atomics.

Each position in the level below the Mutex level is protected by a lock.

julia> Tensor(Dense(Mutex(Element(0.0))), [1, 2, 3])
 3-Tensor
 └─ Dense [1:3]
    ├─ [1]: Mutex ->
@@ -31,12 +31,12 @@
    ├─ [2]: Mutex ->
    │  └─ 2.0
    └─ [3]: Mutex ->
-      └─ 3.0
source
Finch.NamespaceType
Namespace

A namespace for managing variable names and aesthetic fresh variable generation.

source
Finch.PatternLevelType
PatternLevel{[Tp=Int]}()

A subfiber of a pattern level is the Boolean value true, but its fill_value is false. PatternLevels are used to create tensors that represent which values are stored by other fibers. See pattern! for usage examples.

julia> Tensor(Dense(Pattern()), 3)
+      └─ 3.0
source
Finch.NamespaceType
Namespace

A namespace for managing variable names and aesthetic fresh variable generation.

source
Finch.PatternLevelType
PatternLevel{[Tp=Int]}()

A subfiber of a pattern level is the Boolean value true, but its fill_value is false. PatternLevels are used to create tensors that represent which values are stored by other fibers. See pattern! for usage examples.

julia> Tensor(Dense(Pattern()), 3)
 3-Tensor
 └─ Dense [1:3]
    ├─ [1]: true
    ├─ [2]: true
-   └─ [3]: true
source
Finch.RepeatDataType
RepeatData(lvl)

Represents a tensor A where A[:, ..., :, i] is sometimes entirely fill_value(lvl) and is sometimes represented by repeated runs of lvl.

source
Finch.RunListLevelType
RunListLevel{[Ti=Int], [Ptr, Right]}(lvl, [dim], [merge = true])

The RunListLevel represents runs of equivalent slices A[:, ..., :, i]. A sorted list is used to record the right endpoint of each run. Optionally, dim is the size of the last dimension.

Ti is the type of the last tensor index, and Tp is the type used for positions in the level. The types Ptr and Right are the types of the arrays used to store positions and endpoints.

The merge keyword argument is used to specify whether the level should merge duplicate consecutive runs.

julia> Tensor(Dense(RunListLevel(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
+   └─ [3]: true
source
Finch.RepeatDataType
RepeatData(lvl)

Represents a tensor A where A[:, ..., :, i] is sometimes entirely fill_value(lvl) and is sometimes represented by repeated runs of lvl.

source
Finch.RunListLevelType
RunListLevel{[Ti=Int], [Ptr, Right]}(lvl, [dim], [merge = true])

The RunListLevel represents runs of equivalent slices A[:, ..., :, i]. A sorted list is used to record the right endpoint of each run. Optionally, dim is the size of the last dimension.

Ti is the type of the last tensor index, and Tp is the type used for positions in the level. The types Ptr and Right are the types of the arrays used to store positions and endpoints.

The merge keyword argument is used to specify whether the level should merge duplicate consecutive runs.

julia> Tensor(Dense(RunListLevel(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
 3×3-Tensor
 └─ Dense [:,1:3]
    ├─ [:, 1]: RunList (0.0) [1:3]
@@ -48,7 +48,7 @@
    └─ [:, 3]: RunList (0.0) [1:3]
       ├─ [1:1]: 20.0
       ├─ [2:2]: 0.0
-      └─ [3:3]: 40.0
source
Finch.SeparateLevelType
SeparateLevel{Lvl, [Val]}()

A subfiber of a Separate level is a separate tensor of type Lvl, in its own memory space.

Each sublevel is stored in a vector of type Val with eltype(Val) = Lvl.

julia> Tensor(Dense(Separate(Element(0.0))), [1, 2, 3])
+      └─ [3:3]: 40.0
source
Finch.SeparateLevelType
SeparateLevel{Lvl, [Val]}()

A subfiber of a Separate level is a separate tensor of type Lvl, in its own memory space.

Each sublevel is stored in a vector of type Val with eltype(Val) = Lvl.

julia> Tensor(Dense(Separate(Element(0.0))), [1, 2, 3])
 3-Tensor
 └─ Dense [1:3]
    ├─ [1]: Pointer ->
@@ -56,7 +56,7 @@
    ├─ [2]: Pointer ->
    │  └─ 2.0
    └─ [3]: Pointer ->
-      └─ 3.0
source
Finch.SparseBandLevelType

SparseBandLevel{[Ti=Int], [Ptr, Idx, Ofs]}(lvl, [dim])

Like the SparseBlockListLevel, but stores only a single block, and fills in zeros.

```jldoctest julia> Tensor(Dense(SparseBand(Element(0.0))), [10 0 20; 30 40 0; 0 0 50]) Dense [:,1:3] ├─[:,1]: SparseList (0.0) [1:3] │ ├─[1]: 10.0 │ ├─[2]: 30.0 ├─[:,2]: SparseList (0.0) [1:3] ├─[:,3]: SparseList (0.0) [1:3] │ ├─[1]: 20.0 │ ├─[3]: 40.0

source
Finch.SparseBlockListLevelType

SparseBlockListLevel{[Ti=Int], [Ptr, Idx, Ofs]}(lvl, [dim])

Like the SparseListLevel, but contiguous subfibers are stored together in blocks.

```jldoctest julia> Tensor(Dense(SparseBlockList(Element(0.0))), [10 0 20; 30 0 0; 0 0 40]) Dense [:,1:3] ├─[:,1]: SparseList (0.0) [1:3] │ ├─[1]: 10.0 │ ├─[2]: 30.0 ├─[:,2]: SparseList (0.0) [1:3] ├─[:,3]: SparseList (0.0) [1:3] │ ├─[1]: 20.0 │ ├─[3]: 40.0

julia> Tensor(SparseBlockList(SparseBlockList(Element(0.0))), [10 0 20; 30 0 0; 0 0 40]) SparseList (0.0) [:,1:3] ├─[:,1]: SparseList (0.0) [1:3] │ ├─[1]: 10.0 │ ├─[2]: 30.0 ├─[:,3]: SparseList (0.0) [1:3] │ ├─[1]: 20.0 │ ├─[3]: 40.0

source
Finch.SparseByteMapLevelType
SparseByteMapLevel{[Ti=Int], [Ptr, Tbl]}(lvl, [dims])

Like the SparseListLevel, but a dense bitmap is used to encode which slices are stored. This allows the ByteMap level to support random access.

Ti is the type of the last tensor index, and Tp is the type used for positions in the level.

julia> Tensor(Dense(SparseByteMap(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
+      └─ 3.0
source
Finch.SparseBandLevelType

SparseBandLevel{[Ti=Int], [Ptr, Idx, Ofs]}(lvl, [dim])

Like the SparseBlockListLevel, but stores only a single block, and fills in zeros.

```jldoctest julia> Tensor(Dense(SparseBand(Element(0.0))), [10 0 20; 30 40 0; 0 0 50]) Dense [:,1:3] ├─[:,1]: SparseList (0.0) [1:3] │ ├─[1]: 10.0 │ ├─[2]: 30.0 ├─[:,2]: SparseList (0.0) [1:3] ├─[:,3]: SparseList (0.0) [1:3] │ ├─[1]: 20.0 │ ├─[3]: 40.0

source
Finch.SparseBlockListLevelType

SparseBlockListLevel{[Ti=Int], [Ptr, Idx, Ofs]}(lvl, [dim])

Like the SparseListLevel, but contiguous subfibers are stored together in blocks.

```jldoctest julia> Tensor(Dense(SparseBlockList(Element(0.0))), [10 0 20; 30 0 0; 0 0 40]) Dense [:,1:3] ├─[:,1]: SparseList (0.0) [1:3] │ ├─[1]: 10.0 │ ├─[2]: 30.0 ├─[:,2]: SparseList (0.0) [1:3] ├─[:,3]: SparseList (0.0) [1:3] │ ├─[1]: 20.0 │ ├─[3]: 40.0

julia> Tensor(SparseBlockList(SparseBlockList(Element(0.0))), [10 0 20; 30 0 0; 0 0 40]) SparseList (0.0) [:,1:3] ├─[:,1]: SparseList (0.0) [1:3] │ ├─[1]: 10.0 │ ├─[2]: 30.0 ├─[:,3]: SparseList (0.0) [1:3] │ ├─[1]: 20.0 │ ├─[3]: 40.0

source
Finch.SparseByteMapLevelType
SparseByteMapLevel{[Ti=Int], [Ptr, Tbl]}(lvl, [dims])

Like the SparseListLevel, but a dense bitmap is used to encode which slices are stored. This allows the ByteMap level to support random access.

Ti is the type of the last tensor index, and Tp is the type used for positions in the level.

julia> Tensor(Dense(SparseByteMap(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
 3×3-Tensor
 └─ Dense [:,1:3]
    ├─ [:, 1]: SparseByteMap (0.0) [1:3]
@@ -73,7 +73,7 @@
    ├─ [:, 1]: SparseByteMap (0.0) [1:3]
    │  ├─ [1]: 10.0
    │  └─ [2]: 30.0
-   └─ [:, 3]: SparseByteMap (0.0) [1:3]
source
Finch.SparseCOOLevelType
SparseCOOLevel{[N], [TI=Tuple{Int...}], [Ptr, Tbl]}(lvl, [dims])

A subfiber of a sparse level does not need to represent slices which are entirely fill_value. Instead, only potentially non-fill slices are stored as subfibers in lvl. The sparse coo level corresponds to N indices in the subfiber, so fibers in the sublevel are the slices A[:, ..., :, i_1, ..., i_n]. A set of N lists (one for each index) are used to record which slices are stored. The coordinates (sets of N indices) are sorted in column major order. Optionally, dims are the sizes of the last dimensions.

TI is the type of the last N tensor indices, and Tp is the type used for positions in the level.

The type Tbl is an NTuple type where each entry k is a subtype AbstractVector{TI[k]}.

The type Ptr is the type for the pointer array.

julia> Tensor(Dense(SparseCOO{1}(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
 3×3-Tensor
 └─ Dense [:,1:3]
    ├─ [:, 1]: SparseCOO{1} (0.0) [1:3]
    ├─ [1, 1]: 10.0
    ├─ [2, 1]: 30.0
    ├─ [1, 3]: 20.0
    └─ [3, 3]: 40.0
source
Finch.SparseDataType
SparseData(lvl)

Represents a tensor A where A[:, ..., :, i] is sometimes entirely fill_value(lvl) and is sometimes represented by lvl.

source
Finch.SparseDictLevelType
SparseDictLevel{[Ti=Int], [Tp=Int], [Ptr, Idx, Val, Tbl, Pool=Dict]}(lvl, [dim])

A subfiber of a sparse level does not need to represent slices A[:, ..., :, i] which are entirely fill_value. Instead, only potentially non-fill slices are stored as subfibers in lvl. A datastructure specified by Tbl is used to record which slices are stored. Optionally, dim is the size of the last dimension.

Ti is the type of the last fiber index, and Tp is the type used for positions in the level. The types Ptr and Idx are the types of the arrays used to store positions and indices.

julia> Tensor(Dense(SparseDict(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
 3×3-Tensor
 └─ Dense [:,1:3]
    ├─ [:, 1]: SparseDict (0.0) [1:3]
    └─ [:, 3]: SparseDict (0.0) [1:3]
       ├─ [1]: 20.0
       └─ [3]: 40.0
source
Finch.SparseIntervalLevelType
SparseIntervalLevel{[Ti=Int], [Ptr, Left, Right]}(lvl, [dim])

The SparseIntervalLevel represents runs of equivalent slices A[:, ..., :, i] which are not entirely fill_value. The main difference compared to the SparseRunList level is that the SparseInterval level stores only a single non-fill run. It emits an error if the program tries to write multiple (>=2) runs into SparseInterval.

Ti is the type of the last tensor index. The types Ptr, Left, and Right are the types of the arrays used to store positions and endpoints.

julia> Tensor(SparseInterval(Element(0)), [0, 10, 0])
 3-Tensor
 └─ SparseInterval (0) [1:3]
    └─ [2:2]: 10
 10-Tensor
 └─ SparseInterval (0) [1:10]
    └─ [3:6]: 1
source
Finch.SparseListLevelType
SparseListLevel{[Ti=Int], [Ptr, Idx]}(lvl, [dim])

A subfiber of a sparse level does not need to represent slices A[:, ..., :, i] which are entirely fill_value. Instead, only potentially non-fill slices are stored as subfibers in lvl. A sorted list is used to record which slices are stored. Optionally, dim is the size of the last dimension.

Ti is the type of the last tensor index, and Tp is the type used for positions in the level. The types Ptr and Idx are the types of the arrays used to store positions and indices.

julia> Tensor(Dense(SparseList(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
 3×3-Tensor
 └─ Dense [:,1:3]
    ├─ [:, 1]: SparseList (0.0) [1:3]
    └─ [:, 3]: SparseList (0.0) [1:3]
       ├─ [1]: 20.0
       └─ [3]: 40.0
source
Finch.SparsePointLevelType
SparsePointLevel{[Ti=Int], [Ptr, Idx]}(lvl, [dim])

A subfiber of a SparsePoint level does not need to represent slices A[:, ..., :, i] which are entirely fill_value. Instead, only potentially non-fill slices are stored as subfibers in lvl. A main difference compared to SparseList level is that SparsePoint level only stores a 'single' non-fill slice. It emits an error if the program tries to write multiple (>=2) coordinates into SparsePoint.

Ti is the type of the last tensor index. The types Ptr and Idx are the types of the arrays used to store positions and indices.

julia> Tensor(Dense(SparsePoint(Element(0.0))), [10 0 0; 0 20 0; 0 0 30])
 3×3-Tensor
 └─ Dense [:,1:3]
    ├─ [:, 1]: SparsePoint (0.0) [1:3]
       ├─ [1]: 0.0
       ├─ [2]: 30.0
       └─ [3]: 30.0
source
Finch.SparseRunListLevelType
SparseRunListLevel{[Ti=Int], [Ptr, Left, Right]}(lvl, [dim]; [merge = true])

The SparseRunListLevel represents runs of equivalent slices A[:, ..., :, i] which are not entirely fill_value. A sorted list is used to record the left and right endpoints of each run. Optionally, dim is the size of the last dimension.

Ti is the type of the last tensor index, and Tp is the type used for positions in the level. The types Ptr, Left, and Right are the types of the arrays used to store positions and endpoints.

The merge keyword argument is used to specify whether the level should merge duplicate consecutive runs.

julia> Tensor(Dense(SparseRunListLevel(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
 3×3-Tensor
 └─ Dense [:,1:3]
    ├─ [:, 1]: SparseRunList (0.0) [1:3]
    ├─ [:, 2]: SparseRunList (0.0) [1:3]
    └─ [:, 3]: SparseRunList (0.0) [1:3]
       ├─ [1:1]: 20.0
       └─ [3:3]: 40.0
source
Finch.StaticHashType
StaticHash

A hash function which is static, i.e. the hashes are the same when objects are hashed in the same order. The hash is used to memoize the results of simplification and proof rules.

source
Finch.SubFiberType
SubFiber(lvl, pos)

SubFiber represents a tensor at position pos within lvl.

source
Finch.TensorMethod
Tensor(lvl, arr)

Construct a Tensor and initialize it to the contents of arr. To explicitly copy into a tensor, use copyto!.

source
Finch.TensorMethod
Tensor(lvl, [undef], dims...)

Construct a Tensor of size dims, and initialize to undef, potentially allocating memory. Here undef is the UndefInitializer singleton type. dims... may be a variable number of dimensions or a tuple of dimensions, but it must correspond to the number of dimensions in lvl.

source
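For instance, a minimal sketch of the undef form (the sizes here are made up, and the stored values are unspecified until the tensor is written to):

A = Tensor(Dense(SparseList(Element(0.0))), undef, 4, 3)  # allocate a 4×3 tensor without initializing it
size(A)  # (4, 3)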
Finch.TensorMethod
Tensor(lvl)

Construct a Tensor using the tensor level storage lvl. No initialization of storage is performed; it is assumed that position 1 of lvl corresponds to a valid tensor, and lvl will be wrapped as-is. Call a different constructor to initialize the storage.

source
Finch.TensorMethod
Tensor(arr, [init = zero(eltype(arr))])

Copy an array-like object arr into a corresponding, similar Tensor data structure. Uses init as an initial value. May reuse memory when possible. To explicitly copy into a tensor, use copyto!.

Examples

julia> println(summary(Tensor(sparse([1 0; 0 1]))))
 2×2 Tensor(Dense(SparseList(Element(0))))
 
 julia> println(summary(Tensor(ones(3, 2, 4))))
 3×2×4 Tensor(Dense(Dense(Dense(Element(0.0)))))
source
Finch.TensorType
Tensor{Lvl} <: AbstractFiber{Lvl}

The multidimensional array type used by Finch. Tensor is a thin wrapper around the hierarchical level storage of type Lvl.

source
Base.resize!Method
resize!(fbr, dims...)

Set the shape of fbr equal to dims. May reuse memory and render the original tensor unusable when modified.

source
Finch.aggregate_repMethod
aggregate_rep(op, init, tns, dims)

Return a trait object representing the result of reducing a tensor represented by tns on dims by op starting at init.

source
Finch.aquire_lock!Method
aquire_lock!(dev::AbstractDevice, val)

Lock the lock, val, on the device dev, waiting until it can acquire lock.

source
Finch.assemble_level!Function
assemble_level!(ctx, lvl, pos, new_pos)

Assemble positions pos+1:new_pos in lvl, assuming positions 1:pos were previously assembled.

source
Finch.bspreadFunction

bspread(::AbstractString)
bspread(::HDF5.File)
bspread(::NPYPath)

Read the Binsparse file into a Finch tensor.

Supported file extensions are:

  • .bsp.h5: HDF5 file format (HDF5 must be loaded)
  • .bspnpy: NumPy and JSON directory format (NPZ must be loaded)
Warning

The Binsparse spec is under development. Additionally, this function may not be fully conformant. Please file bug reports if you see anything amiss.

source
Finch.bspwriteFunction
bspwrite(::AbstractString, tns)
 bspwrite(::HDF5.File, tns)
 bspwrite(::NPYPath, tns)

Write the Finch tensor to a file using Binsparse file format.

Supported file extensions are:

  • .bsp.h5: HDF5 file format (HDF5 must be loaded)
  • .bspnpy: NumPy and JSON directory format (NPZ must be loaded)
Warning

The Binsparse spec is under development. Additionally, this function may not be fully conformant. Please file bug reports if you see anything amiss.

source
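A hedged round-trip sketch, assuming the HDF5 package is loaded so the .bsp.h5 extension is available and the path is writable:

using Finch, HDF5

A = Tensor(Dense(SparseList(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
bspwrite("A.bsp.h5", A)  # write A in the Binsparse HDF5 format
B = bspread("A.bsp.h5")  # read it back into a Finch tensor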
Finch.cache_deferred!Method
cache_deferred(ctx, root::LogicNode, seen)

Replace deferred expressions with simpler expressions, and cache their evaluation in the preamble.

source
Finch.chooseMethod
choose(z)(a, b)

choose(z) is a function which returns whichever of a or b is not isequal to z. If neither are z, then return a. Useful for getting the first nonfill value in a sparse array.

julia> a = Tensor(SparseList(Element(0.0)), [0, 1.1, 0, 4.4, 0])
 5-Tensor
 └─ SparseList (0.0) [1:5]
    ├─ [2]: 1.1
 julia> x = Scalar(0.0); @finch for i=_; x[] <<choose(1.1)>>= a[i] end;
 
 julia> x[]
 0.0
source
Finch.chunkmaskFunction
chunkmask(b)

A mask for a chunked tensor, chunkmask[i, j] = b * (j - 1) < i <= b * j. Note that this specializes each column for the cases where i < b * (j - 1), b * (j - 1) < i <= b * j, and b * j < i.
source
Finch.cld_nothrowMethod
cld_nothrow(x, y)

Returns cld(x, y) normally, returns zero and issues a warning if y is zero.

source
Finch.collapse_repMethod
collapse_rep(tns)

Normalize a trait object to collapse subfiber information into the parent tensor.

source
Finch.collapsedMethod
collapsed(algebra, f, idx, ext, node)

Return collapsed expression with respect to f.

source
Finch.combinedimMethod
combinedim(ctx, a, b)

Combine the two dimensions a and b. To avoid ambiguity, only define one of

combinedim(ctx, ::A, ::B)
combinedim(ctx, ::B, ::A)
source
Finch.computeMethod
compute(args..., ctx=default_scheduler()) -> Any

Compute the value of a lazy tensor. The result is the argument itself, or a tuple of arguments if multiple arguments are passed.

source
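For example, a small sketch pairing lazy with compute (the inputs are arbitrary random tensors):

A = fsprand(10, 10, 0.5)
B = fsprand(10, 10, 0.5)
C = lazy(A) + lazy(B)  # builds a lazy expression; nothing is executed yet
D = compute(C)         # materializes the result using the current scheduler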
Finch.concordizeMethod
concordize(root)

Accepts a program of the following form:

        TABLE := table(IMMEDIATE, FIELD...)
        ACCESS := reorder(relabel(ALIAS, FIELD...), FIELD...)
       COMPUTE := ACCESS |
                  mapjoin(IMMEDIATE, COMPUTE...) |
 COMPUTE_QUERY := query(ALIAS, COMPUTE)
   INPUT_QUERY := query(ALIAS, TABLE)
          STEP := COMPUTE_QUERY | INPUT_QUERY | produces((ALIAS | ACCESS)...)
          ROOT := PLAN(STEP...)

Inserts permutation statements of the form query(ALIAS, reorder(ALIAS, FIELD...)) and updates relabels so that they match their containing reorders. Modified ACCESS statements match the form:

ACCESS := reorder(relabel(ALIAS, idxs_1::FIELD...), idxs_2::FIELD...) where issubsequence(idxs_1, idxs_2)
source
Finch.concordizeMethod
concordize(ctx, root)

A raw index is an index expression consisting of a single index node (i.e. A[i] as opposed to A[i + 1]). A Finch program is concordant when all indices are raw and column major with respect to the program loop ordering. The concordize transformation ensures that tensor indices are concordant by inserting loops and lifting index expressions or transposed indices into the loop bounds.

For example,

@finch for i = :
     b[] += A[f(i)]
 end

becomes

@finch for i = :
     t = f(i)
     b[] += A[i, j]
 end

becomes

@finch for i = :, j = :, s = i:i
     b[] += A[s, j]
 end
source
Finch.containMethod
contain(f, ctx)

Call f on a subcontext of ctx and return the result. Variable bindings, preambles, and epilogues defined in the subcontext will not escape the call to contain.

source
Finch.countstoredMethod
countstored(arr)

Return the number of stored elements in arr. If there are explicitly stored fill elements, they are counted too.

See also: [SparseArrays.nnz](https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.nnz) and [Base.summarysize](https://docs.julialang.org/en/v1/base/base/#Base.summarysize)

source
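For instance, a sketch where the SparseList level stores only the four nonzero entries:

A = Tensor(Dense(SparseList(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
countstored(A)  # 4 stored values for this format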
Finch.data_repMethod
data_rep(tns)

Return a trait object representing everything that can be learned about the data based on the storage format (type) of the tensor

source
Finch.dataflowMethod
dataflow(ex)

Run dead code elimination and constant propagation. ex is the target Julia expression.

source
Finch.declare!Method
declare!(ctx, tns, init)

Declare the read-only virtual tensor tns in the context ctx with a starting value of init and return it. Afterwards the tensor is update-only.

source
Finch.declare_level!Function
declare_level!(ctx, lvl, pos, init)

Initialize and thaw all fibers within lvl, assuming positions 1:pos were previously assembled and frozen. The resulting level has no assembled positions.

source
Finch.defer_tablesMethod
defer_tables(root::LogicNode)

Replace immediate tensors with deferred expressions assuming the original program structure is given as input to the program.

source
Finch.dimensionalize!Method
dimensionalize!(prgm, ctx)

A program traversal which coordinates dimensions based on shared indices. In particular, loops and declaration statements have dimensions. Accessing a tensor with a raw index hints that the loop should have a dimension corresponding to the tensor axis. Accessing a tensor on the left hand side with a raw index also hints that the tensor declaration should have a dimension corresponding to the loop axis. All hints inside a loop body are used to evaluate loop dimensions, and all hints after a declaration until the first freeze are used to evaluate declaration dimensions. One may refer to the automatically determined dimension using a variable named _ or :. Index sharing is transitive, so A[i] = B[i] and B[j] = C[j] will induce a gathering of the dimensions of A, B, and C into one.

The dimensions are semantically evaluated just before the corresponding loop or declaration statement. The program is assumed to be scoped, so that all loops have unique index names.

See also: virtual_size, virtual_resize!, combinedim

source
Finch.dropfillsMethod
dropfills(src)

Drop the fill values from src and return a new tensor with the same shape and format.

source
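A minimal sketch of the call (whether any fill values are explicitly stored depends on how A was built):

A = Tensor(Dense(SparseList(Element(0.0))), [0.0 1.0; 0.0 2.0])
B = dropfills(A)  # same shape and format as A, with explicitly stored fill values dropped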
Finch.enforce_lifecyclesMethod
enforce_lifecycles(prgm)

A transformation which adds freeze and thaw statements automatically to tensor roots, depending on whether they appear on the left or right hand side.

source
Finch.enforce_scopesMethod
enforce_scopes(prgm)

A transformation which gives all loops unique index names and enforces that tensor roots are declared in a containing scope and enforces that variables are declared once within their scope. Note that loop and sieve both introduce new scopes.

source
Finch.ensure_concurrentMethod

ensure_concurrent(root, ctx)

Ensures that all nonlocal assignments to the tensor root are consistently accessed with the same indices and associative operator. Also ensures that the tensor is either atomic, or accessed by i and concurrent and injective on i.

source
Finch.evaluate_partialMethod
evaluate_partial(ctx, root)

This pass evaluates tags, global variable definitions, and foldable functions into the context bindings.

source
Finch.exit_on_yieldbindMethod
exit_on_yieldbind(prgm)

This pass rewrites the program so that yieldbind expressions are only present at the end of a block. It also adds a yieldbind if not present already.

source
Finch.expanddimsMethod
expanddims(arr::AbstractTensor, dims)

Expand the dimensions of an array by inserting a new singleton axis or axes that will appear at the dims position in the expanded array shape.

source
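For instance, a sketch that inserts a leading singleton axis:

A = Tensor(Dense(Element(0.0)), [1.0, 2.0, 3.0])
B = expanddims(A, 1)  # size(B) == (1, 3)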
Finch.expanddims_repMethod
expanddims_rep(tns, dims)

Expand the representation of tns by inserting singleton dimensions dims.

source
Finch.ffindnzMethod
ffindnz(arr)

Return the nonzero elements of arr, as Finch understands arr. Returns (I..., V), where I are the coordinate vectors, one for each mode of arr, and V is a vector of corresponding nonzero values, which can be passed to fsparse.

See also: [findnz](https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.findnz)

source
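A sketch of a round trip through fsparse using the coordinates that ffindnz returns:

A = Tensor(Dense(SparseList(Element(0.0))), [10 0 20; 30 0 0; 0 0 40])
I1, I2, V = ffindnz(A)             # one coordinate vector per mode, then the values
B = fsparse((I1, I2), V, size(A))  # rebuild a COO tensor with the same entries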
Finch.fiber_ctrFunction
fiber_ctr(tns, protos...)

Return an expression that would construct a tensor suitable to hold data with a representation described by tns. Assumes representation is collapsed.

source
Finch.fill_valueFunction
fill_value(arr)

Return the initializer for arr. For SparseArrays, this is 0. Often, the "fill" value becomes the "background" value of a tensor.

source
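For example:

A = Tensor(SparseList(Element(0.0)), [0, 1.1, 0, 4.4, 0])
fill_value(A)  # 0.0, the background value of this tensor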
Finch.filteropMethod
filterop(z)(cond, arg)

filterop(z) is a function which returns ifelse(cond, arg, z). This operation is handy for filtering out values based on a mask or a predicate. map(filterop(0), cond, arg) is analogous to filter(x -> cond ? x: z, arg).

julia> a = Tensor(SparseList(Element(0.0)), [0, 1.1, 0, 4.4, 0])
 5-Tensor
 └─ SparseList (0.0) [1:5]
    ├─ [2]: 1.1
 julia> x
 5-Tensor
 └─ SparseList (0.0) [1:5]
    └─ [4]: 4.4
source
Finch.finch_kernelMethod
finch_kernel(fname, args, prgm; options...)

Return a function definition which can execute a Finch program of type prgm. Here, fname is the name of the function and args is an iterable of argument name => type pairs.

See also: @finch

source
Finch.fld1_nothrowMethod
fld1_nothrow(x, y)

Returns fld1(x, y) normally, returns one and issues a warning if y is zero.

source
Finch.fld_nothrowMethod
fld_nothrow(x, y)

Returns fld(x, y) normally, returns zero and issues a warning if y is zero.

source
Finch.freadMethod
fread(filename::AbstractString)

Read the Finch tensor from a file using a file format determined by the file extension. The following file extensions are supported:

source
Finch.freeze!Function
freeze!(ctx, tns)

Freeze the update-only virtual tensor tns in the context ctx and return it. This may involve trimming any excess overallocated memory. Afterwards, the tensor is read-only.

source
Finch.freeze_level!Function
freeze_level!(ctx, lvl, pos, init)

Given the last reference position, pos, freeze all fibers within lvl assuming that we have potentially updated 1:pos.

source
Finch.freshenMethod
freshen(ctx, tags...)

Return a fresh variable in the current context named after Symbol(tags...)

source
Finch.fsparse!Method
fsparse!(I..., V,[ M::Tuple])

Like fsparse, but the coordinates must be sorted and unique, and memory is reused.

source
Finch.fsparseMethod
fsparse(I::Tuple, V,[ M::Tuple, combine]; fill_value=zero(eltype(V)))

Create a sparse COO tensor S such that size(S) == M and S[(i[q] for i = I)...] = V[q]. The combine function is used to combine duplicates. If M is not specified, it is set to map(maximum, I). If the combine function is not supplied, combine defaults to + unless the elements of V are Booleans in which case combine defaults to |. All elements of I must satisfy 1 <= I[n][q] <= M[n]. Numerical zeros are retained as structural nonzeros; to drop numerical zeros, use dropzeros!.

See also: sparse

Examples

julia> I = ( [1, 2, 3], [1, 2, 3], [1, 2, 3]);

julia> V = [1.0; 2.0; 3.0];

julia> fsparse(I, V)
SparseCOO (0.0) [1:3×1:3×1:3]
│ │ │
└─└─└─[1, 1, 1] [2, 2, 2] [3, 3, 3]
      1.0 2.0 3.0

source
Finch.fsprandMethod
fsprand([rng],[type], M..., p, [rfn])

Create a random sparse tensor of size m in COO format. There are two cases: - If p is floating point, the probability of any element being nonzero is independently given by p (and hence the expected density of nonzeros is also p). - If p is an integer, exactly p nonzeros are distributed uniformly at random throughout the tensor (and hence the density of nonzeros is exactly p / prod(M)). Nonzero values are sampled from the distribution specified by rfn and have the type type. The uniform distribution is used in case rfn is not specified. The optional rng argument specifies a random number generator.

See also: [sprand](https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.sprand)

Examples

julia> fsprand(Bool, 3, 3, 0.5)
 SparseCOO (false) [1:3,1:3]
 ├─├─[1, 1]: true
 ├─├─[3, 1]: true
 SparseCOO (0.0) [1:2,1:2,1:2]
 ├─├─├─[2, 2, 1]: 0.6478553157718558
 ├─├─├─[1, 1, 2]: 0.996665291437684
 ├─├─├─[2, 1, 2]: 0.7491940599574348
source
Finch.fspzerosMethod
fspzeros([type], M...)

Create a zero tensor of size M, with elements of type type. The tensor is in COO format.

See also: [spzeros](https://docs.julialang.org/en/v1/stdlib/SparseArrays/#SparseArrays.spzeros)

Examples

julia> fspzeros(Bool, 3, 3)
 3×3-Tensor
 └─ SparseCOO{2} (false) [:,1:3]
 
 julia> fspzeros(Float64, 2, 2, 2)
 2×2×2-Tensor
 └─ SparseCOO{3} (0.0) [:,:,1:2]
source
Finch.ftnsreadMethod
ftnsread(filename)

Read the contents of the FROSTT .tns file 'filename' into a Finch COO Tensor.

TensorMarket must be loaded for this function to be available.

Danger

This file format does not record the size or eltype of the tensor, and is provided for archival purposes only.

See also: tnsread

source
Finch.ftnswriteMethod
ftnswrite(filename, tns)

Write a sparse Finch tensor to a FROSTT .tns file.

TensorMarket must be loaded for this function to be available.

Danger

This file format does not record the size or eltype of the tensor, and is provided for archival purposes only.

See also: tnswrite

source
Finch.fttreadMethod
fttread(filename, infoonly=false, retcoord=false)

Read the TensorMarket file into a Finch tensor. The tensor will be dense or COO depending on the format of the file.

TensorMarket must be loaded for this function to be available.

See also: ttread

source
Finch.fusedMethod
fused(f, args...; kwargs...)

This function decorator modifies f to fuse the contained array operations and optimize the resulting program. The function must return a single array or tuple of arrays. kwargs are passed to compute

source
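A hedged sketch of the decorator form (the body must return an array or a tuple of arrays; broadcasting is assumed to be supported on the staged arguments):

A = fsprand(100, 100, 0.1)
B = fsprand(100, 100, 0.1)
C = fused(A, B) do A, B
    (A .+ B) .* 2  # staged lazily and executed as one fused program
end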
Finch.fwriteMethod
fwrite(filename::AbstractString, tns::Finch.Tensor)

Write the Finch tensor to a file using a file format determined by the file extension. The following file extensions are supported:

source
Finch.get_binding!Method
get_binding!(ctx, var, val)

Get the binding of a variable in the context, or set it to a default value.

source
Finch.get_bindingMethod
get_binding(ctx, var, val)

Get the binding of a variable in the context, or return a default value.

source
Finch.get_lockMethod
get_lock(dev::AbstractDevice, arr, idx, ty)

Given a device, an array of elements of type ty, and an index to the array, idx, gets a lock of type ty associated to arr[idx] on dev.

source
Finch.get_prove_rulesMethod
get_prove_rules(alg, shash)

Return the bound rule set for Finch. One can dispatch on the alg trait to specialize the rule set for different algebras. shash is an object that can be called to return a static hash value. This rule set is used to analyze loop bounds in Finch.

source
Finch.get_resultMethod
get_result(ctx)

Return a variable which evaluates to the result of the program which should be returned to the user.

source
Finch.get_schedulerMethod
get_scheduler()

Get the current Finch scheduler used by compute to execute lazy tensor programs.

source
Finch.get_simplify_rulesMethod
get_simplify_rules(alg, shash)

Return the program rule set for Finch. One can dispatch on the alg trait to specialize the rule set for different algebras. Defaults to a collection of straightforward rules that use the algebra to check properties of functions like associativity, commutativity, etc. shash is an object that can be called to return a static hash value. This rule set simplifies, normalizes, and propagates constants, and is the basis for how Finch understands sparsity.

source
Finch.get_static_hashMethod
get_static_hash(ctx)

Return an object which can be called as a hash function. The hashes are the same when objects are hashed in the same order.

source
Finch.get_structureFunction
get_structure(root::LogicNode)

Quickly produce a normalized structure for a logic program. Note: the result will not be a runnable logic program, but can be hashed and compared for equality. Two programs will have equal structure if their tensors have the same type and their program structure is equivalent up to renaming.

source
Finch.get_styleMethod
get_style(ctx, root)

Return the style to use for lowering root in ctx. This method is used to determine which pass should be used to lower a given node. The default implementation returns DefaultStyle(). Overload the three-argument form of this method, get_style(ctx, node, root), and specialize on node.

source
Finch.get_wrapper_rulesMethod
get_wrapper_rules(ctx, depth, alg)

Return the wrapperizing rule set for Finch, which converts expressions like A[i + 1] to array combinator expressions like OffsetArray(A, (1,)). The rules have access to the algebra alg and the depth lookup depth. One can dispatch on the alg trait to specialize the rule set for different algebras. These rules run after simplification so one can expect constants to be folded.

source
Finch.getindex_repMethod
getindex_rep(tns, idxs...)

Return a trait object representing the result of calling getindex(tns, idxs...) on the tensor represented by tns. Assumes traits are in collapsed form.

source
Finch.getunboundMethod
getunbound(stmt)

Return an iterator over the indices in a Finch program that have yet to be bound.

julia> getunbound(@finch_program for i=_; :a[i, j] += 2 end)
 [j]
 julia> getunbound(@finch_program i + j * 2 * i)
 [i, j]
source
Finch.instantiate!Method
instantiate!(ctx, prgm)

A transformation to call instantiate on tensors before executing an expression.

source
Finch.instantiateMethod
instantiate(ctx, tns, mode)

Process the tensor tns in the context ctx, just after it has been unfurled, declared, or thawed. The earliest opportunity to process tns.

source
Finch.is_atomicFunction
is_atomic(ctx, tns)
 
 Returns a tuple (atomicities, overall) where atomicities is a vector, indicating which indices have an atomic that guards them,
 and overall is a boolean that indicates whether the last level had an atomic guarding it.
source
Finch.is_concurrentFunction
is_concurrent(ctx, tns)
 
 Returns a vector of booleans, one for each dimension of the tensor, indicating
 whether the index can be written to without any execution state. So if a matrix returns [true, false],
 then we can write to A[i, j] and A[i_2, j] without any shared execution state between the two, but
 we can't write to A[i, j] and A[i, j_2] without carrying over execution state.
source
Finch.is_injectiveFunction
is_injective(ctx, tns)

Returns a vector of booleans, one for each dimension of the tensor, indicating whether the access is injective in that dimension. A dimension is injective if each index in that dimension maps to a different memory space in the underlying array.

source
Finch.isassociativeMethod
isassociative(algebra, f)

Return true when f(a..., f(b...), c...) = f(a..., b..., c...) in algebra.

source
Finch.iscommutativeMethod
iscommutative(algebra, f)

Return true when for all permutations p, f(a...) = f(a[p]...) in algebra.

source
Finch.isdistributiveMethod
isdistributive(algebra, f, g)

Return true when f(a, g(b, c)) = g(f(a, b), f(a, c)) in algebra.

source
Finch.isidentityMethod
isidentity(algebra, f, x)

Return true when f(a..., x, b...) = f(a..., b...) in algebra.

source
Finch.isinverseMethod
isinverse(algebra, f, g)

Return true when f(a, g(a)) is the identity under f in algebra.

source
Finch.labelled_childrenMethod
labelled_children(node)

Return the children of node in a LabelledTree. You may label the children by returning a LabelledTree(key, value), which will be shown as key: value.

source
Finch.lazyMethod
lazy(arg)

Create a lazy tensor from an argument. All operations on lazy tensors are lazy, and will not be executed until compute is called on their result.

for example,

x = lazy(rand(10))
 y = lazy(rand(10))
 z = x + y
 z = z + 1
 z = compute(z)

will not actually compute z until compute(z) is called, so the execution of x + y is fused with the execution of z + 1.

source
Finch.level_axesFunction
level_axes(lvl)

The result of level_axes(lvl) defines the axes of all subfibers in the level lvl.

source
Finch.level_eltypeFunction
level_eltype(::Type{Lvl})

The result of level_eltype(Lvl) defines eltype for all subfibers in a level of type Lvl.

source
Finch.level_ndimsFunction
level_ndims(::Type{Lvl})

The result of level_ndims(Lvl) defines ndims for all subfibers in a level of type Lvl.

source
Finch.level_sizeFunction
level_size(lvl)

The result of level_size(lvl) defines the size of all subfibers in the level lvl.

source
Finch.lift_fieldsMethod

This pass is a placeholder that places reorder statements inside aggregate and mapjoin query nodes. It only works on the output of propagate_fields(push_fields(prgm)).

source
Finch.lift_subqueriesMethod
lift_subqueries

Creates a plan that lifts all subqueries to the top level of the program, with unique queries for each distinct subquery alias. This function processes the rhs of each subquery once, to carefully extract SSA form from any nested pointer structure. After calling lift_subqueries, it is safe to map over the program (recursive pointers to subquery structures will not incur exponential overhead).

source
Finch.map_repMethod
map_rep(f, args...)

Return a storage trait object representing the result of mapping f over storage traits args. Assumes representation is collapsed.

source
Finch.maxbyMethod
maxby(a, b)

Return the max of a or b, comparing them by a[1] and b[1], and breaking ties to the left. Useful for implementing argmax operations:

julia> a = [7.7, 3.3, 9.9, 3.3, 9.9]; x = Scalar(-Inf => 0);
 
 julia> @finch for i=_; x[] <<maxby>>= a[i] => i end;
 
 julia> x[]
 9.9 => 3
source
Finch.minbyMethod
minby(a, b)

Return the min of a or b, comparing them by a[1] and b[1], and breaking ties to the left. Useful for implementing argmin operations:

julia> a = [7.7, 3.3, 9.9, 3.3, 9.9]; x = Scalar(Inf => 0);
 
 julia> @finch for i=_; x[] <<minby>>= a[i] => i end;
 
 julia> x[]
-3.3 => 2
source
Finch.mod1_nothrowMethod
mod1_nothrow(x, y)

Returns mod1(x, y) normally; returns one and issues a warning if y is zero.

source
Finch.mod_nothrowMethod
mod_nothrow(x, y)

Returns mod(x, y) normally; returns zero and issues a warning if y is zero.

source
Finch.movetoFunction
moveto(arr, device)

If the array is not on the given device, it creates a new version of this array on that device and copies the data into it, according to the device trait.

source
Finch.offsetMethod
offset(tns, delta...)

Create an OffsetArray such that offset(tns, delta...)[i...] == tns[i .+ delta...]. The dimensions declared by an OffsetArray are shifted, so that size(offset(tns, delta...)) == size(tns) .+ delta.

source
Finch.parallelFunction
parallel(ext, device=CPU(nthreads()))

A dimension ext that is parallelized over device. The ext field is usually _, or dimensionless, but can be any standard dimension argument.

source
Finch.pattern!Method
pattern!(fbr)

Return the pattern of fbr. That is, return a tensor which is true wherever fbr is structurally unequal to its fill_value. May reuse memory and render the original tensor unusable when modified.

julia> A = Tensor(SparseList(Element(0.0), 10), [2.0, 0.0, 3.0, 0.0, 4.0, 0.0, 5.0, 0.0, 6.0, 0.0])
+3.3 => 2
source
Finch.mod1_nothrowMethod
mod1_nothrow(x, y)

Returns mod1(x, y) normally; returns one and issues a warning if y is zero.

source
Finch.mod_nothrowMethod
mod_nothrow(x, y)

Returns mod(x, y) normally; returns zero and issues a warning if y is zero.

source
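For illustration, a minimal sketch of the nothrow behavior (the exact warning text is not reproduced here):

    using Finch

    Finch.mod_nothrow(7, 3)   # same as mod(7, 3), i.e. 1
    Finch.mod_nothrow(7, 0)   # returns zero and issues a warning rather than throwing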
Finch.movetoFunction
moveto(arr, device)

If the array is not on the given device, it creates a new version of this array on that device and copies the data into it, according to the device trait.

source
Finch.offsetMethod
offset(tns, delta...)

Create an OffsetArray such that offset(tns, delta...)[i...] == tns[i .+ delta...]. The dimensions declared by an OffsetArray are shifted, so that size(offset(tns, delta...)) == size(tns) .+ delta.

source
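As a small sketch of the identity above (applied here to a dense tensor; the shifted dimensions determine which indices are in bounds):

    using Finch

    A = Tensor(Dense(Element(0.0)), [10.0, 20.0, 30.0])
    B = Finch.offset(A, 1)    # B[i] corresponds to A[i + 1], and size(B) == size(A) .+ 1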
Finch.parallelFunction
parallel(ext, device=CPU(nthreads()))

A dimension ext that is parallelized over device. The ext field is usually _, or dimensionless, but can be any standard dimension argument.

source
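For example, a loop dimension can be marked for threading (a hedged sketch assuming the for i = parallel(_) spelling; each iteration writes a distinct y[i], so no atomics are needed):

    using Finch

    x = Tensor(Dense(Element(0.0)), rand(100))
    y = Tensor(Dense(Element(0.0)), zeros(100))
    @finch begin
        y .= 0
        for i = parallel(_)
            y[i] = 2 * x[i]
        end
    end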
Finch.pattern!Method
pattern!(fbr)

Return the pattern of fbr. That is, return a tensor which is true wherever fbr is structurally unequal to its fill_value. May reuse memory and render the original tensor unusable when modified.

julia> A = Tensor(SparseList(Element(0.0), 10), [2.0, 0.0, 3.0, 0.0, 4.0, 0.0, 5.0, 0.0, 6.0, 0.0])
 10-Tensor
 └─ SparseList (0.0) [1:10]
    ├─ [1]: 2.0
@@ -281,12 +281,12 @@
    ├─ [3]: true
    ├─ ⋮
    ├─ [7]: true
-   └─ [9]: true
source
Finch.permissiveMethod
permissive(tns, dims...)

Create a PermissiveArray where permissive(tns, dims...)[i...] is missing if i[n] is not in the bounds of tns when dims[n] is true. This wrapper allows all permissive dimensions to be exempt from dimension checks, and is useful when we need to access an array out of bounds, or for padding. More formally,

    permissive(tns, dims...)[i...] =
+   └─ [9]: true
source
Finch.permissiveMethod
permissive(tns, dims...)

Create a PermissiveArray where permissive(tns, dims...)[i...] is missing if i[n] is not in the bounds of tns when dims[n] is true. This wrapper allows all permissive dimensions to be exempt from dimension checks, and is useful when we need to access an array out of bounds, or for padding. More formally,

    permissive(tns, dims...)[i...] =
         if any(n -> dims[n] && !(i[n] in axes(tns)[n]))
             missing
         else
             tns[i...]
-        end
source
Finch.permutedims_repMethod
permutedims_rep(tns, perm)

Return a trait object representing the result of permuting a tensor represented by tns to the permutation perm.

source
Finch.postypeFunction
postype(lvl)

Return a position type with the same flavor as those used to store the positions of the fibers contained in lvl. The name position descends from the pos or position or pointer arrays found in many definitions of CSR or CSC. In Finch, positions should be data used to access either a subfiber or some other similar auxiliary data. Thus, we often end up iterating over positions.

source
Finch.prettyMethod
pretty(ex)

Make ex prettier. Shorthand for ex |> unblock |> striplines |> regensym.

source
Finch.productsMethod
products(tns, dim)

Create a ProductArray such that

    products(tns, dim)[i...] == tns[i[1:dim-1]..., i[dim] * i[dim + 1], i[dim + 2:end]...]

This is like toeplitz but with times instead of plus.

source
Finch.propagate_transpose_queriesFunction

propagate_transpose_queries(root)

Removes non-materializing permutation queries by propagating them to the expressions they contain. Pushes fields and also removes copies. Removes queries of the form:

    query(ALIAS, reorder(relabel(ALIAS, FIELD...), FIELD...))

Does not remove queries which define production aliases.

Accepts programs of the form:

       TABLE  := table(IMMEDIATE, FIELD...)
+        end
source
Finch.permutedims_repMethod
permutedims_rep(tns, perm)

Return a trait object representing the result of permuting a tensor represented by tns to the permutation perm.

source
Finch.postypeFunction
postype(lvl)

Return a position type with the same flavor as those used to store the positions of the fibers contained in lvl. The name position descends from the pos or position or pointer arrays found in many definitions of CSR or CSC. In Finch, positions should be data used to access either a subfiber or some other similar auxiliary data. Thus, we often end up iterating over positions.

source
Finch.prettyMethod
pretty(ex)

Make ex prettier. Shorthand for ex |> unblock |> striplines |> regensym.

source
Finch.productsMethod
products(tns, dim)

Create a ProductArray such that

    products(tns, dim)[i...] == tns[i[1:dim-1]..., i[dim] * i[dim + 1], i[dim + 2:end]...]

This is like toeplitz but with times instead of plus.

source
Finch.propagate_transpose_queriesFunction

propagate_transpose_queries(root)

Removes non-materializing permutation queries by propagating them to the expressions they contain. Pushes fields and also removes copies. Removes queries of the form:

    query(ALIAS, reorder(relabel(ALIAS, FIELD...), FIELD...))

Does not remove queries which define production aliases.

Accepts programs of the form:

       TABLE  := table(IMMEDIATE, FIELD...)
        ACCESS := reorder(relabel(ALIAS, FIELD...), FIELD...)
     POINTWISE := ACCESS | mapjoin(IMMEDIATE, POINTWISE...) | reorder(IMMEDIATE, FIELD...) | IMMEDIATE
     MAPREDUCE := POINTWISE | aggregate(IMMEDIATE, IMMEDIATE, POINTWISE, FIELD...)
@@ -294,14 +294,14 @@
 COMPUTE_QUERY := query(ALIAS, reformat(IMMEDIATE, MAPREDUCE)) | query(ALIAS, MAPREDUCE))
          PLAN := plan(STEP...)
          STEP := COMPUTE_QUERY | INPUT_QUERY | PLAN | produces((ALIAS | ACCESS)...)
-         ROOT := STEP
source
Finch.protocolizeMethod
protocolize(tns, protos...)

Create a ProtocolizedArray that accesses dimension n with protocol protos[n], if protos[n] is not nothing. See the documentation for Iteration Protocols for more information. For example, to gallop along the inner dimension of a matrix A, we write A[gallop(i), j], which becomes protocolize(A, gallop, nothing)[i, j].

source
Finch.proveMethod
prove(ctx, root; verbose = false)

Use the rules in ctx to attempt to prove that the program root is true. Return false if the program cannot be shown to be true.

source
Finch.push_epilogue!Method
push_epilogue!(ctx, thunk)

Push the thunk onto the epilogue in the currently executing context. The epilogue will be evaluated after the code returned by the given function in the context.

source
Finch.push_fieldsMethod

push_fields(node)

This function modifies all EXPR statements in the program, as defined in the following grammar:

    LEAF := relabel(ALIAS, FIELD...) |
+         ROOT := STEP
source
Finch.protocolizeMethod
protocolize(tns, protos...)

Create a ProtocolizedArray that accesses dimension n with protocol protos[n], if protos[n] is not nothing. See the documentation for Iteration Protocols for more information. For example, to gallop along the inner dimension of a matrix A, we write A[gallop(i), j], which becomes protocolize(A, gallop, nothing)[i, j].

source
Finch.proveMethod
prove(ctx, root; verbose = false)

Use the rules in ctx to attempt to prove that the program root is true. Return false if the program cannot be shown to be true.

source
Finch.push_epilogue!Method
push_epilogue!(ctx, thunk)

Push the thunk onto the epilogue in the currently executing context. The epilogue will be evaluated after the code returned by the given function in the context.

source
Finch.push_fieldsMethod

push_fields(node)

This function modifies all EXPR statements in the program, as defined in the following grammar:

    LEAF := relabel(ALIAS, FIELD...) |
             table(IMMEDIATE, FIELD...) |
             IMMEDIATE
     EXPR := LEAF |
             reorder(EXPR, FIELD...) |
             relabel(EXPR, FIELD...) |
             mapjoin(IMMEDIATE, EXPR...) |
-            aggregate(IMMEDIATE, IMMEDIATE, EXPR, FIELD...)

Pushes all reorder and relabel statements down to LEAF nodes of each EXPR. Output LEAF nodes will match the form reorder(relabel(LEAF, FIELD...), FIELD...), omitting reorder or relabel if not present as an ancestor of the LEAF in the original EXPR. Tables and immediates will absorb relabels.

source
Finch.push_preamble!Method
push_preamble!(ctx, thunk)

Push the thunk onto the preamble in the currently executing context. The preamble will be evaluated before the code returned by the given function in the context.

source
Finch.reassemble_level!Function
reassemble_level!(lvl, ctx, pos_start, pos_end)

Set the previously assembled positions from pos_start to pos_end to level_fill_value(lvl). Not available on all level types, as this presumes updating.

source
Finch.refreshMethod
Finch.refresh()

Finch caches the code for kernels as soon as they are run. If you modify the Finch compiler after running a kernel, you'll need to invalidate the Finch caches to reflect these changes by calling Finch.refresh(). This function should only be called at global scope, and never during precompilation.

source
Finch.regensymMethod
regensym(ex)

Give gensyms prettier names by renumbering them. ex is the target Julia expression.

source
Finch.rem_nothrowMethod
rem_nothrow(x, y)

Returns rem(x, y) normally; returns zero and issues a warning if y is zero.

source
Finch.rep_constructFunction
rep_construct(tns, protos...)

Construct a tensor suitable to hold data with a representation described by tns. Assumes representation is collapsed.

source
Finch.return_typeMethod
return_type(algebra, f, arg_types...)

Give the return type of f when applied to arguments of types arg_types... in algebra. Used to determine output types of functions in the high-level interface. This function falls back to Base.promote_op.

source
Finch.scaleMethod
scale(tns, delta...)

Create a ScaleArray such that scale(tns, delta...)[i...] == tns[i .* delta...]. The dimensions declared by a ScaleArray are scaled, so that size(scale(tns, delta...)) == size(tns) .* delta. This is only supported on tensors with real-valued dimensions.

source
Finch.scansearchMethod
scansearch(v, x, lo, hi)

Return the index of the first value of v greater than or equal to x, within the range lo:hi. Return hi+1 if all values are less than x. This implementation uses an exponential search strategy which involves two steps: 1) searching for binary search bounds via exponential steps rightward; 2) binary searching within those bounds.

source
Finch.set_fill_value!Method
set_fill_value!(fbr, init)

Return a tensor which is equal to fbr, but with the fill (implicit) value set to init. May reuse memory and render the original tensor unusable when modified.

julia> A = Tensor(SparseList(Element(0.0), 10), [2.0, 0.0, 3.0, 0.0, 4.0, 0.0, 5.0, 0.0, 6.0, 0.0])
+            aggregate(IMMEDIATE, IMMEDIATE, EXPR, FIELD...)

Pushes all reorder and relabel statements down to LEAF nodes of each EXPR. Output LEAF nodes will match the form reorder(relabel(LEAF, FIELD...), FIELD...), omitting reorder or relabel if not present as an ancestor of the LEAF in the original EXPR. Tables and immediates will absorb relabels.

source
Finch.push_preamble!Method
push_preamble!(ctx, thunk)

Push the thunk onto the preamble in the currently executing context. The preamble will be evaluated before the code returned by the given function in the context.

source
Finch.reassemble_level!Function
reassemble_level!(lvl, ctx, pos_start, pos_end)

Set the previously assembled positions from pos_start to pos_end to level_fill_value(lvl). Not available on all level types, as this presumes updating.

source
Finch.refreshMethod
Finch.refresh()

Finch caches the code for kernels as soon as they are run. If you modify the Finch compiler after running a kernel, you'll need to invalidate the Finch caches to reflect these changes by calling Finch.refresh(). This function should only be called at global scope, and never during precompilation.

source
Finch.regensymMethod
regensym(ex)

Give gensyms prettier names by renumbering them. ex is the target Julia expression.

source
Finch.rem_nothrowMethod
rem_nothrow(x, y)

Returns rem(x, y) normally; returns zero and issues a warning if y is zero.

source
Finch.rep_constructFunction
rep_construct(tns, protos...)

Construct a tensor suitable to hold data with a representation described by tns. Assumes representation is collapsed.

source
Finch.return_typeMethod
return_type(algebra, f, arg_types...)

Give the return type of f when applied to arguments of types arg_types... in algebra. Used to determine output types of functions in the high-level interface. This function falls back to Base.promote_op.

source
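A plausible sketch of the fallback behavior (assuming the default algebra object, DefaultAlgebra(), as referenced in the @finch documentation below):

    using Finch

    Finch.return_type(Finch.DefaultAlgebra(), +, Float64, Int)   # expected Float64, matching Base.promote_op(+, Float64, Int)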
Finch.scaleMethod
scale(tns, delta...)

Create a ScaleArray such that scale(tns, delta...)[i...] == tns[i .* delta...]. The dimensions declared by a ScaleArray are scaled, so that size(scale(tns, delta...)) == size(tns) .* delta. This is only supported on tensors with real-valued dimensions.

source
Finch.scansearchMethod
scansearch(v, x, lo, hi)

Return the index of the first value of v greater than or equal to x, within the range lo:hi. Return hi+1 if all values are less than x. This implementation uses an exponential search strategy which involves two steps: 1) searching for binary search bounds via exponential steps rightward; 2) binary searching within those bounds.

source
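A small sketch, assuming the returned value is a position within lo:hi (as the hi + 1 fallback suggests):

    using Finch

    v = [1, 3, 5, 7, 9]
    Finch.scansearch(v, 6, 1, 5)    # expected 4, the first position p in 1:5 with v[p] >= 6
    Finch.scansearch(v, 10, 1, 5)   # expected 6 == hi + 1, since every value is less than 10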
Finch.set_fill_value!Method
set_fill_value!(fbr, init)

Return a tensor which is equal to fbr, but with the fill (implicit) value set to init. May reuse memory and render the original tensor unusable when modified.

julia> A = Tensor(SparseList(Element(0.0), 10), [2.0, 0.0, 3.0, 0.0, 4.0, 0.0, 5.0, 0.0, 6.0, 0.0])
 10-Tensor
 └─ SparseList (0.0) [1:10]
    ├─ [1]: 2.0
@@ -317,7 +317,7 @@
    ├─ [3]: 3.0
    ├─ ⋮
    ├─ [7]: 5.0
-   └─ [9]: 6.0
source
Finch.set_loop_orderFunction

set_loop_order(root)

Heuristically chooses a total order for all loops in the program by inserting reorder statements inside reformat, query, and aggregate nodes.

Accepts programs of the form:

      REORDER := reorder(relabel(ALIAS, FIELD...), FIELD...)
+   └─ [9]: 6.0
source
Finch.set_loop_orderFunction

set_loop_order(root)

Heuristically chooses a total order for all loops in the program by inserting reorder statements inside reformat, query, and aggregate nodes.

Accepts programs of the form:

      REORDER := reorder(relabel(ALIAS, FIELD...), FIELD...)
        ACCESS := reorder(relabel(ALIAS, idxs_1::FIELD...), idxs_2::FIELD...) where issubsequence(idxs_1, idxs_2)
     POINTWISE := ACCESS | mapjoin(IMMEDIATE, POINTWISE...) | reorder(IMMEDIATE, FIELD...) | IMMEDIATE
     MAPREDUCE := POINTWISE | aggregate(IMMEDIATE, IMMEDIATE, POINTWISE, FIELD...)
@@ -325,12 +325,12 @@
 COMPUTE_QUERY := query(ALIAS, reformat(IMMEDIATE, arg::(REORDER | MAPREDUCE)))
   INPUT_QUERY := query(ALIAS, TABLE)
          STEP := COMPUTE_QUERY | INPUT_QUERY
-         ROOT := PLAN(STEP..., produces(ALIAS...))
source
Finch.set_scheduler!Method
set_scheduler!(scheduler)

Set the current scheduler to scheduler. The scheduler is used by compute to execute lazy tensor programs.

source
Finch.swizzleMethod
swizzle(tns, dims)

Create a SwizzleArray to transpose any tensor tns such that

    swizzle(tns, dims)[i...] == tns[i[dims]]
source
Finch.thaw!Method
thaw!(ctx, tns)

Thaw the read-only virtual tensor tns in the context ctx and return it. Afterwards, the tensor is update-only.

source
Finch.thaw_level!Function
thaw_level!(ctx, lvl, pos, init)

Given the last reference position, pos, thaw all fibers within lvl assuming that we have previously assembled and frozen 1:pos.

source
Finch.toeplitzMethod
toeplitz(tns, dim)

Create a ToeplitzArray such that

    Toeplitz(tns, dim)[i...] == tns[i[1:dim-1]..., i[dim] + i[dim + 1], i[dim + 2:end]...]

The ToeplitzArray can be thought of as adding a dimension that shifts another dimension of the original tensor.

source
Finch.unblockMethod
unblock(ex)

Flatten any redundant blocks into a single block, over the whole expression. ex is the target Julia expression.

source
Finch.unfurlMethod
unfurl(ctx, tns, ext, proto)

Return an array object (usually a looplet nest) for lowering the outermost dimension of virtual tensor tns. ext is the extent of the looplet. proto is the protocol that should be used for this index, but one doesn't need to unfurl all the indices at once.

source
Finch.unquote_literalsMethod
unquote_literals(ex)

Unquote QuoteNodes when this doesn't change the semantic meaning. ex is the target Julia expression.

source
Finch.unresolveMethod
unresolve(ex)

Unresolve function literals into function symbols. ex is the target Julia expression.

source
Finch.virtual_callMethod
virtual_call(ctx, f, a...)

Given the virtual arguments a..., and a literal function f, return a virtual object representing the result of the function call. If the function is not foldable, return nothing. This function is used so that we can call e.g. tensor constructors in finch code.

source
Finch.virtual_movetoFunction
virtual_moveto(device, arr)

If the virtual array is not on the given device, copy the array to that device. This function may modify underlying data arrays, but cannot change the virtual itself. This function is used to move data to the device before a kernel is launched.

source
Finch.virtual_resize!Function
virtual_resize!(ctx, tns, dims...)

Resize tns in the context ctx. This is a function similar in spirit to Base.resize!.

source
Finch.virtual_sizeFunction
virtual_size(ctx, tns)

Return a tuple of the dimensions of tns in the context ctx. This is a function similar in spirit to Base.axes.

source
Finch.virtualizeMethod
virtualize(ctx, ex, T, [tag])

Return the virtual program corresponding to the Julia expression ex of type T in the JuliaContext ctx. Implementers may support the optional tag argument, which is used to name the resulting virtual variable.

source
Finch.windowMethod
window(tns, dims)

Create a WindowedArray which represents a view into another tensor

    window(tns, dims)[i...] == tns[dim[1][i], dim[2][i], ...]

The windowed array restricts the new dimension to the dimension of valid indices of each dim. The dims may also be nothing to represent a full view of the underlying dimension.

source
Finch.wrapperizeMethod
wrapperize(ctx, root)

Convert index expressions in the program root to wrapper arrays, according to the rules in get_wrapper_rules. By default, the following transformations are performed:

A[i - j] => A[i + (-j)]
+         ROOT := PLAN(STEP..., produces(ALIAS...))
source
Finch.set_scheduler!Method
set_scheduler!(scheduler)

Set the current scheduler to scheduler. The scheduler is used by compute to execute lazy tensor programs.

source
Finch.swizzleMethod
swizzle(tns, dims)

Create a SwizzleArray to transpose any tensor tns such that

    swizzle(tns, dims)[i...] == tns[i[dims]]
source
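For instance, a two-dimensional tensor can be lazily transposed (a sketch; the dims are written here as varargs):

    using Finch

    A = Tensor(Dense(Dense(Element(0.0))), [1.0 2.0 3.0; 4.0 5.0 6.0])
    At = swizzle(A, 2, 1)    # At[i, j] corresponds to A[j, i] per the identity above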
Finch.thaw!Method
thaw!(ctx, tns)

Thaw the read-only virtual tensor tns in the context ctx and return it. Afterwards, the tensor is update-only.

source
Finch.thaw_level!Function
thaw_level!(ctx, lvl, pos, init)

Given the last reference position, pos, thaw all fibers within lvl assuming that we have previously assembled and frozen 1:pos.

source
Finch.toeplitzMethod
toeplitz(tns, dim)

Create a ToeplitzArray such that

    Toeplitz(tns, dim)[i...] == tns[i[1:dim-1]..., i[dim] + i[dim + 1], i[dim + 2:end]...]

The ToeplitzArray can be thought of as adding a dimension that shifts another dimension of the original tensor.

source
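As a sketch of the added shifting dimension (no outputs asserted):

    using Finch

    A = Tensor(Dense(Element(0.0)), collect(1.0:8.0))
    T = Finch.toeplitz(A, 1)    # T[i, j] corresponds to A[i + j] per the identity above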
Finch.unblockMethod
unblock(ex)

Flatten any redundant blocks into a single block, over the whole expression. ex is the target Julia expression.

source
Finch.unfurlMethod
unfurl(ctx, tns, ext, proto)

Return an array object (usually a looplet nest) for lowering the outermost dimension of virtual tensor tns. ext is the extent of the looplet. proto is the protocol that should be used for this index, but one doesn't need to unfurl all the indices at once.

source
Finch.unquote_literalsMethod
unquote_literals(ex)

Unquote QuoteNodes when this doesn't change the semantic meaning. ex is the target Julia expression.

source
Finch.unresolveMethod
unresolve(ex)

Unresolve function literals into function symbols. ex is the target Julia expression.

source
Finch.virtual_callMethod
virtual_call(ctx, f, a...)

Given the virtual arguments a..., and a literal function f, return a virtual object representing the result of the function call. If the function is not foldable, return nothing. This function is used so that we can call e.g. tensor constructors in finch code.

source
Finch.virtual_movetoFunction
virtual_moveto(device, arr)

If the virtual array is not on the given device, copy the array to that device. This function may modify underlying data arrays, but cannot change the virtual itself. This function is used to move data to the device before a kernel is launched.

source
Finch.virtual_resize!Function
virtual_resize!(ctx, tns, dims...)

Resize tns in the context ctx. This is a function similar in spirit to Base.resize!.

source
Finch.virtual_sizeFunction
virtual_size(ctx, tns)

Return a tuple of the dimensions of tns in the context ctx. This is a function similar in spirit to Base.axes.

source
Finch.virtualizeMethod
virtualize(ctx, ex, T, [tag])

Return the virtual program corresponding to the Julia expression ex of type T in the JuliaContext ctx. Implementers may support the optional tag argument, which is used to name the resulting virtual variable.

source
Finch.windowMethod
window(tns, dims)

Create a WindowedArray which represents a view into another tensor

    window(tns, dims)[i...] == tns[dim[1][i], dim[2][i], ...]

The windowed array restricts the new dimension to the dimension of valid indices of each dim. The dims may also be nothing to represent a full view of the underlying dimension.

source
Finch.wrapperizeMethod
wrapperize(ctx, root)

Convert index expressions in the program root to wrapper arrays, according to the rules in get_wrapper_rules. By default, the following transformations are performed:

A[i - j] => A[i + (-j)]
 A[3 * i] => ScaleArray(A, (3,))[i]
 A[i * j] => ProductArray(A, 1)[i, j]
 A[i + 1] => OffsetArray(A, (1,))[i]
 A[i + j] => ToeplitzArray(A, 1)[i, j]
-A[~i] => PermissiveArray(A, 1)[i]

The loop binding order may be used to determine which index comes first in an expression like A[i + j]. Thus, for i=:,j=:; ... A[i + j] will result in ToeplitzArray(A, 1)[j, i], but for j=:,i=:; ... A[i + j] results in ToeplitzArray(A, 1)[i, j]. wrapperize runs before dimensionalization, so resulting raw indices may participate in dimensionalization according to the semantics of the wrapper.

source
Finch.@barrierMacro
@barrier args... ex

Wrap ex in a let block that captures all free variables in ex that are bound in the arguments. This is useful for ensuring that the variables in ex are not mutated by the arguments.

source
Finch.@closureMacro
@closure closure_expression

Wrap the closure definition closure_expression in a let block to encourage the julia compiler to generate improved type information. For example:

callfunc(f) = f()
+A[~i] => PermissiveArray(A, 1)[i]

The loop binding order may be used to determine which index comes first in an expression like A[i + j]. Thus, for i=:,j=:; ... A[i + j] will result in ToeplitzArray(A, 1)[j, i], but for j=:,i=:; ... A[i + j] results in ToeplitzArray(A, 1)[i, j]. wrapperize runs before dimensionalization, so resulting raw indices may participate in dimensionalization according to the semantics of the wrapper.

source
Finch.@barrierMacro
@barrier args... ex

Wrap ex in a let block that captures all free variables in ex that are bound in the arguments. This is useful for ensuring that the variables in ex are not mutated by the arguments.

source
Finch.@closureMacro
@closure closure_expression

Wrap the closure definition closure_expression in a let block to encourage the julia compiler to generate improved type information. For example:

callfunc(f) = f()
 
 function foo(n)
    for i=1:n
@@ -342,19 +342,19 @@
            callfunc(@closure ()->println("Hello $i"))
        end
    end
-end

There's nothing nice about this - it's a heuristic workaround for some inefficiencies in the type information inferred by the julia 0.6 compiler. However, it can result in large speedups in many cases, without the need to restructure the code to avoid the closure.

source
Finch.@einsumMacro
@einsum tns[idxs...] <<op>>= ex...

Construct an einsum expression that computes the result of applying op to the tensor tns with the indices idxs and the tensors in the expression ex. The result is stored in the variable tns.

ex may be any pointwise expression consisting of function calls and tensor references of the form tns[idxs...], where tns and idxs are symbols.

The <<op>> operator can be any binary operator that is defined on the element type of the expression ex.

The einsum will evaluate the pointwise expression tns[idxs...] <<op>>= ex... over all combinations of index values in tns and the tensors in ex.

Here are a few examples:

@einsum C[i, j] += A[i, k] * B[k, j]
+end

There's nothing nice about this - it's a heuristic workaround for some inefficiencies in the type information inferred by the julia 0.6 compiler. However, it can result in large speedups in many cases, without the need to restructure the code to avoid the closure.

source
Finch.@einsumMacro
@einsum tns[idxs...] <<op>>= ex...

Construct an einsum expression that computes the result of applying op to the tensor tns with the indices idxs and the tensors in the expression ex. The result is stored in the variable tns.

ex may be any pointwise expression consisting of function calls and tensor references of the form tns[idxs...], where tns and idxs are symbols.

The <<op>> operator can be any binary operator that is defined on the element type of the expression ex.

The einsum will evaluate the pointwise expression tns[idxs...] <<op>>= ex... over all combinations of index values in tns and the tensors in ex.

Here are a few examples:

@einsum C[i, j] += A[i, k] * B[k, j]
 @einsum C[i, j, k] += A[i, j] * B[j, k]
 @einsum D[i, k] += X[i, j] * Y[j, k]
 @einsum J[i, j] = H[i, j] * I[i, j]
 @einsum N[i, j] = K[i, k] * L[k, j] - M[i, j]
 @einsum R[i, j] <<max>>= P[i, k] + Q[k, j]
-@einsum x[i] = A[i, j] * x[j]
source
Finch.@finchMacro
@finch [options...] prgm

Run a finch program prgm. The syntax for a finch program is a set of nested loops, statements, and branches over pointwise array assignments. For example, the following program computes the sum of two arrays A = B + C:

@finch begin
+@einsum x[i] = A[i, j] * x[j]
source
Finch.@finchMacro
@finch [options...] prgm

Run a finch program prgm. The syntax for a finch program is a set of nested loops, statements, and branches over pointwise array assignments. For example, the following program computes the sum of two arrays A = B + C:

@finch begin
     A .= 0
     for i = _
         A[i] = B[i] + C[i]
     end
     return A
-end

Finch programs are composed using the following syntax:

  • arr .= 0: an array declaration initializing arr to zero.
  • arr[inds...]: an array access; the array must be a variable, and each index may be another finch expression.
  • x + y, f(x, y): function calls, where x and y are finch expressions.
  • arr[inds...] = ex: an array assignment expression, setting arr[inds] to the value of ex.
  • arr[inds...] += ex: an incrementing array expression, adding ex to arr[inds]. The operators *, &, and | are also supported.
  • arr[inds...] <<min>>= ex: an incrementing array expression with a custom operator, e.g. <<min>> is the minimum operator.
  • for i = _ body end: a loop over the index i, where the dimension _ is inferred from the arrays accessed with i in body.
  • if cond body end: a conditional branch that executes only iterations where cond is true.
  • return (tnss...,): at global scope, exit the program and return the tensors tnss with their new dimensions. By default, any tensor declared in global scope is returned.

Symbols are used to represent variables, and their values are taken from the environment. Loops introduce index variables into the scope of their bodies.

Finch uses the types of the arrays and symbolic analysis to discover program optimizations. If B and C are sparse array types, the program will only run over the nonzeros of either.

Semantically, Finch programs execute every iteration. However, Finch can use sparsity information to reliably skip iterations when possible.

options are optional keyword arguments:

  • algebra: the algebra to use for the program. The default is DefaultAlgebra().
  • mode: the optimization mode to use for the program. Possible modes are:
    • :debug: run the program in debug mode, with bounds checking and better error handling.
    • :safe: run the program in safe mode, with modest checks for performance and correctness.
    • :fast: run the program in fast mode, with no checks or warnings, this mode is for power users.
    The default is :safe.

See also: @finch_code

source
Finch.@finch_codeMacro

@finch_code [options...] prgm

Return the code that would be executed in order to run a finch program prgm.

See also: @finch

source
Finch.@finch_kernelMacro
@finch_kernel [options...] fname(args...) = prgm

Return a definition for a function named fname which executes @finch prgm on the arguments args. args should be a list of variables holding representative argument instances or types.

See also: @finch

source
Finch.@stagedMacro
Finch.@staged

This macro is used internally in Finch in lieu of @generated functions. It ensures the first Finch invocation runs in the latest world, and leaves hooks so that subsequent calls to Finch.refresh can update the world and invalidate old versions. If the body contains closures, this macro uses an eval and invokelatest strategy. Otherwise, it uses a generated function. This macro does not support type parameters, varargs, or keyword arguments.

source
Finch.FinchNotation.accessConstant
access(tns, mode, idx...)

Finch AST expression representing the value of tensor tns at the indices idx.... The mode differentiates between reads or updates and whether the access is in-place.

source
Finch.FinchNotation.assignConstant
assign(lhs, op, rhs)

Finch AST statement that updates the value of lhs to op(lhs, rhs). Overwriting is accomplished with the function overwrite(lhs, rhs) = rhs.

source
Finch.FinchNotation.defineConstant
define(lhs, rhs, body)

Finch AST statement that defines lhs as having the value rhs in body. A new scope is introduced to evaluate body.

source
Finch.FinchNotation.indexConstant
index(name)

Finch AST expression for an index named name. Each index must be quantified by a corresponding loop which iterates over all values of the index.

source
Finch.FinchNotation.loopConstant
loop(idx, ext, body)

Finch AST statement that runs body for each value of idx in ext. Tensors in body must have ranges that agree with ext. A new scope is introduced to evaluate body.

source
Finch.FinchNotation.sieveConstant
sieve(cond, body)

Finch AST statement that only executes body if cond is true. A new scope is introduced to evaluate body.

source
Finch.FinchNotation.tagConstant
tag(var, bind)

Finch AST expression for a global variable var with the value bind. Because the finch compiler cannot pass variable state from the program domain to the type domain directly, the tag type represents a value bind referred to by a variable named bind. All tags in the same program must agree on the value of variables, and only one value will be virtualized.

source
Finch.FinchNotation.virtualConstant
virtual(val)

Finch AST expression for an object val which has special meaning to the compiler. This type is typically used for tensors, as it allows users to specify the tensor's shape and data type.

source
Finch.FinchNotation.yieldbindConstant
yieldbind(args...)

Finch AST statement that sets the result of the program to the values of variables args.... Subsequent statements will not affect the result of the program.

source
Finch.FinchNotation.DimensionlessType
Dimensionless()

A singleton type representing the lack of a dimension. This is used in place of a dimension when we want to avoid dimensionality checks. In the @finch macro, you can write Dimensionless() with an underscore as for i = _, allowing finch to pick up the loop bounds from the tensors automatically.

source
Finch.FinchNotation.FinchNodeType
FinchNode

A Finch IR node, used to represent an imperative, physical Finch program.

The FinchNode struct represents many different Finch IR nodes. The nodes are differentiated by a FinchNotation.FinchNodeKind enum.

source
Finch.FinchNotation.extrudeMethod
extrude(i)

The extrude protocol declares that the tensor update happens in order and only once, so that reduction loops occur below the extrude loop. It is not usually necessary to declare an extrude protocol, but it is used internally to reason about tensor format requirements.

source
Finch.FinchNotation.finch_leafMethod
finch_leaf(x)

Return a terminal finch node wrapper around x. A convenience function to determine whether x should be understood by default as a literal, value, or virtual.

source
Finch.FinchNotation.followMethod
follow(i)

The follow protocol ignores the structure of the tensor. By itself, the follow protocol iterates over each value of the tensor in order, looking it up with random access. The follow protocol may specialize on e.g. the zero value of the tensor, but does not specialize on the structure of the tensor. This enables efficient random access and avoids large code sizes.

source
Finch.FinchNotation.gallopMethod
gallop(i)

The gallop protocol iterates over each pattern element of a tensor, leading the iteration and superseding the priority of other tensors. Mutual leading is possible, where we fast-forward to the largest step between either leader.

source
Finch.FinchNotation.initwriteMethod
initwrite(z)(a, b)

initwrite(z) is a function which may assert that a is equal to z, and returns b. By default, lhs[] = rhs is equivalent to lhs[] <<initwrite(fill_value(lhs))>>= rhs.

source
Finch.FinchNotation.isstatefulMethod
isstateful(node)

Returns true if the node is a finch statement, and false if the node is an index expression. Typically, statements specify control flow and expressions describe values.

source
Finch.FinchNotation.laminateMethod
laminate(i)

The laminate protocol declares that the tensor update may happen out of order and multiple times. It is not usually necessary to declare a laminate protocol, but it is used internally to reason about tensor format requirements.

source
Finch.FinchNotation.overwriteMethod
overwrite(z)(a, b)

overwrite(z) is a function which returns b always. lhs[] := rhs is equivalent to lhs[] <<overwrite>>= rhs.

julia> a = Tensor(SparseList(Element(0.0)), [0, 1.1, 0, 4.4, 0])
+end

Finch programs are composed using the following syntax:

  • arr .= 0: an array declaration initializing arr to zero.
  • arr[inds...]: an array access; the array must be a variable, and each index may be another finch expression.
  • x + y, f(x, y): function calls, where x and y are finch expressions.
  • arr[inds...] = ex: an array assignment expression, setting arr[inds] to the value of ex.
  • arr[inds...] += ex: an incrementing array expression, adding ex to arr[inds]. The operators *, &, and | are also supported.
  • arr[inds...] <<min>>= ex: an incrementing array expression with a custom operator, e.g. <<min>> is the minimum operator.
  • for i = _ body end: a loop over the index i, where the dimension _ is inferred from the arrays accessed with i in body.
  • if cond body end: a conditional branch that executes only iterations where cond is true.
  • return (tnss...,): at global scope, exit the program and return the tensors tnss with their new dimensions. By default, any tensor declared in global scope is returned.

Symbols are used to represent variables, and their values are taken from the environment. Loops introduce index variables into the scope of their bodies.

Finch uses the types of the arrays and symbolic analysis to discover program optimizations. If B and C are sparse array types, the program will only run over the nonzeros of either.

Semantically, Finch programs execute every iteration. However, Finch can use sparsity information to reliably skip iterations when possible.

options are optional keyword arguments:

  • algebra: the algebra to use for the program. The default is DefaultAlgebra().
  • mode: the optimization mode to use for the program. Possible modes are:
    • :debug: run the program in debug mode, with bounds checking and better error handling.
    • :safe: run the program in safe mode, with modest checks for performance and correctness.
    • :fast: run the program in fast mode, with no checks or warnings, this mode is for power users.
    The default is :safe.

See also: @finch_code

source
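To complement the example above, here is a hedged sketch combining a conditional with a custom reduction operator, computing the smallest nonzero entry of a sparse vector:

    using Finch

    B = Tensor(SparseList(Element(0.0)), [0.0, 2.0, 0.0, 5.0, 0.0])
    x = Scalar(Inf)
    @finch begin
        for i = _
            if B[i] != 0
                x[] <<min>>= B[i]
            end
        end
    end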
Finch.@finch_codeMacro

@finch_code [options...] prgm

Return the code that would be executed in order to run a finch program prgm.

See also: @finch

source
Finch.@finch_kernelMacro
@finch_kernel [options...] fname(args...) = prgm

Return a definition for a function named fname which executes @finch prgm on the arguments args. args should be a list of variables holding representative argument instances or types.

See also: @finch

source
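A hypothetical sketch of the workflow (the kernel name my_add and these representative arguments are illustrative, not from the source):

    using Finch

    C = Tensor(Dense(Element(0.0)), zeros(4))
    A = Tensor(Dense(Element(0.0)), zeros(4))
    B = Tensor(Dense(Element(0.0)), zeros(4))
    def = @finch_kernel my_add(C, A, B) = begin
        C .= 0
        for i = _
            C[i] = A[i] + B[i]
        end
    end
    eval(def)    # defines my_add specialized to these argument types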
Finch.@stagedMacro
Finch.@staged

This macro is used internally in Finch in lieu of @generated functions. It ensures the first Finch invocation runs in the latest world, and leaves hooks so that subsequent calls to Finch.refresh can update the world and invalidate old versions. If the body contains closures, this macro uses an eval and invokelatest strategy. Otherwise, it uses a generated function. This macro does not support type parameters, varargs, or keyword arguments.

source
Finch.FinchNotation.accessConstant
access(tns, mode, idx...)

Finch AST expression representing the value of tensor tns at the indices idx.... The mode differentiates between reads or updates and whether the access is in-place.

source
Finch.FinchNotation.assignConstant
assign(lhs, op, rhs)

Finch AST statement that updates the value of lhs to op(lhs, rhs). Overwriting is accomplished with the function overwrite(lhs, rhs) = rhs.

source
Finch.FinchNotation.defineConstant
define(lhs, rhs, body)

Finch AST statement that defines lhs as having the value rhs in body. A new scope is introduced to evaluate body.

source
Finch.FinchNotation.indexConstant
index(name)

Finch AST expression for an index named name. Each index must be quantified by a corresponding loop which iterates over all values of the index.

source
Finch.FinchNotation.loopConstant
loop(idx, ext, body)

Finch AST statement that runs body for each value of idx in ext. Tensors in body must have ranges that agree with ext. A new scope is introduced to evaluate body.

source
Finch.FinchNotation.sieveConstant
sieve(cond, body)

Finch AST statement that only executes body if cond is true. A new scope is introduced to evaluate body.

source
Finch.FinchNotation.tagConstant
tag(var, bind)

Finch AST expression for a global variable var with the value bind. Because the finch compiler cannot pass variable state from the program domain to the type domain directly, the tag type represents a value bind referred to by a variable named bind. All tags in the same program must agree on the value of variables, and only one value will be virtualized.

source
Finch.FinchNotation.virtualConstant
virtual(val)

Finch AST expression for an object val which has special meaning to the compiler. This type is typically used for tensors, as it allows users to specify the tensor's shape and data type.

source
Finch.FinchNotation.yieldbindConstant
yieldbind(args...)

Finch AST statement that sets the result of the program to the values of variables args.... Subsequent statements will not affect the result of the program.

source
Finch.FinchNotation.DimensionlessType
Dimensionless()

A singleton type representing the lack of a dimension. This is used in place of a dimension when we want to avoid dimensionality checks. In the @finch macro, you can write Dimensionless() with an underscore as for i = _, allowing finch to pick up the loop bounds from the tensors automatically.

source
Finch.FinchNotation.FinchNodeType
FinchNode

A Finch IR node, used to represent an imperative, physical Finch program.

The FinchNode struct represents many different Finch IR nodes. The nodes are differentiated by a FinchNotation.FinchNodeKind enum.

source
Finch.FinchNotation.extrudeMethod
extrude(i)

The extrude protocol declares that the tensor update happens in order and only once, so that reduction loops occur below the extrude loop. It is not usually necessary to declare an extrude protocol, but it is used internally to reason about tensor format requirements.

source
Finch.FinchNotation.finch_leafMethod
finch_leaf(x)

Return a terminal finch node wrapper around x. A convenience function to determine whether x should be understood by default as a literal, value, or virtual.

source
Finch.FinchNotation.followMethod
follow(i)

The follow protocol ignores the structure of the tensor. By itself, the follow protocol iterates over each value of the tensor in order, looking it up with random access. The follow protocol may specialize on e.g. the zero value of the tensor, but does not specialize on the structure of the tensor. This enables efficient random access and avoids large code sizes.

source
Finch.FinchNotation.gallopMethod
gallop(i)

The gallop protocol iterates over each pattern element of a tensor, leading the iteration and superseding the priority of other tensors. Mutual leading is possible, where we fast-forward to the largest step between either leader.

source
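A sketch of galloping both sides of a sparse dot product (as in the protocolize example above, the protocol is written directly on the index):

    using Finch

    x = Tensor(SparseList(Element(0.0)), [0.0, 1.0, 0.0, 2.0, 0.0])
    y = Tensor(SparseList(Element(0.0)), [0.0, 0.0, 3.0, 2.0, 0.0])
    s = Scalar(0.0)
    @finch for i = _
        s[] += x[gallop(i)] * y[gallop(i)]
    end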
Finch.FinchNotation.initwriteMethod
initwrite(z)(a, b)

initwrite(z) is a function which may assert that a is equal to z, and returns b. By default, lhs[] = rhs is equivalent to lhs[] <<initwrite(fill_value(lhs))>>= rhs.

source
Finch.FinchNotation.isstatefulMethod
isstateful(node)

Returns true if the node is a finch statement, and false if the node is an index expression. Typically, statements specify control flow and expressions describe values.

source
Finch.FinchNotation.laminateMethod
laminate(i)

The laminate protocol declares that the tensor update may happen out of order and multiple times. It is not usually necessary to declare a laminate protocol, but it is used internally to reason about tensor format requirements.

source
Finch.FinchNotation.overwriteMethod
overwrite(z)(a, b)

overwrite(z) is a function which returns b always. lhs[] := rhs is equivalent to lhs[] <<overwrite>>= rhs.

julia> a = Tensor(SparseList(Element(0.0)), [0, 1.1, 0, 4.4, 0])
 5-Tensor
 └─ SparseList (0.0) [1:5]
    ├─ [2]: 1.1
@@ -363,4 +363,4 @@
 julia> x = Scalar(0.0); @finch for i=_; x[] <<overwrite>>= a[i] end;
 
 julia> x[]
-0.0
source
Finch.FinchNotation.walkMethod
walk(i)

The walk protocol usually iterates over each pattern element of a tensor in order. Note that the walk protocol "imposes" the structure of its argument on the kernel, so that we specialize the kernel to the structure of the tensor.

source
+0.0
source
Finch.FinchNotation.walkMethod
walk(i)

The walk protocol usually iterates over each pattern element of a tensor in order. Note that the walk protocol "imposes" the structure of its argument on the kernel, so that we specialize the kernel to the structure of the tensor.

source
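For instance, walking a sparse vector visits its stored entries in order (a minimal sketch counting nonzeros):

    using Finch

    a = Tensor(SparseList(Element(0.0)), [0.0, 1.1, 0.0, 4.4, 0.0])
    c = Scalar(0)
    @finch for i = _
        if a[walk(i)] != 0
            c[] += 1
        end
    end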
diff --git a/dev/tutorials_use_cases/tutorials_use_cases/index.html b/dev/tutorials_use_cases/tutorials_use_cases/index.html index ac76e9c30..240fda958 100644 --- a/dev/tutorials_use_cases/tutorials_use_cases/index.html +++ b/dev/tutorials_use_cases/tutorials_use_cases/index.html @@ -1,2 +1,2 @@ -TODO · Finch.jl
+TODO · Finch.jl