
Toward defining the interface of AbstractArrays #10064

Closed
jiahao opened this issue Feb 3, 2015 · 17 comments
Labels: arrays
@jiahao (Member) commented Feb 3, 2015

What is an AbstractArray?

This issue follows up from #987 (and complements #9586), where the question was raised about what it means to be an AbstractArray. My understanding of where this issue left off was that AbstractArray{T,N} has the following interface:

  • size is defined and returns a tuple of length N,
  • getindex is defined and returns data of type T "cheaply", where "cheaply" means "in O(1) time preferably, but we will allow some small growth bounded very loosely above by O(length(A)) to accommodate sparse matrices".

This interface is incompletely defined because Julia arrays support nontrivial indexing semantics, not all of which have the same computational complexity.

The question

This issue is about spelling out the methods supported by AbstractArrays, and in particular:

  • the minimal set of indexing behaviors AbstractArrays must support, and
  • the complexity guarantees each kind of indexing semantics must satisfy.

Indexing semantics

Julia arrays support several indexing semantics:

  • Indexing with a number (ordinary indexing)
  • Indexing with a Colon :, UnitRange or Range (slice indexing)
  • Indexing with an iterable (fancy indexing)
  • Indexing with a boolean vector (logical indexing)
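
For concreteness, the four semantics look like this on a small dense matrix (a runnable sketch; the element values are arbitrary):

```julia
# The four indexing semantics on a small dense matrix.
A = [10 20 30; 40 50 60]   # 2×3 Matrix{Int}

A[2, 3]      # ordinary indexing with numbers → 60
A[1, 2:3]    # slice indexing with a UnitRange → [20, 30]
A[[1, 3]]    # fancy indexing with an iterable of (linear) indices → [10, 20]
A[A .> 25]   # logical indexing with a boolean array → [40, 50, 30, 60]
```

Note that logical indexing returns elements in column-major (storage) order, which is why 30 appears before 60 above.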

For arrays of rank N>1 we have to further distinguish between indexing with a
single entity (linear indexing) vs indexing with N entities (Cartesian
indexing). So this gives us a matrix of 4 x 2 = 8 different indexing behaviors.
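
The linear vs. Cartesian distinction, in a quick sketch:

```julia
# Linear vs. Cartesian indexing on a rank-2 array (column-major storage).
B = reshape(collect(1:12), 3, 4)   # 3×4 Matrix{Int}

B[8]      # linear indexing: a single index into column-major order → 8
B[2, 3]   # Cartesian indexing: N = 2 indices, one per dimension → 8
```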

Some complexity computations

@andreasnoack @jakebolewski and I sat down this afternoon to work out the complexity of indexing semantics of various kinds of arrays. The goal of these calculations is to help clarify the importance of complexity bounds on the interface of AbstractArrays.

We considered the following types of arrays:

  • strided arrays: arrays which are completely described by dope vectors
    containing shape information or its equivalent. The corresponding Julia
    type is StridedArray, which is the union of SubArray and DenseArray
    (which in turn includes ordinary Array).
  • ragged arrays or "vectors of vectors", whose shape information and layout
    can be represented by Iliffe vectors. Each dimension need not be stored
    contiguously, but in general is stored in M distinct chunks of contiguous
    memory. For simplicity we consider only N-dimensional arrays produced by
    nesting ragged arrays of ragged arrays N-1 times. We also distinguish
    between sorted and unsorted variants. Ragged arrays are the default in
    Java. Ragged arrays don't really exist in Julia for M>1; for M=1 chunk per
    dimension, they correspond to the recursive family of types Vector{T},
    Vector{Vector{T}}, Vector{Vector{Vector{T}}}, ... Also, DArrays with
    customizable chunking schemes support some form of chunking where each
    chunk must live on a different process.
  • compressed sparse column (CSC) matrices, a.k.a. compressed column storage
    (CCS), which many people in #987 (array type hierarchy) want to be
    AbstractArrays. Base Julia has SparseMatrixCSC.

Where necessary the analysis was simplified by assuming a uniform prior distribution of matrix elements in each column and each row.

The results

Let

  • N be the rank of an array (i.e., the number of dimensions, or the length of the index tuple associated with a matrix element),
  • M be the number of chunks in a single ragged array dimension,
  • k be the length of a Range, Colon or iterable (= k1 * k2 * ... * kN in
    the case of using N of these entities),
  • ncol be the number of columns,
  • NNZ be the number of stored elements.

Considering only linear indexing and Cartesian indexing using the same type of
entity in each dimension, we can write the computational complexity of each
indexing behavior as

                                linear                          Cartesian
ordinary
    strided                     O(N)                            O(N)
    ragged sorted               O(N log M)                      O(N log M)
    ragged unsorted             O(N M)                          O(N M)
    CSC                         ~O(log(NNZ/ncol))               ~O(log(NNZ/ncol))
slicing (view)
    strided                     O(N)                            O(N)
    ragged sorted               O(min(M, k) N log M)            O(min(M, k) N log M)
    ragged unsorted             ~O(k M^(N-1))                   ~O(k M^(N-1))
    CSC                         ~O(k1 log(NNZ/ncol))            ~O(k1 log(NNZ/ncol))
slicing (copy)
    strided                     O(k)                            O(k)
    ragged sorted               ~O(M^N)                         ~O(M^N)
    ragged unsorted             ~O(k M^(N-1))                   ~O(k M^(N-1))
    CSC                         ~O(k ncol/NNZ log(NNZ/ncol))    ~O(k ncol/NNZ log(NNZ/ncol))
iterable
    strided                     O(k N)                          O(k N)
    ragged sorted               O(k N log M)                    O(k N log M)
    ragged unsorted             O(k N M)                        O(N M)
    CSC                         ~O(k log(NNZ/ncol))             ~O(k log(NNZ/ncol))
logical
    strided                     O(NNZ)                          O(NNZ)
    ragged sorted               O(NNZ N)                        O(NNZ N)
    ragged unsorted             O(NNZ N M)                      O(NNZ N M)
    CSC                         O(NNZ log(NNZ/ncol))            O(NNZ log(NNZ/ncol))

In no case did I find that the cost of the Cartesian to linear offset computation dominates the complexity.

Note that we have different results depending on whether slicing returns

  • a view, where only the index set of reference elements needs to be computed,
    or
  • a copy, where all the matrix elements also have to be fetched.

Breakdown of the computations

Strided array

  • Ordinary indexing is easy: computing the offset grows as the rank.
  • Slicing into a view requires computing the same bounds as the input Range.
  • A slice that produces a copy means fetching every element, which dominates
    the cost.
  • Indexing with an iterable requires an ordinary index operation per element.
  • Logical slices can only be computed by iterating through each element in
    each dimension.
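
The O(N) cost of ordinary indexing is just the dope-vector offset computation: one multiply-add per dimension. A sketch (the helper name `strided_offset` is mine, purely for illustration):

```julia
# Compute the zero-based linear offset of a Cartesian index into a strided
# array from its strides: one multiply-add per dimension, hence O(N) in the
# rank N. (Illustrative helper, not a Base function.)
function strided_offset(strides::NTuple{N,Int}, I::NTuple{N,Int}) where {N}
    off = 0
    for d in 1:N
        off += (I[d] - 1) * strides[d]
    end
    return off
end

# For a dense column-major Array, strides(A) == (1, 3, 12), and the offset
# agrees with Julia's own linear index (shifted by one):
A = zeros(3, 4, 5)
strided_offset(strides(A), (2, 3, 4)) + 1 == LinearIndices(A)[2, 3, 4]   # true
```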

Ragged sorted array

  • If the chunks in each ragged dimension have sorted indexes, then lookup in
    ordinary indexing is a sorted search problem in each dimension. We get log(M)
    from binary search.
  • Slicing is also essentially 2 searches per dimension (plus more work, which
    does not dominate, if the slice has a nonunit stride), because at every
    dimension you have to compute the intersection of two intervals, namely the
    slice and the index set of that chunk.
  • Iterables are repeated sorted searches.
  • Logical slices traverse every chunk, so the chunk structure doesn't really
    matter.
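
The log(M) term can be made concrete with a toy representation of one sorted ragged dimension (the `RaggedDim` type is hypothetical, invented for this sketch):

```julia
# Hypothetical sketch: one ragged dimension stored as M chunks, each covering
# a sorted, contiguous range of global indices. Finding the chunk that owns a
# given index is a binary search over the chunk start offsets: O(log M).
struct RaggedDim{T}
    starts::Vector{Int}        # starts[m]: first global index held by chunk m (sorted)
    chunks::Vector{Vector{T}}  # the M chunks of data
end

function Base.getindex(r::RaggedDim, i::Int)
    m = searchsortedlast(r.starts, i)    # binary search: which chunk owns i?
    return r.chunks[m][i - r.starts[m] + 1]
end

r = RaggedDim([1, 4, 9], [[10, 20, 30], [40, 50, 60, 70, 80], [90, 100]])
r[5]   # falls in chunk 2 at local position 2 → 50
```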

Ragged unsorted array (indexes for each chunk are not sorted)

  • Ordinary indexing is a linear (unsorted) search problem in each dimension
  • Slicing means a general set intersection computation for the index set of
    each chunk. On average we expect something like (NNZ/M^N) entries in each
    index set (per dimension), assuming a uniform prior on the distribution
    of entries in chunks. We expect on average k/M matches for each chunk.
    So we expect to do O((k/M) * (M^N)) work. The work is the same to compute the
    index set for a view as it is to copy.
  • Indexing by an iterable is repeated set intersections.
  • Logical slices have to be evaluated over all chunks, but each entry still
    has to be mapped to a chunk by linear search each time.
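
The same toy representation with unsorted chunk index sets shows where the O(M) factor comes from (again a hypothetical type, invented for this sketch):

```julia
# Hypothetical sketch: each chunk holds an arbitrary (unsorted) set of global
# indices, so lookup degenerates to a linear scan over the chunks and within
# each chunk: O(M) per dimension instead of O(log M).
struct UnsortedRaggedDim{T}
    ids::Vector{Vector{Int}}   # ids[m]: global indices held by chunk m (unsorted)
    chunks::Vector{Vector{T}}
end

function Base.getindex(r::UnsortedRaggedDim, i::Int)
    for m in eachindex(r.ids)
        j = findfirst(==(i), r.ids[m])   # linear (unsorted) search
        j === nothing || return r.chunks[m][j]
    end
    throw(BoundsError(r, i))
end

r = UnsortedRaggedDim([[3, 1], [2]], [[30, 10], [20]])
r[1]   # found in chunk 1 at local position 2 → 10
```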

CSC

  • Ordinary indexing means looking up the column pointer store of the current and next
    columns, identifying the corresponding row indexes and values in the resulting
    half-open interval. Again assuming for simplicity a uniform prior distribution
    of entries, the expected number of entries in any given column is O(NNZ/ncol).
    Since each entry is sorted by row index, we can use a sorted search so the
    complexity of lookup is O(log (NNZ/ncol)).
  • Slicing means looking up the boundaries of the interval and indexing into each,
    so the cost is O(log (NNZ/ncol)). When the stride of the slice is nonunit there
    is extra work that needs to be done but I haven't checked the cost of this yet.
    The probability to match one element is ~ncol/NNZ, so we expect to retrieve
    k * ncol/NNZ stored elements.
  • Indexing by an iterable reduces to multiple single element lookups.
  • Logical indexing can be computed by traversing all the stored entries in order.
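
Writing the single-element CSC lookup out by hand makes the O(log(NNZ/ncol)) cost visible (the helper name `csc_getindex` is mine; it mirrors what SparseMatrixCSC's getindex does, using its colptr/rowval/nzval fields):

```julia
using SparseArrays

# Single-element lookup in CSC storage: find the half-open range of stored
# entries for column j from colptr, then do a sorted search over the row
# indices in that range. Under the uniform-fill assumption the range holds
# O(NNZ/ncol) entries, so the search costs O(log(NNZ/ncol)).
function csc_getindex(A::SparseMatrixCSC, i::Int, j::Int)
    lo, hi = A.colptr[j], A.colptr[j+1] - 1          # stored entries of column j
    r = searchsortedfirst(view(A.rowval, lo:hi), i)  # rowval is sorted per column
    p = lo + r - 1
    (p <= hi && A.rowval[p] == i) ? A.nzval[p] : zero(eltype(A))
end

A = sparse([1, 3, 2], [1, 1, 2], [1.0, 2.0, 3.0], 3, 2)
csc_getindex(A, 3, 1)   # == 2.0, matching A[3, 1]
```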

Cartesian indexing behaviors mixing different indexing types in each dimension
can get more complicated. CSC is an interesting one to look at:

  • A[:,5] returning a view costs O(1) since the answer is given immediately by
    looking up the start and end of the column in the list of column offsets.
  • A[:,5] returning a copy costs ~O(NNZ/ncol), the expected number of entries in
    the column.
  • A[5,:] returning a view costs O(ncol log(NNZ/ncol)), since you have to do a
    sorted search for each column.
  • A[5,:] returning a copy costs O(NNZ log(NNZ/ncol)), since you have to compute
    the index set then copy the elements, which on average we expect to have
    NNZ/ncol of them.
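
The O(1) claim for the column view is visible directly in the storage: the stored entries of column j are exactly the range colptr[j]:colptr[j+1]-1, no search required. A sketch on a small hand-built matrix:

```julia
using SparseArrays

# Stored entries of column j in CSC storage: just a colptr lookup, O(1).
A = sparse([1, 2, 3, 1], [1, 2, 2, 3], [1.0, 2.0, 3.0, 4.0], 3, 3)
j = 2
rng  = A.colptr[j]:(A.colptr[j+1] - 1)   # positions of column j's stored entries
rows = A.rowval[rng]                     # their row indices → [2, 3]
vals = A.nzval[rng]                      # their values      → [2.0, 3.0]
```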
@tkelman (Contributor) commented Feb 3, 2015

> Ordinary indexing means looking up the column pointer of the current and next rows

Should read: "... of the current and next ~~rows~~ columns".

@johnmyleswhite (Member)

Bless you for opening this, Jiahao.

@timholy (Member) commented Feb 4, 2015

By "rank" do you mean dimensionality? "rank" has a different meaning in linear algebra, of course, and that meaning of rank does not influence the cost of indexing.

@jiahao (Member, Author) commented Feb 4, 2015

Updated with clarifications and corrections.

@timholy it is quite unfortunate that "rank of an array" means something very different from "rank of a matrix", and that both meanings are quite widely used. It is rather unpretty. One might even say it is a rank situation.

@jiahao (Member, Author) commented Aug 23, 2016

@JeffBezanson (Member)

Yes, thank you for posting that link!

@JeffBezanson (Member) commented Aug 23, 2016

Here's a very quick summary of the current state of the API proposal in the gist:

  • indices(A) - Returns an iterator that generates objects representing the canonical indices of A.
  • order(A) - Gives a total order (isless) predicate on index objects, describing the order in which indices(A) generates them.
  • axes(A) - Returns a container that maps dimension labels to objects describing possible values of components of indices. (note ndims(A) == length(axes(A))). The same dimension labels are accepted by functions like reducedims.
  • Maybe: bounds(A) which returns basically map(extrema, axes(A)), but possibly using some kind of interval type.
  • A[I] should be implemented, where I is a single index object.
  • A constructor ArrayType(IndexSet, Data) should exist.

Arrays can have the ability to return multiple index iterators. For example SparseMatrixCSC should be able to return an iterator over the indices of its nonzeros. Or you might want to return an iterator that provides faster access.

It's possible that dimensions need to be ordered, such that 1:N are always valid dimension labels. We might also want to mandate that "index objects" be tuples, providing compatibility that allows code like A[indices(B)] = 0.
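
To make the proposal concrete, here is a hypothetical sketch of what the API could look like for a plain dense Array. None of these generic functions existed under these names; the `prop_` prefix marks them as illustrations of the proposal, not Base definitions:

```julia
# Hypothetical sketch of the proposed API, specialized to dense arrays.
# prop_indices: canonical index objects (here, tuples) in storage order.
prop_indices(A::AbstractArray) = (Tuple(I) for I in CartesianIndices(A))
# prop_order: a total-order predicate matching the order prop_indices uses
# (column-major: last component varies slowest).
prop_order(A::AbstractArray) = (i, j) -> isless(reverse(i), reverse(j))
# prop_axes: a container mapping dimension labels to possible index components;
# note ndims(A) == length(prop_axes(A)), as the proposal requires.
prop_axes(A::AbstractArray) = Dict(d => axes(A, d) for d in 1:ndims(A))
# prop_bounds: basically map(extrema, axes(A)).
prop_bounds(A::AbstractArray) = Dict(d => extrema(axes(A, d)) for d in 1:ndims(A))

A = reshape(1:6, 2, 3)
first(prop_indices(A))            # (1, 1)
prop_order(A)((2, 1), (1, 2))     # true: (2,1) precedes (1,2) in column-major order
```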

@kshyatt added the arrays label Aug 23, 2016
@andreasnoack (Member)

It would be great to have some thoughts on distributed arrays.

@timholy (Member) commented Aug 24, 2016

Here's one dirt-simple thought that's possible to implement now: the local chunks should have indices that correspond to their "piece" of the whole array. That seems like it might make a number of things easier (fewer index gymnastics to worry about).
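
A minimal sketch of that suggestion, with a hypothetical `GlobalChunk` wrapper invented here for illustration (the OffsetArrays package provides axis-shifted arrays of this kind for real use):

```julia
# Hypothetical sketch: a local chunk that carries the global indices of its
# piece of the whole array, so chunk[i, j] uses the same (i, j) that the full
# distributed array would.
struct GlobalChunk{T,N}
    data::Array{T,N}
    ranges::NTuple{N,UnitRange{Int}}   # this chunk's piece of the global index space
end

Base.getindex(c::GlobalChunk{T,N}, I::Vararg{Int,N}) where {T,N} =
    c.data[ntuple(d -> I[d] - first(c.ranges[d]) + 1, N)...]

# A chunk covering global rows 6:10 and columns 1:5 of a larger array:
chunk = GlobalChunk(reshape(collect(1:25), 5, 5), (6:10, 1:5))
chunk[6, 1]   # local element (1, 1) → 1
```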

@JeffBezanson (Member)

That does sound compelling. I also have a half-baked idea that the outer container managing the distribution could somehow be an array mapping bounding boxes to chunks with the data, e.g. for easily identifying which chunks are involved in a computation.

@jakebolewski (Member)

This is exactly the idea promoted in ZPL ("regions") and Chapel ("distributions", the index set mapping the global index view to individual chunks, and "layouts", the local index view of the underlying buffer, a.k.a. Jeff's proposal). Lots of previous work to draw on here.

@JaredCrean2 (Contributor)

PETSc also has the notions of local-to-global index mappings, index sets, views of local data, and fast access to local data (not quite iteration, but pretty close considering it is written in C), and it has worked out really well for them in terms of usability.

@mauro3 (Contributor) commented Aug 25, 2016

There are also Fortran co-arrays. As an aside, those use two sets of parentheses for local and global indexing; maybe something to consider for {}.

@StefanKarpinski (Member)

Not sure what's actually the action item here, but @mbauman, I'll leave that up to you.

@Keno added the triage ("This should be discussed on a triage call") label Oct 5, 2017
@Keno (Member) commented Oct 5, 2017

This issue is a bit vague for a 1.0 milestone issue at this point. Let's discuss on the call next week.

@StefanKarpinski removed the triage label Oct 12, 2017
@StefanKarpinski removed this from the 1.0 milestone Oct 12, 2017
@StefanKarpinski (Member)

Too vague to be actionable for 1.0; everything actionable is traits stuff which can be done in 1.x.

@tpapp (Contributor) commented Jan 20, 2020

Is this still an open issue? My understanding is that the AbstractArray interface is pretty well-defined now, even if not in the manner proposed here (traits).
