
[MRG] Add LazyTensor for large scale OT #544

Merged: 14 commits, Oct 31, 2023
Conversation

rflamary (Collaborator) commented Oct 26, 2023

Types of changes

This PR adds a very simple LazyTensor class that can represent large scale matrices and tensors whose values are computed lazily, only when slices of the tensor are requested.

This class will be very useful when implementing large scale OT problems such as factored OT, low rank OT, and a lazy implementation of Sinkhorn. The objective is to keep it very simple: the tensor stores all necessary data together with a function that computes values on slices (without ever computing the full tensor).

Example of use

import numpy as np
from ot.utils import LazyTensor, reduce_lazytensor

n1 = 100
n2 = 200
x1 = np.random.randn(n1, 2)
x2 = np.random.randn(n2, 2)

# i, j can be integers or slices; x1, x2 have to be passed as keyword arguments
def getitem(i, j, x1, x2):
    return np.dot(x1[i], x2[j].T)

# create a lazy tensor (data given as keyword arguments)
T = LazyTensor((n1, n2), getitem, x1=x1, x2=x2)

print(T.shape)
# (100, 200)
print(T)
# LazyTensor(shape=(100, 200),attributes=(x1,x2))

# get the full tensor (not lazy)
full_T = T[:]

# get one component
T11 = T[1,1]

# get one row
T1 = T[1]

# get one column with slices
Tsliced = T[::10,5]

# get the data as attributes
x1_0 = T.x1
x2_0 = T.x2

# create a LazyTensor from another one: exp(T)
T2 = LazyTensor(T.shape, lambda i, j, T: np.exp(T[i, j]), T=T)

# compute the full sum
s = reduce_lazytensor(T, np.sum)

# compute marginals
c = reduce_lazytensor(T, np.sum, axis=1)
r = reduce_lazytensor(T, np.sum, axis=0, batch_size=50)
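To make the design above concrete, here is a minimal, self-contained sketch of the idea behind the class and the batched reduction. This is not the POT implementation: `MiniLazyTensor` and `mini_reduce` are hypothetical stand-ins for `LazyTensor` and `reduce_lazytensor`, written only to illustrate how storing a shape, a slicing function, and keyword data allows reductions over row batches without ever materializing the full matrix.

```python
import numpy as np

class MiniLazyTensor:
    """Sketch of a lazy tensor: shape + slicing function + stored data."""

    def __init__(self, shape, getitem, **kwargs):
        self.shape = shape
        self._getitem = getitem
        self.kwargs = kwargs
        # expose the stored data as attributes (T.x1, T.x2, ...)
        for key, value in kwargs.items():
            setattr(self, key, value)

    def __getitem__(self, key):
        # normalize a single index to a full tuple of indices
        if not isinstance(key, tuple):
            key = (key,) + tuple(slice(None) for _ in range(len(self.shape) - 1))
        # evaluate only the requested slice
        return self._getitem(*key, **self.kwargs)


def mini_reduce(T, func, axis=None, batch_size=50):
    """Reduce a 2D lazy tensor over row batches, never building the full matrix."""
    n = T.shape[0]
    parts = []
    for start in range(0, n, batch_size):
        block = T[start:start + batch_size, :]  # only this batch is computed
        parts.append(func(block) if axis is None else func(block, axis=axis))
    if axis is None:
        return func(np.array(parts))          # reduce the per-batch scalars
    elif axis == 1:
        return np.concatenate(parts)          # row marginals, one chunk per batch
    else:
        return func(np.stack(parts), axis=0)  # combine partial column reductions


n1, n2 = 100, 200
rng = np.random.RandomState(0)
x1, x2 = rng.randn(n1, 2), rng.randn(n2, 2)

T = MiniLazyTensor((n1, n2),
                   lambda i, j, x1, x2: np.dot(x1[i], x2[j].T),
                   x1=x1, x2=x2)

row_marginals = mini_reduce(T, np.sum, axis=1)  # shape (n1,)
total = mini_reduce(T, np.sum)
```

The key property is that `mini_reduce` only ever holds one `batch_size x n2` block in memory at a time, which is what makes the lazy formulation attractive for large scale OT couplings.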

Motivation and context / Related issue

How has this been tested (if it applies)

PR checklist

  • I have read the CONTRIBUTING document.
  • The documentation is up-to-date with the changes I made (check build artifacts).
  • All tests passed, and additional code has been covered with new tests.
  • I have added the PR and Issue fix to the RELEASES.md file.

@codecov
Copy link

codecov bot commented Oct 26, 2023

Codecov Report

Merging #544 (418a1de) into master (6a29551) will increase coverage by 0.02%.
The diff coverage is 98.86%.

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #544      +/-   ##
==========================================
+ Coverage   96.46%   96.49%   +0.02%     
==========================================
  Files          67       67              
  Lines       14490    14663     +173     
==========================================
+ Hits        13978    14149     +171     
- Misses        512      514       +2     

@rflamary rflamary changed the title [WIP] Add LazyTensor for large scale OT [MRG] Add LazyTensor for large scale OT Oct 30, 2023
@rflamary rflamary merged commit 53dde7a into master Oct 31, 2023
15 checks passed
@rflamary rflamary deleted the lazy_tensor branch November 23, 2023 09:30