Use int64 when calculating data size in split_acquisition_function (#795)

* Use int64 to allow large x sizes

* Add comment

* Add warning about mem usage
khurram-ghani authored Nov 22, 2023
1 parent aa94c86 commit ae11f17
Showing 2 changed files with 5 additions and 1 deletion.
3 changes: 3 additions & 0 deletions trieste/acquisition/optimizer.py
@@ -192,6 +192,9 @@ def generate_continuous_optimizer(
     If all `num_optimization_runs` optimizations fail to converge then we run
     `num_recovery_runs` additional runs starting from random locations (also ran in parallel).
 
+    **Note:** using a large number of `num_initial_samples` and `num_optimization_runs` with a
+    high-dimensional search space can consume a large amount of CPU memory (RAM).
+
     :param num_initial_samples: The size of the random sample used to find the starting point(s) of
         the optimization.
     :param num_optimization_runs: The number of separate optimizations to run.
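As a rough sense of scale for the warning added above, here is a back-of-the-envelope sketch (the numbers and the `search_space_dim` name are illustrative, not from trieste; real usage also allocates intermediate buffers on top of this):

    # Memory needed just to hold the initial random samples as float64,
    # ignoring all intermediate buffers (illustrative numbers only).
    num_initial_samples = 1_000_000  # a large sample, as the note warns against
    search_space_dim = 100           # a high-dimensional search space
    bytes_per_float64 = 8

    approx_gb = num_initial_samples * search_space_dim * bytes_per_float64 / 1e9
    print(f"~{approx_gb:.1f} GB")    # ~0.8 GB for the raw samples alone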
3 changes: 2 additions & 1 deletion trieste/acquisition/utils.py
@@ -48,7 +48,8 @@ def wrapper(x: TensorType) -> TensorType:
         if length == 0:
             return fn(x)
 
-        elements_per_block = tf.size(x) / length
+        # Use int64 to calculate the input tensor size, otherwise we can overflow for large tensors.
+        elements_per_block = tf.size(x, out_type=tf.int64) / length
         blocks_per_batch = tf.cast(tf.math.ceil(split_size / elements_per_block), tf.int32)
 
         num_batches = tf.cast(tf.math.ceil(length / blocks_per_batch) - 1, tf.int32)
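The fix matters because `tf.size` defaults to `out_type=tf.int32`, which caps the representable element count at 2**31 - 1 (about 2.1 billion); larger inputs silently overflow before the division above. A minimal sketch showing the default versus the int64 variant (the large shape is hypothetical, chosen only to exceed the int32 range, and is never materialized):

    import tensorflow as tf

    # The default out_type of tf.size is int32.
    x = tf.zeros([3, 4])
    print(tf.size(x))                     # tf.Tensor(12, shape=(), dtype=int32)
    print(tf.size(x, out_type=tf.int64))  # tf.Tensor(12, shape=(), dtype=int64)

    # Element count of a hypothetical large input, computed from its shape
    # alone so the tensor is never allocated.
    shape = tf.constant([200_000, 15_000], dtype=tf.int64)
    num_elements = tf.reduce_prod(shape)  # 3_000_000_000 elements

    print(int(num_elements) > tf.int32.max)  # True: the count does not fit in
                                             # int32, so tf.size(x) with the
                                             # default out_type would overflow.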
