
[FEA] Add host memory allocation APIs #8879

Closed
revans2 opened this issue Jul 31, 2023 · 0 comments · Fixed by #9170
Assignees
Labels
reliability Features to improve reliability or bugs that severly impact the reliability of the plugin task Work required that improves the product but is not user facing

revans2 commented Jul 31, 2023

Is your feature request related to a problem? Please describe.
To limit host memory usage we need some new APIs that allow us to allocate host memory while enforcing limits.

I am only going to outline the APIs in pseudocode here. The exact details will likely be worked out as we use them.

/**
  * Used to track what is happening with a reservation.
  */
trait ReservationTracking {
  def allocationSucceeded(length: Long, remaining: Long)
  def reservationReleased(remaining: Long)
}

/**
  * A reservation that guarantees a set amount of memory can be allocated without blocking (possibly across multiple allocations)
  */
trait Reservation extends AutoCloseable {
  def amountRemaining: Long
  /**
   * If there is enough memory in the reservation, do the allocation and return it. If there is not enough, output a big warning,
   * fall back to a tryAllocate, and throw an exception if that fails. This API might cause spilling, but will not block waiting
   * for memory to be freed.
   */
  def allocate(length: Long, allowPinned: Boolean = true): HostMemoryBufferBase

  /**
   * Called if you want to track the status of a given reservation.
   */
  def track(tracking: ReservationTracking): Unit
}

class ImpossibleAllocationException extends Exception...

object HostAllocater {
  /**
   * Configure the HostAllocater with a hard limit. This should not really be called after it is up and running, but we need to support it
   * for tests. When it is reconfigured, previous allocations can be ignored, but that is not a requirement. Pending allocations
   * on blocked threads should either throw an exception about re-configuring the allocator, or be seamlessly fulfilled by the new
   * configuration when things are ready. I think the former should be a lot simpler to do, but if you want to make it hard...
   */
  def configure(limit: Option[Long])
  /**
   * How much is remaining on the limit. This does not include any buffer that is currently spillable; it is effectively the amount
   * of memory that could be allocated if spill happened. This should only be used for debugging, because it is subject to races,
   * so you cannot use it to guarantee that an allocation will succeed.
   */
  def remainingLimit: Option[Long]

  /**
   * Try to allocate a buffer. This might spill data to disk, but it will not block the thread waiting for memory to be released.
   * If the request is larger than what the limit is, not just what is available, this will throw an ImpossibleAllocationException.
   * Pinned memory can be excluded from the allocation with allowPinned = false.
   */
  def tryAllocate(length: Long, allowPinned: Boolean = true): Option[HostMemoryBufferBase]

  /**
   * Allocate a buffer. This might spill data to disk, or wait for other free memory to be released.
   * If the request is larger than what the limit is, this will throw an ImpossibleAllocationException.
   * Pinned memory can be excluded from the allocation with allowPinned = false.
   * If you need an allocation to possibly break a deadlock you can set maxPriority = true, but that should really only
   * ever be used by spill.
   */
  def blockingAllocate(length: Long, allowPinned: Boolean = true, maxPriority: Boolean = false): HostMemoryBufferBase

  /**
   * Reserve a set amount of memory that can be allocated as multiple buffers.  This call may spill memory or block waiting
   * for other memory to be freed/made spillable.
   */
  def reserve(length: Long): Reservation
}
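
To show how these pieces fit together, here is a hypothetical caller's view of the reservation API (identifiers like HostAllocater and Reservation are from the sketch above; processChunk is a made-up helper):

```scala
// Sketch only: assumes the pseudocode APIs above; processChunk is hypothetical.
val reservation = HostAllocater.reserve(128 * 1024 * 1024) // may spill or block
try {
  // Allocations against the reservation are guaranteed to succeed without
  // blocking until the reserved amount is exhausted.
  val header = reservation.allocate(4 * 1024)
  val data = reservation.allocate(64 * 1024 * 1024, allowPinned = false)
  processChunk(header, data)
} finally {
  reservation.close() // releases whatever part of the reservation was not used
}
```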

In the first go at this, these APIs would have no way to spill. We should think about what it would take to let spill work. But the goal is mostly to put these APIs in place. I expect these APIs to evolve a bit over time, but hopefully they are close to what the final APIs will look like.

Internally, when blocking threads to wait for memory to free up, we need a priority system. The priority should be based on the same algorithm the device retry framework uses, except that a task holding the GPU semaphore gets a higher priority than one that does not. To try to avoid deadlocks, the priority is going to be a hard priority: all allocation requests/reservations will be handled in priority order and then FIFO order. This means that if there is a blocked allocation, all non-blocking allocations with the same or lower priority will fail, even if there is memory to satisfy them. It also means that when a task grabs or drops the GPU semaphore, the priority queue in the memory allocation APIs needs to adjust to that change.
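
One way to express that hard-priority ordering is an Ordering over pending requests (the field names here are illustrative, not part of the proposal):

```scala
// Sketch: hard-priority ordering for blocked allocation requests.
// Compare first by whether the task holds the GPU semaphore, then by the
// retry-framework priority, then FIFO by arrival order.
case class PendingAlloc(holdsGpuSemaphore: Boolean, retryPriority: Long, arrival: Long)

implicit val allocOrdering: Ordering[PendingAlloc] = Ordering.by { p: PendingAlloc =>
  // Lower tuples are served first: semaphore holders before non-holders,
  // higher retry priority before lower, earlier arrivals before later.
  (if (p.holdsGpuSemaphore) 0 else 1, -p.retryPriority, p.arrival)
}
```

Because acquiring or releasing the GPU semaphore changes the first component of the key, any queue built on this ordering has to support re-prioritizing an already-queued request.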

If the allocator does not have a limit, then we just call into the normal memory buffer allocation APIs and return them. If there is a limit we will need a way to track when an allocation is freed so we can wake up pending tasks. This is where a lot of the testing will happen. Similar things will happen when we make a buffer spillable, but that will come in a later PR.
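
A minimal sketch of the limit-tracking and wake-up logic, assuming a simple monitor (the real implementation would also need the priority queue and spill hooks described above):

```scala
// Sketch: track outstanding bytes against a hard limit and wake blocked
// threads when memory is freed.
class LimitTracker(limit: Long) {
  private var used = 0L

  def blockingAcquire(length: Long): Unit = synchronized {
    // A request larger than the limit itself can never succeed; the real
    // allocator would throw ImpossibleAllocationException here.
    require(length <= limit, "request exceeds the hard limit")
    while (used + length > limit) {
      wait() // woken by release() when memory is freed
    }
    used += length
  }

  def release(length: Long): Unit = synchronized {
    used -= length
    notifyAll() // let pending allocations re-check the limit
  }
}
```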

Note that in the case with a limit memory from the pinned pool is considered to be free. If we get it, that is great and it will not count against the limit. For a reservation, if it gets pinned memory then it should count against the reservation limit. We might need a good way in the callback to indicate this.
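
The pinned-pool accounting described above could look roughly like this (tryPinnedPool, consume, and allocateAgainstLimit are all hypothetical names):

```scala
// Sketch: pinned memory is "free" with respect to the global limit, but still
// counts against a reservation's own budget.
def allocate(length: Long, reservation: Option[Reservation]): HostMemoryBufferBase = {
  tryPinnedPool(length) match { // hypothetical helper
    case Some(buf) =>
      // Pinned memory does not count against the global limit, but a
      // reservation still consumes its budget so its guarantee stays honest.
      reservation.foreach(_.consume(length)) // hypothetical method
      buf
    case None =>
      allocateAgainstLimit(length, reservation) // hypothetical helper
  }
}
```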

@revans2 revans2 added feature request New feature or request ? - Needs Triage Need team to review and classify task Work required that improves the product but is not user facing reliability Features to improve reliability or bugs that severly impact the reliability of the plugin labels Jul 31, 2023
@revans2 revans2 removed the feature request New feature or request label Jul 31, 2023
@mattahrens mattahrens removed the ? - Needs Triage Need team to review and classify label Aug 8, 2023