
Add paddle/memory/README.md #2552

Merged: 8 commits merged into PaddlePaddle:develop on Jun 26, 2017

Conversation
wangkuiyi (Collaborator):

No description provided.

paddle/README.md Outdated
In `paddle/memory/memory.h` we have:

```cpp
template <typeanme Place> void* Alloc(Place, size_t);
```

gangliao (Contributor), Jun 22, 2017:

typeanme -> typename

Since Place is a variant, why not use typeid directly to dispatch on the place type?

```cpp
void* Alloc(Place, size_t);
void* Alloc(Place place, size_t size) {
  if (place.type() == typeid(GPUPlace)) {
    ...
  } else {
    ...
  }
}
```

wangkuiyi (Collaborator, Author):

Because we want mismatched-type errors to be reported as early as possible, and erring at compile time is earlier than at runtime.

gangliao (Contributor):

I got it. Thanks.

But a variant can also report mismatched-type errors at compile time.

wangkuiyi (Collaborator, Author):

Actually, I think that if we regulate the way we use GPUPlace and CPUPlace -- as here, with two and only two specializations of Alloc&lt;Place&gt; -- we might not need typedef boost::variant&lt;GPUPlace, CPUPlace&gt; Place at all. This might also let us remove the dependency on boost.
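The specialization-only approach described here can be sketched roughly as follows. This is a minimal, hypothetical illustration: the tag types are empty stand-ins and `std::malloc` fills in for the real buddy allocators.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Hypothetical tag types standing in for Paddle's places; with two
// and only two specializations below, no boost::variant is needed.
struct CPUPlace {};
struct GPUPlace { int device; };

// Primary template declared but never defined: calling Alloc with
// any other place type fails at build time rather than at runtime.
template <typename Place> void* Alloc(Place, size_t);

template <> void* Alloc<CPUPlace>(CPUPlace, size_t size) {
  return std::malloc(size);  // stand-in for the CPU buddy allocator
}
template <> void* Alloc<GPUPlace>(GPUPlace, size_t size) {
  return std::malloc(size);  // stand-in for the GPU buddy allocator
}
```

The call `Alloc(CPUPlace{}, n)` deduces the place type statically, which is the "err at compile time" property argued for above.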

gangliao (Contributor):

It's great to think about this. The hard part of removing boost::variant is Dim.

wangkuiyi (Collaborator, Author), Jun 22, 2017:

Yeah, agreed. And thanks for the reminder about boost::variant. Let's start by minimizing the dependencies of each piece of our work.

paddle/README.md Outdated
```cpp
BuddyAllocator* GetCPUBuddyAllocator() {
  static BuddyAllocator* a = NULL;
  if (a == NULL) {
    a = new BuddyAllocator(new CPUAllocator /*backup allocator*/, ...);
```
gangliao (Contributor):

All member functions in CPUAllocator are static, so I thought it would be better if CPUAllocator/GPUAllocator could be a template type of BuddyAllocator.

wangkuiyi (Collaborator, Author), Jun 22, 2017:

I don't think GPUAllocator::Alloc could be a static method, as it reads GPUAllocator::place_ and writes GPUAllocator::used_.

gangliao (Contributor):

OK, I guess we do not need the member variables here.

GPUAllocator::place_ is unnecessary because you get the place from memory::Alloc(place, ...) each time, and GPUAllocator::Alloc is called inside memory::Alloc.

GPUAllocator::used_ is unnecessary because each time you retrieve/query the used memory, the value is non-deterministic and likely to change and fluctuate.

wangkuiyi (Collaborator, Author):

I mean, in this design, each GPUAllocator instance corresponds to a GPU device.

wangkuiyi (Collaborator, Author):

Why is GPUAllocator::used_ unnecessary? I think memory profiling is very important for a DL system.

gangliao (Contributor), Jun 22, 2017:

Yeah, it's useful. I mean it's unnecessary to save it in GPUAllocator::used_, because the value constantly changes and fluctuates.

wangkuiyi (Collaborator, Author), Jun 22, 2017:

I see. I cannot tell yet how valuable GPUAllocator::used_ will be, but I'd like to keep it, as it enables us to profile memory usage. We may figure out good uses for this information later.
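The design being discussed here can be sketched like this. It is a hedged, hypothetical sketch, not Paddle's implementation: `std::malloc` stands in for `cudaMalloc` so the example runs anywhere, and the member names follow the thread.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Sketch of the design under discussion: each GPUAllocator instance
// corresponds to one GPU device (place_) and tracks the bytes it
// has handed out (used_) so memory usage can be profiled.
class GPUAllocator {
 public:
  explicit GPUAllocator(int device) : place_(device), used_(0) {}

  void* Alloc(size_t size) {
    void* p = std::malloc(size);       // stand-in for cudaMalloc
    if (p != nullptr) used_ += size;   // record for profiling
    return p;
  }

  void Free(void* p, size_t size) {
    std::free(p);                      // stand-in for cudaFree
    used_ -= size;
  }

  size_t Used() const { return used_; }

 private:
  int place_;    // which device this instance manages
  size_t used_;  // bytes currently allocated via this allocator
};
```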

paddle/README.md Outdated

### The Implementation

`GetCPUBuddyAllocator` and `GetGPUBuddyAllocator` are singletons.
gangliao (Contributor), Jun 22, 2017:

Singleton is great here.
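Since C++11, initialization of a function-local static is thread-safe, so the explicit NULL check in the quoted `GetCPUBuddyAllocator` can be folded into the initializer. A minimal sketch, with placeholder types standing in for the real allocators:

```cpp
#include <cassert>

// Placeholder stand-ins; only the construction pattern matters here.
struct CPUAllocator {};
struct BuddyAllocator {
  explicit BuddyAllocator(CPUAllocator* backup) : backup_(backup) {}
  CPUAllocator* backup_;
};

// The static local is initialized exactly once, on first call,
// and C++11 guarantees this is thread-safe.
BuddyAllocator* GetCPUBuddyAllocator() {
  static BuddyAllocator* a = new BuddyAllocator(new CPUAllocator);
  return a;
}
```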

paddle/README.md Outdated

```cpp
template<>
void Alloc(GPUPlace)(GPUPlace p, size_t size) {
```

(Collaborator):

Alloc(GPUPlace) -> Alloc<GPUPlace>

wangkuiyi (Collaborator, Author):

done

paddle/README.md Outdated
```cpp
 private:
  struct Block {
    size_t size;
    Blobk* left, right;
```
(Collaborator):

Blobk -> Block.

wangkuiyi (Collaborator, Author):

done
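For reference, the corrected struct is below. Note a second subtlety beyond the typo: in C++, `Block* left, right;` declares only `left` as a pointer and `right` as a plain `Block`, so each pointer member needs its own `*`.

```cpp
#include <cassert>
#include <cstddef>

// Corrected metadata node for the buddy allocator's block tree.
struct Block {
  size_t size;
  Block* left;   // left child after splitting this block
  Block* right;  // right child after splitting this block
};
```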

paddle/README.md Outdated

The `GPUAllocator` and `CPUAllocator` are called *system allocators*. They hold information about the device, including the amount of memory that has been allocated, so that we can call

- `GPUAllocator::Used` and
reyoung (Collaborator), Jun 22, 2017:

It seems that system allocators are used when constructing a BuddyAllocator and are private data members inside it. How can we get GPUAllocator::Used for each device? Is our code like this?

```cpp
auto* buddyAllocator = GetGPUBuddyAllocator(0);
buddyAllocator->SystemAllocator()->Used();
```

wangkuiyi (Collaborator, Author):

or

```cpp
GetGPUAllocator(0)->Used();
```

wangkuiyi (Collaborator, Author):

Thanks for the reminder. Let me explain more here.

wangkuiyi (Collaborator, Author):

done
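The per-device accessor pattern floated in this thread could look roughly like the following. This is a hypothetical sketch: `Record` is an invented helper standing in for the accounting that Alloc would do, and the map-based lazy construction is just one way to keep one allocator per device.

```cpp
#include <cassert>
#include <cstddef>
#include <map>

// One hypothetical allocator per device, each with its own counter.
class GPUAllocator {
 public:
  size_t Used() const { return used_; }
  void Record(size_t bytes) { used_ += bytes; }  // done inside Alloc
 private:
  size_t used_ = 0;
};

// Lazily creates and returns the allocator for a given device, so
// callers can query usage per device: GetGPUAllocator(0)->Used().
GPUAllocator* GetGPUAllocator(int device) {
  static std::map<int, GPUAllocator> allocators;
  return &allocators[device];  // default-constructs on first access
}
```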

@@ -0,0 +1,145 @@
In my mind, the memory package works like the following:
(Member):

Should this line be removed?

wangkuiyi (Collaborator, Author):

Done.

wangkuiyi mentioned this pull request on Jun 26, 2017.
gangliao (Contributor) left a review comment:

LGTM

wangkuiyi merged commit 09eb6be into PaddlePaddle:develop on Jun 26, 2017.
wangkuiyi deleted the memory_design branch on June 26, 2017 at 20:18.