
runtime: shrink map as elements are deleted #20135

Open · genez opened this issue Apr 26, 2017 · 56 comments
Labels: compiler/runtime (Issues related to the Go compiler and/or runtime.) · NeedsFix (The path to resolution is known, but the work has not been done.) · Performance
Milestone: Unplanned

genez commented Apr 26, 2017

What version of Go are you using (go version)?

go version go1.8 windows/amd64

What operating system and processor architecture are you using (go env)?

set GOARCH=amd64
set GOBIN=
set GOEXE=.exe
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOOS=windows
set GOPATH=C:\dev\Go
set GORACE=
set GOROOT=C:\Go
set GOTOOLDIR=C:\Go\pkg\tool\windows_amd64
set GCCGO=gccgo
set CC=gcc
set GOGCCFLAGS=-m64 -mthreads -fmessage-length=0
set CXX=g++
set CGO_ENABLED=1
set PKG_CONFIG=pkg-config
set CGO_CFLAGS=-g -O2
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-g -O2
set CGO_FFLAGS=-g -O2
set CGO_LDFLAGS=-g -O2

What did you do?

See example on playground: https://play.golang.org/p/odsk9F1UH1
(edit: forgot to remove sleeps and changed the number of elements)

What did you expect to see?

Removing elements from the m1 map should release memory.

What did you see instead?

Total allocated memory always increases.

In the example the issue is not so pronounced, but in my production scenario (several maps with more than 1 million elements each) I can easily hit an OOM error and see the process killed.
I also don't know whether memstats.Alloc is the right counter to look at here, but I can observe the issue with regular process-management tools on Linux (e.g. top or htop).
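
For reference, a minimal program in the spirit of the playground example (the playground code itself isn't reproduced here, so the element count and value type are illustrative): fill a map, delete every key, and watch memstats.Alloc stay high.

package main

import (
	"fmt"
	"runtime"
)

func printAlloc(label string) {
	var ms runtime.MemStats
	runtime.ReadMemStats(&ms)
	fmt.Printf("%s: Alloc = %d KiB\n", label, ms.Alloc/1024)
}

func main() {
	m1 := make(map[int][128]byte)
	for i := 0; i < 1000000; i++ {
		m1[i] = [128]byte{}
	}
	printAlloc("after filling m1")

	for i := 0; i < 1000000; i++ {
		delete(m1, i)
	}
	runtime.GC()
	printAlloc("after deleting all keys") // stays high: the buckets still belong to m1

	runtime.KeepAlive(m1) // keep m1 live so its buckets cannot be collected
}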

bradfitz changed the title from "maps do not shrink after elements removal (delete)" to "runtime: maps do not shrink after elements removal (delete)" on Apr 26, 2017
bradfitz (Contributor) commented

/cc @randall77, @josharian

bradfitz added this to the Unplanned milestone on Apr 26, 2017
josharian (Contributor) commented

I'm surprised there isn't a dup of this already in the issue tracker.

Yes, maps that shrink permanently are currently never cleaned up afterwards. As usual, the implementation challenge is with iterators.

Maps that shrink and grow repeatedly used to also cause leaks. That was #16070, fixed by CL 25049. I remember hoping when I started on that CL that the same mechanism would be useful for shrinking maps as well, but deciding it wouldn't. Sadly, I no longer remember why. If anyone wants to investigate this issue, I'd start by looking at that CL and thinking about whether that approach could be extended to shrinking maps.

The only available workaround is to make a new map and copy in elements from the old.
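
A minimal sketch of that workaround (the shrink helper and the string→int map type are illustrative, not part of the runtime):

// shrink rebuilds a map at its current size so the oversized
// bucket array of the old map becomes garbage.
func shrink(old map[string]int) map[string]int {
	fresh := make(map[string]int, len(old)) // sized for the surviving entries
	for k, v := range old {
		fresh[k] = v
	}
	return fresh // callers must drop their references to old
}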

tandr commented Sep 4, 2018

Just an observation: adding runtime.GC() after the last copy/delete loop brings memory down to about the same size (lower, actually) as at the "Alloc After M1" point.

hixichen commented

Any update on this issue?

We load 1 million entries into a map. Whether we delete the values or set the map to nil, the memory seems to keep increasing until OOM.

mvdan (Member) commented Sep 24, 2018

@hixichen: see @josharian's workaround above:

The only available workaround is to make a new map and copy in elements from the old.

That is, you have to let the entire map be garbage collected; then all of its memory will eventually be made available again, and you can start using a new, smaller map. If this doesn't work, please provide a small Go program that reproduces the problem.

As for progress: if there were any, you'd see it in this thread.

voltrue2 commented Jan 17, 2019

The only available workaround is to make a new map and copy in elements from the old.

Is this really an efficient way to handle this issue?
If you have a very large map, you'd have to loop over the whole map every time you delete an element or two, right?

as (Contributor) commented Jan 17, 2019

@hixichen what happens if you set the map to nil (or any cleanup action you mentioned previously) and then run debug.FreeOSMemory()?

This may help differentiate between a "GC-issue" and a "returning memory to the OS" issue.

Edit: it seems you're using Go itself to gauge memory allocation, so this message can be ignored (perhaps it will be useful to someone else, so I'll post it anyway).
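
That experiment might look roughly like this (a sketch; the map contents are placeholders):

package main

import (
	"runtime/debug"
	"time"
)

func main() {
	m := make(map[int][]byte)
	for i := 0; i < 1000000; i++ {
		m[i] = make([]byte, 100)
	}
	m = nil                 // drop the only reference to the map
	debug.FreeOSMemory()    // force a GC and try to return freed memory to the OS
	time.Sleep(time.Minute) // watch RSS with top/htop while the process idles
}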

randall77 (Contributor) commented

Is this really an efficient way to handle this issue? If you have a very large map, you'd have to loop a large map every time you delete an element or two, right?

You can do it efficiently by delaying the shrink until you've done O(n) deletes; that's what a built-in mechanism would do. The map-growing mechanism works similarly.
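
A sketch of that amortized strategy in user code (the shrinkMap wrapper, its types, and the rebuild threshold are illustrative): the O(n) copy only runs after roughly n deletes.

package mapshrink

// shrinkMap rebuilds its underlying map once the number of deletes
// since the last rebuild reaches the number of live entries, so the
// O(n) copy is amortized over O(n) deletes.
type shrinkMap struct {
	m       map[string]int
	deletes int
}

func (s *shrinkMap) Delete(k string) {
	if _, ok := s.m[k]; !ok {
		return
	}
	delete(s.m, k)
	s.deletes++
	if s.deletes >= len(s.m) {
		fresh := make(map[string]int, len(s.m))
		for k, v := range s.m {
			fresh[k] = v
		}
		s.m = fresh
		s.deletes = 0
	}
}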

hixichen commented Jan 17, 2019

@as yes, I am using Go itself to gauge memory allocation, and personally I think Go should handle this by itself.


mvdan added the NeedsFix label on Jun 5, 2019
4nte commented Dec 5, 2019

I'd expect Go to handle memory both ways here. This is unintuitive behavior and should be noted in the map docs until it is resolved. I just realized we have multiple eventual OOMs in our system.
cc @marco-hrlic data-handler affected

hunterhug commented Apr 16, 2020

go version go1.13.1 darwin/amd64

I have a question:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	v := struct{}{}

	a := make(map[int]struct{})

	for i := 0; i < 10000; i++ {
		a[i] = v
	}

	runtime.GC()
	printMemStats("After Map Add 100000")

	for i := 0; i < 10000-1; i++ {
		delete(a, i)
	}

	runtime.GC()
	printMemStats("After Map Delete 9999")

	for i := 0; i < 10000-1; i++ {
		a[i] = v
	}

	runtime.GC()
	printMemStats("After Map Add 9999 again")

	a = nil
	runtime.GC()
	printMemStats("After Map Set nil")
}

func printMemStats(msg string) {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("%v:memory = %vKB, GC Times = %v\n", msg, m.Alloc/1024, m.NumGC)
}

output:

After Map Add 10000:memory = 241KB, GC Times = 1
After Map Delete 9999:memory = 242KB, GC Times = 2
After Map Add 9999 again:memory = 65KB, GC Times = 3
After Map Set nil:memory = 65KB, GC Times = 4

Why does deleting 9999 elements from the local map a not change its memory usage, while adding 9999 elements again makes it drop?

randall77 (Contributor) commented

	for i := 0; i < 10000-1; i++ {
		a[i] = v
	}

	runtime.GC()
	printMemStats("After Map Add 9999 again")

The map will be garbage collected at this runtime.GC. The compiler knows that the map will not be used again. Your later a = nil does nothing - the compiler is way ahead of you.

Try adding fmt.Printf("%d\n", len(a)) at various places above to introduce another use of a. If you put it after the runtime.GC, you will see the behavior you are expecting.
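
Applied to the program above, that keep-alive might look like this:

	runtime.GC()
	printMemStats("After Map Delete 9999")
	fmt.Printf("%d\n", len(a)) // a is used after the GC, so the map stays live through it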


mangatmodi commented

Isn't the issue that the GC is not triggered at the right point? The GC is able to recognize that the map has deleted keys - that's why runtime.GC() helps. I guess it needs some tuning.

Also, I believe this is a pretty serious issue. Allocating a new map should be documented as a best practice when using Go.

DmitriyMV (Contributor) commented

It looks like this is no longer the case? I'm deleting half of the elements and the map gets shrunk. I'm also keeping the map alive by accessing the last element.

CAFxX (Contributor) commented Sep 25, 2023

If you delete all elements from the map, or you preallocate the map, you'll see that that's not the case. The decrease in memory that the demo shows is likely just an artifact of how incremental map growth works.

zigo101 commented Sep 25, 2023

@DmitriyMV, you should call runtime.GC at every memory-stat point.

DmitriyMV (Contributor) commented

@CAFxX @go101 Indeed, it looks like you are both right - it's a byproduct of incremental map growth.

zigo101 commented Sep 25, 2023

The result for your new demo program is indeed somewhat weird.

before map creation, Allocated memory 89 KiB
after map creation, Allocated memory 34586 KiB
after removing half of the elements, Allocated memory 22396 KiB

The shrink only happens when mapSize is in [98_305, 109_635], but not outside that range. Maybe this is not a real shrink but a stats artifact?

[edit]: It looks like there is some randomness here. The right boundary changed to 109_664 in my next round of tests.
[edit 2]: It looks like the shrink also happens when mapSize is about 200_000 and 400_000.

DmitriyMV (Contributor) commented

@go101 the shrink doesn't happen if you preallocate all 100_000 elements, so I think it's just some internal growth happening. Changing from deleting half of the elements to calling clear on the map doesn't change anything.

zigo101 commented Sep 25, 2023

Yes, what you said is true. But the confusion is not cleared up: which part of memory is freed when half of the elements are deleted?

[edit]: More precisely, at the "after map creation" point, it looks to me like some memory should be collected but is not.

introspection3 commented Feb 21, 2024

Why not use a community map, e.g. https://github.com/dolthub/swiss, to replace the leaking built-in map?

mvdan (Member) commented Feb 21, 2024

@introspection3 see #54766.

fasaxc added a commit to fasaxc/calico that referenced this issue on Nov 25, 2024:
Go doesn't free map blocks, even after a map shrinks
considerably. The dedupe buffer tends to store a lot of
keys for a start-of-day snapshot, make sure we clean
up the leaked map capacity once we're back down to
zero.

Upstream issue: golang/go#20135
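
The cleanup described in the commit message might look roughly like this (a hypothetical sketch, not the actual calico code; all names are illustrative):

// dedupeBuffer holds pending updates keyed by name.
type dedupeBuffer struct {
	pending map[string][]byte
}

func (d *dedupeBuffer) remove(key string) {
	delete(d.pending, key)
	if len(d.pending) == 0 {
		// Go doesn't free a map's buckets as it shrinks, so once the
		// buffer drains back to zero, replace the map to release the
		// capacity accumulated during the start-of-day snapshot.
		d.pending = make(map[string][]byte)
	}
}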
(Further commits with the same message referenced this issue from fasaxc/calico and projectcalico/calico on Nov 25, 26, and 28, 2024.)