storage: goroutine leak in newMultipartReader #1380

Closed
chessman opened this issue Apr 3, 2019 · 15 comments
Labels
  api: storage - Issues related to the Cloud Storage API.
  priority: p1 - Important issue which blocks shipping the next release. Will be fixed prior to next release.
  🚨 - This issue needs some love.
  type: bug - Error or flaw in code with unintended results or allowing sub-optimal usage patterns.



chessman commented Apr 3, 2019

Client

We have 100 concurrent uploads, each object is about 500 KB.

	w := c.cli.Bucket(c.bucket).Object(objectId).NewWriter(ctx)
	defer w.Close()
	size, err := io.Copy(w, r)
        ...
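
For illustration only (an editor's sketch, not code from the report): a self-contained version of the upload pattern above, with placeholder names, that also checks the error returned by Close, which is where the storage client reports the final result of the upload.

package upload

import (
	"context"
	"io"

	"cloud.google.com/go/storage"
)

// put uploads one object from r, mirroring the snippet above but checking
// the error returned by Close as well as the one from io.Copy.
func put(ctx context.Context, client *storage.Client, bucket, objectID string, r io.Reader) (int64, error) {
	w := client.Bucket(bucket).Object(objectID).NewWriter(ctx)
	size, err := io.Copy(w, r)
	if err != nil {
		w.Close() // best effort; the Copy error takes precedence
		return size, err
	}
	return size, w.Close()
}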
[[projects]]
  name = "cloud.google.com/go"
  ...
  revision = "458e1f376a2b44413160b5d301183b65debaa3f6"
  version = "v0.37.2"

[[projects]]
  name = "google.golang.org/api"
  ...
  revision = "bce707a4d0ea3488942724b3bcc1c8338f38f991"
  version = "v0.3.0"

Describe Your Environment

Go 1.11.4
CentOS 7

Expected Behavior

No leaked goroutines.

Actual Behavior

We observe that memory is leaking over time and there are stalled goroutines in pprof.

goroutine 38785431 [select, 77 minutes]:
io.(*pipe).Write(0xc068678690, 0xc1699cd040, 0x62, 0xa0, 0x0, 0x0, 0x0)
    /usr/local/go/src/io/pipe.go:87 +0x1cc
io.(*PipeWriter).Write(0xc0024e24f0, 0xc1699cd040, 0x62, 0xa0, 0x19a4fe0, 0x1bd1680, 0x31c5f01)
    /usr/local/go/src/io/pipe.go:153 +0x4c
bytes.(*Buffer).WriteTo(0xc00e67bc70, 0x1e847a0, 0xc0024e24f0, 0x7f9aa81a6a68, 0xc00e67bc70, 0xc00234ed01)
    /usr/local/go/src/bytes/buffer.go:241 +0xb6
io.copyBuffer(0x1e847a0, 0xc0024e24f0, 0x1e82780, 0xc00e67bc70, 0x0, 0x0, 0x0, 0x2, 0xc001c260c0, 0x0)
    /usr/local/go/src/io/io.go:384 +0x352
io.Copy(0x1e847a0, 0xc0024e24f0, 0x1e82780, 0xc00e67bc70, 0x0, 0x0, 0x0)
    /usr/local/go/src/io/io.go:364 +0x5a
mime/multipart.(*Writer).CreatePart(0xc12e030120, 0xc12e0309c0, 0xc12e0309c0, 0x5, 0x8, 0xc0003c6840)
    /usr/local/go/src/mime/multipart/writer.go:115 +0x3e5
github.com/datomia/datomia/vendor/google.golang.org/api/gensupport.newMultipartReader.func1(0xc002e6af80, 0x2, 0x2, 0xc12e030120, 0xc0024e24f0)
    /go/src/github.com/datomia/datomia/vendor/google.golang.org/api/gensupport/media.go:131 +0xa7
created by github.com/datomia/datomia/vendor/google.golang.org/api/gensupport.newMultipartReader
    /go/src/github.com/datomia/datomia/vendor/google.golang.org/api/gensupport/media.go:129 +0x22f

UPDATE: It seems that these goroutines appear each hour. For example, if the application has been running for 204 minutes, then pprof shows stalled goroutines with durations of 24, 84, and 144 minutes.

jeanbza added the api: storage and status: investigating labels on Apr 3, 2019
odeke-em (Contributor) commented Apr 4, 2019

Thank you for reporting this issue @chessman and welcome to the Googleapis Go cloud project!

Great details, and thank you for that backtrace. Many years ago I encountered similar problems with an unwieldy number of concurrent uploads that saturated the network for users of https://github.com/odeke-em/drive.
Is it 100 concurrent uploads per second, or 100 concurrent uploads in total for the lifetime of your program? If it is 100 concurrent uploads per second, ~500 KB * 100 will empirically require ~50 MB/s of sustained upload speed, or some uploads will stall for quite a while. What might your upload speed be?

While I am still speculating, you might also want to set Writer.ChunkSize explicitly, as per https://godoc.org/cloud.google.com/go/storage#Writer.ChunkSize, to avoid sending all the bytes in a single request per upload, which will not be retried if it fails. /cc @frankyn for advice.
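
For illustration only (an editor's sketch, not code from the thread): setting the field explicitly might look like the function below; the package name, client, and bucket/object parameters are placeholders.

package upload

import (
	"context"

	"cloud.google.com/go/storage"
)

// newChunkedWriter is a sketch of the suggestion above: set ChunkSize
// explicitly before the first Write. A value of 0 disables chunking and
// sends the whole object in a single request (which is not retried).
func newChunkedWriter(ctx context.Context, client *storage.Client, bucket, object string) *storage.Writer {
	w := client.Bucket(bucket).Object(object).NewWriter(ctx)
	w.ChunkSize = 256 * 1024 // 256 KiB; must be set before the first call to Write
	return w
}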

On the "memory" leak, could you please provide some more information? Also, do those goroutines finally disappear after some time? A period of 60 minutes seems like a maximum timeout for an upload was reached and a retry was started.
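
As an aside, a minimal sketch (an editor's illustration, not code from the thread) of exposing the net/http/pprof endpoints that the dumps later in this thread (http://localhost:6060/debug/pprof) come from:

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
	// Serve the profiling endpoints in the background; stalled goroutines can
	// then be dumped with:
	//   curl http://localhost:6060/debug/pprof/goroutine?debug=2
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	select {} // stand-in for the real application workload
}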

In https://gist.github.com/odeke-em/c51e2dd5979964cb19a6fddf2bd53976, or inlined below, I've composed a program that might help examine the concurrent uploads/sec and the bandwidth you might be experiencing.

Source code
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"math/rand"
	"strings"
	"time"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatalf("Failed to create storage client: %v", err)
	}
	defer client.Close()

	projectID := "<your_project_id>"         // TODO: Fill me in.
	bucket := client.Bucket("<your_bucket>") // TODO: Fill me in.

	// Ensure the bucket is created firstly
	_ = bucket.Create(ctx, projectID, nil)

	rng := rand.New(rand.NewSource(10))
	i := uint64(1)

	reporterCh := make(chan *report)
	go compileStats(reporterCh)

	n := 100
	sema := make(chan bool, n)

	for {
		size := int((0.5+rng.ExpFloat64())*500) * 1 << 10 // At least 250kB
		go func(id uint64, size int) {
			objectName := fmt.Sprintf("object-%d.txt", id)
			object := bucket.Object(objectName)
			w := object.NewWriter(ctx)
			defer func() {
				w.Close()
				<-sema
			}()

			s := "A"
			if id%2 == 0 {
				s = "B"
			}
			n, _ := io.Copy(w, strings.NewReader(strings.Repeat(s, size)))
			reporterCh <- &report{size: n, id: id}
		}(i, size)
		sema <- true

		i += 1
	}
}

type report struct {
	size int64
	id   uint64
}

func compileStats(reports <-chan *report) {
	i := uint64(0)
	startTime := time.Now()
	totalBytes := uint64(0)
	for report := range reports {
		i += 1
		secs := time.Since(startTime).Seconds()
		filesPerSec := float64(i) / secs
		totalBytes += uint64(report.size)
		mbPerSec := float64(totalBytes>>20) / secs
		fmt.Printf("%s: File#: %d Files/sec: %.2ffiles/s %.3fMbps\r", time.Since(startTime).Round(time.Millisecond), i, filesPerSec, mbPerSec)
	}
}

chessman (Author) commented Apr 5, 2019

Thanks for the quick response, @odeke-em. I've reproduced it with the test program.

It ran for 15 hours on a freshly created instance within the same region. Speed: Files/sec: 334.87 files/s, 245.181 Mbps. I added a 10-second timeout to the request context, but it was never hit, which means no request lasts forever. Nevertheless, there are 5 long-running goroutines in pprof like this one:

goroutine 15570455 [select, 794 minutes]:
io.(*pipe).Write(0xc0003e2000, 0xc015038aa0, 0x62, 0xa0, 0x0, 0x0, 0x0)
        /usr/lib/go-1.11/src/io/pipe.go:87 +0x1cc
io.(*PipeWriter).Write(0xc000310e78, 0xc015038aa0, 0x62, 0xa0, 0x915fa0, 0x99e520, 0xdf6a01)
        /usr/lib/go-1.11/src/io/pipe.go:153 +0x4c
bytes.(*Buffer).WriteTo(0xc000264770, 0xa3ce40, 0xc000310e78, 0x7f54ecae6f58, 0xc000264770, 0xc0418c1d01)
        /usr/lib/go-1.11/src/bytes/buffer.go:241 +0xb6
io.copyBuffer(0xa3ce40, 0xc000310e78, 0xa3c5a0, 0xc000264770, 0x0, 0x0, 0x0, 0x2, 0xc0418d4000, 0x0)
        /usr/lib/go-1.11/src/io/io.go:384 +0x352
io.Copy(0xa3ce40, 0xc000310e78, 0xa3c5a0, 0xc000264770, 0x0, 0x0, 0x0)
        /usr/lib/go-1.11/src/io/io.go:364 +0x5a
mime/multipart.(*Writer).CreatePart(0xc00496bf50, 0xc039af10e0, 0xc039af10e0, 0xa426e0, 0x9b7d79, 0x9a000)
        /usr/lib/go-1.11/src/mime/multipart/writer.go:115 +0x3e5
github.com/datomia/datomia/vendor/google.golang.org/api/gensupport.newMultipartReader.func1(0xc004107040, 0x2, 0x2, 0xc00496bf50, 0xc000310e78)
        /home/ea/golang/src/github.com/datomia/datomia/vendor/google.golang.org/api/gensupport/media.go:131 +0xa7
created by github.com/datomia/datomia/vendor/google.golang.org/api/gensupport.newMultipartReader
        /home/ea/golang/src/github.com/datomia/datomia/vendor/google.golang.org/api/gensupport/media.go:129 +0x22f

Available memory decreased by 800 MB (as reported by free -m at the beginning and at the end of the test). Unfortunately, I didn't examine allocations during the long-running test. I ran the program again, and it looks suspicious to me that there are more media buffers than parallel requests. It will be interesting to check again in several hours.

[eugene@ea-gcs-bug ~]$ curl -s http://localhost:6060/debug/pprof/allocs?debug=1 | head -n 10
heap profile: 116: 876206624 [678461: 2045677594696] @ heap/1048576
104: 872415232 [195038: 1636097327104] @ 0x78fb6d 0x78fd65 0x83652e 0x86fc01 0x45d0e1
#	0x78fb6c	github.com/datomia/datomia/vendor/google.golang.org/api/gensupport.NewMediaBuffer+0x4c			/home/ea/golang/src/github.com/datomia/datomia/vendor/google.golang.org/api/gensupport/buffer.go:28
#	0x78fb6c	github.com/datomia/datomia/vendor/google.golang.org/api/gensupport.PrepareUpload+0x4c			/home/ea/golang/src/github.com/datomia/datomia/vendor/google.golang.org/api/gensupport/media.go:207
#	0x78fd64	github.com/datomia/datomia/vendor/google.golang.org/api/gensupport.NewInfoFromMedia+0x94		/home/ea/golang/src/github.com/datomia/datomia/vendor/google.golang.org/api/gensupport/media.go:237
#	0x83652d	github.com/datomia/datomia/vendor/google.golang.org/api/storage/v1.(*ObjectsInsertCall).Media+0x8d	/home/ea/golang/src/github.com/datomia/datomia/vendor/google.golang.org/api/storage/v1/storage-gen.go:9207
#	0x86fc00	github.com/datomia/datomia/vendor/cloud.google.com/go/storage.(*Writer).open.func1+0x1e0		/home/ea/golang/src/github.com/datomia/datomia/vendor/cloud.google.com/go/storage/writer.go:121
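
As an illustration of the "10-second timeout to the request context" mentioned above (an editor's sketch, not the exact test code), a per-upload deadline could be attached roughly like this:

package upload

import (
	"context"
	"io"
	"time"

	"cloud.google.com/go/storage"
)

// uploadWithTimeout bounds a single upload with a 10-second deadline.
func uploadWithTimeout(ctx context.Context, bucket *storage.BucketHandle, object string, r io.Reader) error {
	ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel() // stops the timer and cancels the upload if it is still running

	w := bucket.Object(object).NewWriter(ctx)
	if _, err := io.Copy(w, r); err != nil {
		w.Close() // best effort; the Copy error takes precedence
		return err
	}
	// If the deadline fires mid-upload, the error surfaces here.
	return w.Close()
}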

chessman (Author) commented:

The test ran for a week. 1500 MB of memory has leaked. There are 132 media buffers:

132: 1107296256 [206616665: 1733226208952320] @ 0x78fb6d 0x78fd65 0x83652e 0x86fc01 0x45d0e1
#       0x78fb6c        github.com/datomia/datomia/vendor/google.golang.org/api/gensupport.NewMediaBuffer+0x4c                  /home/ea/golang/src/github.com/datomia/datomia/vendor/google.golang.org/api/gensupport/buffer.go:28
#       0x78fb6c        github.com/datomia/datomia/vendor/google.golang.org/api/gensupport.PrepareUpload+0x4c                   /home/ea/golang/src/github.com/datomia/datomia/vendor/google.golang.org/api/gensupport/media.go:207
#       0x78fd64        github.com/datomia/datomia/vendor/google.golang.org/api/gensupport.NewInfoFromMedia+0x94                /home/ea/golang/src/github.com/datomia/datomia/vendor/google.golang.org/api/gensupport/media.go:237
#       0x83652d        github.com/datomia/datomia/vendor/google.golang.org/api/storage/v1.(*ObjectsInsertCall).Media+0x8d      /home/ea/golang/src/github.com/datomia/datomia/vendor/google.golang.org/api/storage/v1/storage-gen.go:9207
#       0x86fc00        github.com/datomia/datomia/vendor/cloud.google.com/go/storage.(*Writer).open.func1+0x1e0                /home/ea/golang/src/github.com/datomia/datomia/vendor/cloud.google.com/go/storage/writer.go:121

Some goroutines have been hanging for a very long time:

[eugene@ea-gcs-bug ~]$ curl -s http://localhost:6060/debug/pprof/goroutine?debug=2 | grep minutes
goroutine 1151879946 [select, 1090 minutes]:
goroutine 901255293 [select, 3070 minutes]:
goroutine 1182713725 [select, 850 minutes]:
goroutine 1082417196 [select, 1630 minutes]:
goroutine 57737376 [select, 9670 minutes]:
goroutine 351422492 [select, 7390 minutes]:
goroutine 993038365 [select, 2350 minutes]:
goroutine 138887893 [select, 9010 minutes]:
goroutine 1120928800 [select, 1330 minutes]:
goroutine 93743643 [select, 9370 minutes]:
goroutine 1097692033 [select, 1510 minutes]:
goroutine 729605168 [select, 4450 minutes]:
goroutine 841436114 [select, 3550 minutes]:
goroutine 962543336 [select, 2590 minutes]:
goroutine 28796939 [select, 9910 minutes]:
goroutine 335717913 [select, 7510 minutes]:
goroutine 699575842 [select, 4690 minutes]:
goroutine 1243681964 [select, 370 minutes]:
goroutine 1136625125 [select, 1210 minutes]:
goroutine 485775377 [select, 6370 minutes]:
goroutine 162499053 [select, 8830 minutes]:
goroutine 414259089 [select, 6910 minutes]:
goroutine 264895384 [select, 8050 minutes]:
goroutine 438387062 [select, 6730 minutes]:
goroutine 782209677 [select, 4030 minutes]:
goroutine 201483108 [select, 8530 minutes]:
goroutine 64881153 [select, 9610 minutes]:
goroutine 1212747323 [select, 610 minutes]:
goroutine 86552459 [select, 9430 minutes]:

You can find the goroutine and allocs dumps in the attachment:
logs.zip

odeke-em (Contributor) commented:

My apologies for the late reply, @chessman! There was an almost two-week freeze in between, but I am back now. Goroutines getting stuck on a select is very odd; perhaps this could be a scheduler problem. Let me examine your logs.

odeke-em (Contributor) commented:

  1. Could you try Go 1.12.5 to rule out any net/http 1.11 problems that might have been fixed? If this fixes it, please let us know!
  2. Could you provide your project ID and bucket name so that we can work with the storage team to investigate.
  3. Could you provide more information about your environment? Are you running on GKE, a VM, a docker image, etc?
  4. Could you try running with GODEBUG=http2debug=2 and posting the logs around this failure? (more is better!)

cc @frankyn

jeanbza added the needs more info label on May 10, 2019
odeke-em (Contributor) commented:

I took some time to finish up my repro and a hypothesis: perhaps the HTTP/2 frontend is stalling reads, which would explain why there are goroutines "hanging forever". You can see it at https://gist.github.com/odeke-em/b62737b89b91e71ffbf0545581976cbc or in the inlined source code below.

package main

import (
	"bytes"
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"io/ioutil"
	"log"
	"mime/multipart"
	"net/http"
	"net/http/httptest"
	"net/textproto"
	"net/url"
	"os"
	"syscall"
	"time"

	"golang.org/x/net/http2"
)

func main() {
	cst := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Stalled server that takes forever to read or could be overloaded.
		// It already established a connection though.
		<-time.After(10 * time.Minute)

		slurp, _ := ioutil.ReadAll(r.Body)
		log.Printf("Request payload: %s\n", slurp)
		w.Write(bytes.Repeat([]byte("a"), 3000))
	}))
	if err := http2.ConfigureServer(cst.Config, nil); err != nil {
		log.Fatalf("http2.ConfigureServer: %v", err)
	}
	cst.StartTLS()
	defer cst.Close()

	tlsConfig := &tls.Config{InsecureSkipVerify: true}
	u, _ := url.Parse(cst.URL)
	tlsConn, err := tls.Dial("tcp", u.Host, tlsConfig)
	if err != nil {
		log.Fatalf("Failed to create a tls connection: %v", err)
	}

	prc, pwc := io.Pipe()

	go func() {
		h := make(textproto.MIMEHeader)
		mpw := multipart.NewWriter(pwc)
		w, err := mpw.CreatePart(h)
		if err != nil {
			mpw.Close()
			pwc.CloseWithError(fmt.Errorf("CreatePart failed: %v", err))
			return
		}

		n, _ := pwc.Write(bytes.Repeat([]byte("a"), 39<<20))
		println("read ", n)

		r := bytes.NewReader([]byte(`{"id": "1380", "type": "issue"}`))
		if _, err := io.Copy(w, r); err != nil {
			mpw.Close()
			pwc.CloseWithError(fmt.Errorf("Copy failed: %v", err))
			return
		}

		println("done read in goroutine")
		mpw.Close()
		pwc.Close()
	}()

	tr := &http2.Transport{TLSClientConfig: tlsConfig}
	cc, err := tr.NewClientConn(tlsConn)
	if err != nil {
		log.Fatalf("(*http2.Transport).NewClientConn: %v", err)
	}

	// Find our own process and in the background send ourselves SIGQUIT.
	selfProcess, err := os.FindProcess(os.Getpid())
	if err != nil {
		log.Fatalf("Failed to find own process: %v", err)
	}
	go func() {
		<-time.After(6 * time.Second)
		if err := selfProcess.Signal(syscall.SIGQUIT); err != nil {
			log.Fatalf("Failed to send self SIGQUIT: %v", err)
		}
	}()

	// Send that ping frame and ensure we have an established connection
	// and that the server is one stalled and body reads are stalled.
	if err := cc.Ping(context.Background()); err != nil {
		log.Fatalf("(*http2.ClientConn).Ping: %v", err)
	}

	req, _ := http.NewRequest("GET", cst.URL, prc)
	res, err := cc.RoundTrip(req)
	if err != nil {
		log.Fatalf("http.Transport.Roundtrip error: %v", err)
	}
	defer res.Body.Close()
	blob, _ := ioutil.ReadAll(res.Body)
	log.Printf("%s\n", blob)
}

and that produces pretty much an identical stack trace:

goroutine 51 [select]:
io.(*pipe).Write(0xc000166500, 0xc0001ae090, 0x42, 0x82, 0x0, 0x0, 0x0)
	/Users/emmanuelodeke/go/src/go.googlesource.com/go/src/io/pipe.go:87 +0x1dc
io.(*PipeWriter).Write(0xc000162020, 0xc0001ae090, 0x42, 0x82, 0x12f9620, 0x13476e0, 0x10cfc01)
	/Users/emmanuelodeke/go/src/go.googlesource.com/go/src/io/pipe.go:153 +0x4c
bytes.(*Buffer).WriteTo(0xc0000969c0, 0x13be9c0, 0xc000162020, 0x46350b0, 0xc0000969c0, 0x1)
	/Users/emmanuelodeke/go/src/go.googlesource.com/go/src/bytes/buffer.go:242 +0xb5
io.copyBuffer(0x13be9c0, 0xc000162020, 0x13be780, 0xc0000969c0, 0x0, 0x0, 0x0, 0x2, 0x0, 0x0)
	/Users/emmanuelodeke/go/src/go.googlesource.com/go/src/io/io.go:384 +0x33f
io.Copy(...)
	/Users/emmanuelodeke/go/src/go.googlesource.com/go/src/io/io.go:364
mime/multipart.(*Writer).CreatePart(0xc000096990, 0xc000064e48, 0x0, 0x0, 0x0, 0x0)
	/Users/emmanuelodeke/go/src/go.googlesource.com/go/src/mime/multipart/writer.go:121 +0x3fa
main.main.func2(0xc000162020)
	/Users/emmanuelodeke/Desktop/openSrc/bugs/google-cloud-go/1380/main.go:51 +0x11f
created by main.main
	/Users/emmanuelodeke/Desktop/openSrc/bugs/google-cloud-go/1380/main.go:48 +0x3d3

which is pretty much identical to the one in #1380 (comment), accounting for code changes between Go 1.11 and tip (the upcoming Go 1.13):

goroutine 15570455 [select, 794 minutes]:
io.(*pipe).Write(0xc0003e2000, 0xc015038aa0, 0x62, 0xa0, 0x0, 0x0, 0x0)
        /usr/lib/go-1.11/src/io/pipe.go:87 +0x1cc
io.(*PipeWriter).Write(0xc000310e78, 0xc015038aa0, 0x62, 0xa0, 0x915fa0, 0x99e520, 0xdf6a01)
        /usr/lib/go-1.11/src/io/pipe.go:153 +0x4c
bytes.(*Buffer).WriteTo(0xc000264770, 0xa3ce40, 0xc000310e78, 0x7f54ecae6f58, 0xc000264770, 0xc0418c1d01)
        /usr/lib/go-1.11/src/bytes/buffer.go:241 +0xb6
io.copyBuffer(0xa3ce40, 0xc000310e78, 0xa3c5a0, 0xc000264770, 0x0, 0x0, 0x0, 0x2, 0xc0418d4000, 0x0)
        /usr/lib/go-1.11/src/io/io.go:384 +0x352
io.Copy(0xa3ce40, 0xc000310e78, 0xa3c5a0, 0xc000264770, 0x0, 0x0, 0x0)
        /usr/lib/go-1.11/src/io/io.go:364 +0x5a
mime/multipart.(*Writer).CreatePart(0xc00496bf50, 0xc039af10e0, 0xc039af10e0, 0xa426e0, 0x9b7d79, 0x9a000)
        /usr/lib/go-1.11/src/mime/multipart/writer.go:115 +0x3e5

@frankyn and Google Cloud Storage team, let's also examine the server side. Maybe this is an old x/net/http2 bug as well, but the nearly identical repro above might be a clue that this is a server issue.

chessman (Author) commented:

Sorry for the late response.

  1. Could you try Go 1.12.5 to rule out any net/http 1.11 problems that might have been fixed? If this fixes it, please let us know!

I've tried it with Go 1.12.5, with the same result.

  2. Could you provide your project ID and bucket name so that we can work with the storage team to investigate.

Project ID: datomia-2
Bucket: ea-gcs-bug2

I've just started the reproduction and will stop it tomorrow.

  3. Could you provide more information about your environment? Are you running on GKE, a VM, a docker image, etc?

I'm using a VM instance on Compute Engine.

  4. Could you try running with GODEBUG=http2debug=2 and posting the logs around this failure? (more is better!)

That's a huge amount of logs and I don't know how to identify a failure.

cc @frankyn

jeanbza removed the needs more info label on May 30, 2019
jeanbza (Member) commented Jun 6, 2019

cc @frankyn

frankyn (Member) commented Jun 18, 2019

Hi @odeke-em,

IIUC, your current hypothesis is that the Google Frontend is stalling HTTP/2 communication due to flooding or similar issues given the example provided in #issuecomment-493830375.

Is your current ask to relay a question on limits to the Storage team for more input?

I appreciate your patience, thank you!

odeke-em (Contributor) commented:

@frankyn, yes indeed! In that example I simulated a stalled read, and when we get a core dump we see an almost identical stack trace to the one from the stalled read goroutine in the original reporter's issue.

Is your current ask to relay a question on limits to the Storage team for more input?

Perhaps. I wanted to pair up with you and try to figure out how and when the GFE could stall reads, and whether there are reported cases of this happening during flooding.

Thank you for chiming in.

frankyn (Member) commented Jun 18, 2019

Thanks @odeke-em, I don't have experience debugging requests through the GFE (at least not yet).

Summoning @BrandonY (GCS API owner), who may have more input on flooding and stalled reads related to the GFE.
Brandon, do you have insight on this failure case w.r.t the GFE?

odeke-em (Contributor) commented:

Thank you for following through @frankyn!

If you @frankyn or @BrandonY are anywhere in the Bay Area, I can come in for an in-person or have a meeting about it.

frankyn (Member) commented Jun 18, 2019

Following up through email.

jeanbza added the type: bug and priority: p1 labels and removed the status: investigating label on Jul 18, 2019
jeanbza (Member) commented Jul 18, 2019

cc @frankyn @odeke-em Can we take a minute to see if this is us, GCS, or the standard lib and proceed accordingly?

odeke-em (Contributor) commented:

@chessman unfortunately this doesn't seem to be something this package can fix. There is a similar bug reported against the net/http library that will perhaps/hopefully be fixed for Go 1.14, and we are following up there: golang/go#29246 (comment).

Please follow along on the Go net/http bug instead.
