
fix: s.concurrency stays 0 when you don't use s.ServeConn #780

Merged · 2 commits into master · Apr 18, 2020

Conversation

kirillDanshin (Collaborator)

No description provided.

@erikdubbelboer (Collaborator)

I'm not sure what this fixes. The only way this can go wrong is if you mix Serve and ServeConn on the same Server. But even after this change, doing that will still let you go over the Concurrency limit: serveConn doesn't check against the limit itself, since it normally relies on the worker pool size to enforce it. If we really want to always enforce the limit properly, we should also check it in serveConn, so that calling Serve and ServeConn multiple times (in any combination) still sticks to the limit.
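For what it's worth, a limit check inside serveConn could look roughly like the following sketch (toy type and hypothetical field names, not fasthttp's actual internals):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// toyServer mimics the relevant bookkeeping: a current-connections
// counter and the Concurrency limit it should never exceed.
type toyServer struct {
	concurrency    uint32 // connections currently being served
	maxConcurrency uint32 // the Concurrency limit
}

// tryAcquire atomically increments the counter unless the limit has
// been reached; it returns false when the connection should be
// rejected. The CAS loop prevents two connections racing past the
// limit between the load and the increment.
func (s *toyServer) tryAcquire() bool {
	for {
		n := atomic.LoadUint32(&s.concurrency)
		if n >= s.maxConcurrency {
			return false
		}
		if atomic.CompareAndSwapUint32(&s.concurrency, n, n+1) {
			return true
		}
	}
}

// release decrements the counter once serving the connection finishes.
func (s *toyServer) release() {
	atomic.AddUint32(&s.concurrency, ^uint32(0)) // atomic decrement
}

func main() {
	s := &toyServer{maxConcurrency: 2}
	fmt.Println(s.tryAcquire()) // true
	fmt.Println(s.tryAcquire()) // true
	fmt.Println(s.tryAcquire()) // false: limit reached
	s.release()
	fmt.Println(s.tryAcquire()) // true again
}
```

With a check like this, Serve and ServeConn would share one enforcement point instead of relying on the worker pool size alone.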

@kirillDanshin (Collaborator, Author)

@erikdubbelboer this is a repro I got from the bug reporter:

package main

import (
	"html/template"
	"sync/atomic"

	"github.com/valyala/fasthttp"
)

const tpl = `Active connections: {{.Active}}
server accepts handled requests
{{.Accepted}} {{.Handled}} {{.Requests}}
Reading: {{.Reading}} Writing: {{.Writing}} Waiting: {{.Waiting}}`

// Status like nginx
type Status struct {
	Active   int32  // Current opened connections in total
	Accepted uint64 // All accepted connections
	Handled  uint64 // All handled connections, which normally equals to the total number of accepted connections
	Requests uint64 // Total number of handled requests
	Reading  uint32 // Current reads request headers
	Writing  uint32 // Current reads request bodies, processes requests, or writes responses to a client
	Waiting  uint32 // Current Keep-Alive connections
}

type metrics struct {
	server   *fasthttp.Server
	template *template.Template

	accepted uint64
	handled  uint64
	requests uint64
}

func (s *metrics) status() Status {
	var status Status
	status.Active = s.server.GetOpenConnectionsCount()
	status.Accepted = atomic.LoadUint64(&s.accepted)
	status.Handled = atomic.LoadUint64(&s.handled)
	status.Requests = atomic.LoadUint64(&s.requests)
	status.Reading = 0 // TODO
	status.Writing = s.server.GetCurrentConcurrency()
	status.Waiting = uint32(status.Active) - status.Writing
	return status
}

func (s *metrics) handler(ctx *fasthttp.RequestCtx) {
	ctx.SetContentType("text/plain")
	ctx.SetStatusCode(fasthttp.StatusOK)
	s.template.Execute(ctx, s.status())
}

// StatusMiddleware ...
func StatusMiddleware(s *fasthttp.Server) Middleware {
	m := metrics{
		server:   s,
		template: template.Must(template.New("status").Parse(tpl)),
	}

	handler := s.Handler
	s.Handler = func(ctx *fasthttp.RequestCtx) {
		if "/nginx_status" == string(ctx.Path()) {
			m.handler(ctx)
		} else {
			handler(ctx)
		}
	}

	return func(handler fasthttp.RequestHandler) fasthttp.RequestHandler {
		return func(ctx *fasthttp.RequestCtx) {
			atomic.AddUint64(&m.requests, 1)

			handler(ctx)

			atomic.AddUint64(&m.accepted, 1)
			if ctx.Response.StatusCode() < 500 {
				atomic.AddUint64(&m.handled, 1)
			}
		}
	}
}

and this snippet:

func NewFastHTTPServer(addr string, handler fasthttp.RequestHandler) *FastHTTPServer {
  return &FastHTTPServer{
    addr: addr,
    server: &fasthttp.Server{
      Name:         "nginx",
      Handler:      handler,
      TCPKeepalive: true,
      //GetOnly:      true,
    },
  }
}

This is all additional code I had to write to run the repro:

package main

import "github.com/valyala/fasthttp"

type FastHTTPServer struct {
	addr   string
	server *fasthttp.Server
}

type Middleware func(handler fasthttp.RequestHandler) fasthttp.RequestHandler

func NewFastHTTPServer(addr string, handler fasthttp.RequestHandler) *FastHTTPServer {
	s := &FastHTTPServer{
		addr: addr,
		server: &fasthttp.Server{
			Name:         "nginx",
			Handler:      handler,
			TCPKeepalive: true,
			//GetOnly:      true,
		},
	}

	StatusMiddleware(s.server)

	return s
}

func (s *FastHTTPServer) ListenAndServe() error {
	return s.server.ListenAndServe(s.addr)
}

func main() {
	s := NewFastHTTPServer(":8080", func(ctx *fasthttp.RequestCtx) {
		ctx.WriteString("ok")
	})

	s.ListenAndServe()
}

That person wants to integrate Amplify with fasthttp for whatever reason (Name: "nginx" looks really funny, btw), and they're using GetCurrentConcurrency to populate the Writing stat.

We tried fasthttp versions 1.9 and 1.1 and saw a constant zero in this field. After this fix, it works.
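For anyone following along: the fix boils down to having ServeConn bump the same internal counter that GetCurrentConcurrency reads, so it no longer stays at zero for ServeConn-only callers. A self-contained sketch of the pattern (toy types, not fasthttp's actual internals):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// toyServer holds the counter that backs GetCurrentConcurrency.
type toyServer struct {
	concurrency uint32
}

// ServeConn serves one connection; the fix is the increment/decrement
// pair around the actual serving logic (here represented by serve()).
func (s *toyServer) ServeConn(serve func()) {
	atomic.AddUint32(&s.concurrency, 1)
	defer atomic.AddUint32(&s.concurrency, ^uint32(0)) // decrement
	serve()
}

// GetCurrentConcurrency reports how many connections are currently
// being served; before the fix it stayed at 0 for ServeConn callers.
func (s *toyServer) GetCurrentConcurrency() uint32 {
	return atomic.LoadUint32(&s.concurrency)
}

func main() {
	s := &toyServer{}
	fmt.Println(s.GetCurrentConcurrency()) // 0 before any connection
	s.ServeConn(func() {
		fmt.Println(s.GetCurrentConcurrency()) // 1 while serving
	})
	fmt.Println(s.GetCurrentConcurrency()) // 0 again afterwards
}
```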

@erikdubbelboer (Collaborator)

Ok that makes sense.

Maybe we should also add some documentation here:

fasthttp/server.go

Lines 180 to 183 in 69f0e66

// The maximum number of concurrent connections the server may serve.
//
// DefaultConcurrency is used if not set.
Concurrency int

to specify that Concurrency only works if you either call Serve once, or only ServeConn multiple times.
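One possible wording for the expanded doc comment (just a sketch, to be adjusted in review):

```go
// The maximum number of concurrent connections the server may serve.
//
// DefaultConcurrency is used if not set.
//
// Concurrency only works if you either call Serve once, or only
// ServeConn multiple times. It has no effect when Serve and
// ServeConn are mixed on the same Server.
Concurrency int
```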

@kirillDanshin (Collaborator, Author)

Yeah, that’s definitely a good idea, let’s do that

Signed-off-by: Kirill Danshin <[email protected]>
@kirillDanshin kirillDanshin merged commit f9ef8fc into master Apr 18, 2020
@erikdubbelboer erikdubbelboer deleted the fix/concurrency-counter-in-serveConn branch April 26, 2020 12:59