ERROR: Failed to extract ServerMetadata from context #4605

Open
andreslunchbox opened this issue Aug 7, 2024 · 14 comments

Comments

@andreslunchbox

andreslunchbox commented Aug 7, 2024

After upgrading from:
github.com/grpc-ecosystem/grpc-gateway/v2 v2.18.1

To:
github.com/grpc-ecosystem/grpc-gateway/v2 v2.21.0

I am encountering the following error multiple times even when I'm not making API calls to the gateway:

2024/08/07 17:23:39 ERROR: Failed to extract ServerMetadata from context

gRPC-Gateway version: v2.21.0
Environment: Amazon Elastic Kubernetes Service (EKS) + AWS ALB Ingress Controller

Additional Context
I've observed this error consistently since upgrading the gateway version. It appears when the gateway is behind an AWS ALB Ingress Controller; it does not show up in local development using an NGINX ingress.

@johanbrandhorst
Collaborator

This error message usually appears when a wrapped http.ResponseWriter is used that doesn't implement the Unwrap method expected by http.ResponseController (see https://www.alexedwards.net/blog/how-to-use-the-http-responsecontroller-type). I can't tell you why these are only appearing in your AWS deployment; presumably they are from random incoming requests? They shouldn't be happening without any incoming requests at all. If you think there's a bug here, could you provide a minimal working example?
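
For reference, a minimal sketch of such a wrapper; the statusRecorder type here is a hypothetical middleware example, not something from the gateway itself:

package main

import "net/http"

// statusRecorder is a hypothetical wrapper that records the response status code.
type statusRecorder struct {
    http.ResponseWriter
    status int
}

func (r *statusRecorder) WriteHeader(code int) {
    r.status = code
    r.ResponseWriter.WriteHeader(code)
}

// Unwrap exposes the underlying writer so http.ResponseController (and code
// relying on it) can reach the original http.ResponseWriter through the wrapper.
func (r *statusRecorder) Unwrap() http.ResponseWriter {
    return r.ResponseWriter
}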

@andreslunchbox
Author

andreslunchbox commented Aug 7, 2024

I took a look at the link you provided, but I don't think we have changed anything around that. In fact, our gateway setup has been pretty consistent for over a year now; we always follow the same pattern when exposing new microservices, which is pretty much:

import "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"

func main() {
    grpcPort := config.Get("GRPC_PORT")
    httpPort := config.Get("HTTP_PORT")
    
    grpcServerEndpoint := fmt.Sprintf("%s:%s", "", grpcPort)
    httpServerEndpoint := fmt.Sprintf("%s:%s", "", httpPort)
    
    lis, err := net.Listen("tcp", grpcServerEndpoint)
    if err != nil {
        logger.Fatal("failed to listen", "error", err)
    }

    mux := runtime.NewServeMux()
    httpProxyOpts := []grpc.DialOption{grpc.WithTransportCredentials(insecure.NewCredentials())}

    authServer := auth.NewServer()

    authpb.RegisterAuthenticationServiceServer(grpcServer, authServer)
    authpb.RegisterAuthenticationServiceHandlerFromEndpoint(ctx, mux, grpcServerEndpoint, httpProxyOpts)

    http.HandleFunc("/health", func(w http.ResponseWriter, req *http.Request) { fmt.Fprintf(w, "ok") })

    http.ListenAndServe(httpServerEndpoint, mux)
}

Surprisingly, it started throwing that error after upgrading. One important thing to note is that it is not breaking anything, but it is noisy and makes the logs hard to read.

@johanbrandhorst
Collaborator

Ah, I think I see what's happening. In 2.21 we changed some INFO logs to ERROR-level logs: #4327. You can silence these logs by overriding the output of the grpc logger:

grpclog.SetLoggerV2(grpclog.NewLoggerV2(io.Discard, io.Discard, io.Discard))

The error was probably always happening, but you're only noticing it now because we changed the log level. As to why it's happening, do you update your generator at the same time as you update the runtime library?
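
For completeness, a minimal self-contained sketch of that workaround, assuming you're happy to discard all grpc-go log output rather than only this message:

package main

import (
    "io"

    "google.golang.org/grpc/grpclog"
)

func init() {
    // Discard INFO, WARNING, and ERROR output from grpc-go and the gateway runtime.
    grpclog.SetLoggerV2(grpclog.NewLoggerV2(io.Discard, io.Discard, io.Discard))
}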

@andreslunchbox
Author

Thanks! I'll give that a try.
With generator do you mean these dependencies:

github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-grpc-gateway
github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-openapiv2
google.golang.org/protobuf/cmd/protoc-gen-go
google.golang.org/grpc/cmd/protoc-gen-go-grpc

?

@johanbrandhorst
Collaborator

Specifically, github.com/grpc-ecosystem/grpc-gateway/v2/protoc-gen-grpc-gateway, yes. But you need to update the installed plugin for the generated output to change; how that works depends on how you manage your generation. There are example generation setups in the README.

@andreslunchbox
Author

Okay, yeah, it looks like we updated the generator pretty recently. We are currently using v1.5.1; would that be an issue?
For generation we use the Buf CLI with the default mapping to HTTP semantics in our .proto files.

@johanbrandhorst
Collaborator

v1.5.1 of which generator? Probably not protoc-gen-grpc-gateway, right? That would be very old.

@andreslunchbox
Author

Yeah, sorry, my bad. We actually moved to Buf Remote Plugins and updated the versions of all the generators. This is our buf.gen.yaml:

version: v2
plugins:
  - remote: buf.build/protocolbuffers/go:v1.34.2
    out: ./gen/go
    opt: paths=source_relative

  - remote: buf.build/grpc/go:v1.5.1
    out: ./gen/go
    opt: paths=source_relative

  - remote: buf.build/grpc-ecosystem/gateway:v2.21.0
    out: ./gen/go
    opt: paths=source_relative

  - remote: buf.build/protocolbuffers/csharp:v27.3
    out: ./gen/csharp
    
  - remote: buf.build/bufbuild/es:v2.0.0
    out: ./gen/ts
    opt:
      - target=ts
      - json_types=true
      - import_extension=ts

I can confirm that the generator version v2.21.0 matches the runtime version and we are still seeing:

ERROR: Failed to extract ServerMetadata from context

@johanbrandhorst
Collaborator

OK, great. As I said, you can turn off the grpc logger if you want, but if you want to get to the bottom of why this is happening, do you think you could provide a minimal reproducible example? This isn't expected in general use of the gateway.

@ccampo133

ccampo133 commented Aug 21, 2024

I just noticed this error message popping up, but only when I send an HTTP request to an endpoint that is not found. For example, my server serves GET /v1/ping but not GET /ping, and the log message gets printed on requests to the latter. Both buf.build/grpc-ecosystem/gateway and github.com/grpc-ecosystem/grpc-gateway/v2 are the same version (v2.22.0). I will post a working example shortly.

I'm a bit unsure why this is logged as an error at all; it seems like the missing metadata is checked and logged, but then everything carries on as if nothing happened (see https://github.com/grpc-ecosystem/grpc-gateway/blob/main/runtime/errors.go#L135-L138). It doesn't really seem like an error to me.
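
If the noise is the main concern, one possible workaround is to register a custom routing error handler so unmatched routes are answered directly instead of going through the default error handler. This is only a sketch, under the assumption that the message is triggered on the routing-error path:

package main

import (
    "context"
    "net/http"

    "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
)

func newMux() *runtime.ServeMux {
    return runtime.NewServeMux(
        runtime.WithRoutingErrorHandler(func(ctx context.Context, mux *runtime.ServeMux, m runtime.Marshaler, w http.ResponseWriter, r *http.Request, httpStatus int) {
            // Answer unmatched routes (404, 405, ...) directly, which may avoid the
            // default error handler that tries to read ServerMetadata from the context.
            http.Error(w, http.StatusText(httpStatus), httpStatus)
        }),
    )
}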

In any case, standby for an example.

@ccampo133

@johanbrandhorst here's an example project which produces this log message when you make API calls to endpoints that aren't handled by the gateway - https://github.com/ccampo133/lambda-grpc-gateway/tree/grpc_gateway_issue_4605 (note the branch grpc_gateway_issue_4605).

You can run the service locally, make calls to it, and you'll see the message logged. For example, in one terminal, run:

$ go run cmd/cli/main.go       
{"level":"info","msg":"Application server started","time":"2024-08-26T16:12:27-04:00"}
{"level":"info","msg":"gRPC server listening at [::]:8081","time":"2024-08-26T16:12:27-04:00"}
{"level":"info","msg":"HTTP server listening at [::]:8080","time":"2024-08-26T16:12:27-04:00"}

Then in some other terminal:

$ curl http://localhost:8080/foo
{"code":5,"message":"Not Found","details":[]}

And back in the first terminal you'll see the log message:

2024/08/26 16:13:02 ERROR: Failed to extract ServerMetadata from context

My guess is that @andreslunchbox is seeing these logs on his AWS deployment due to miscellaneous web traffic (scrapers, etc.).

@johanbrandhorst
Collaborator

Maybe this should get the same treatment as #4594 and we just return the error instead of logging it?

@ccampo133

After looking at the code a bit closer, I think that's probably a fine approach. I'll take a stab at submitting a PR if I find some free time soon, but please don't wait for me. I appreciate the responses!

@moritzzimmer

moritzzimmer commented Oct 31, 2024

We stumbled upon the same problem. I can confirm that this log message only appears for requests that can't be handled (e.g. the favicon request when testing in a browser).

We're using the workaround at the moment.
