Error found when executing sonobuoy in an offline environment. #1109

Closed
rplanteras opened this issue Apr 23, 2020 · 39 comments

@rplanteras

I created a procedure based on the information I got from #1028 to use Sonobuoy in an offline environment. Now, a user of that procedure has reported that they encountered an error, but I cannot determine the cause.

I would like to ask: what could have caused the following errors?

[root@XXXXX ~]# sonobuoy status --kubeconfig $HOME/bin/config
ERRO[0000] error attempting to run sonobuoy: missing status annotation "sonobuoy.hept.io/status"

[root@XXXXX ~]# sonobuoy logs -d --kubeconfig $HOME/bin/config
namespace="sonobuoy" pod="sonobuoy" container="kube-sonobuoy"
time="2020-04-22T07:22:55Z" level=info msg="Scanning plugins in ./plugins.d (pwd: /)"
time="2020-04-22T07:22:55Z" level=info msg="Scanning plugins in /etc/sonobuoy/plugins.d (pwd: /)"
time="2020-04-22T07:22:55Z" level=info msg="Directory (/etc/sonobuoy/plugins.d) does not exist"
time="2020-04-22T07:22:55Z" level=info msg="Scanning plugins in ~/sonobuoy/plugins.d (pwd: /)"
time="2020-04-22T07:22:55Z" level=info msg="Directory (~/sonobuoy/plugins.d) does not exist"
time="2020-04-22T07:23:25Z" level=error msg="could not get api group resources: Get https://:443/api?timeout=32s: dial tcp 443: i/o timeout"
@zubron
Contributor

zubron commented Apr 24, 2020

Hi @rplanteras. Can you confirm that your kubeconfig file is configured correctly? Looking at the logs, it seems that the server URL is missing, as seen in this log line: Get https://:443/api?timeout=32s. Given that the API server URL is missing, Sonobuoy is unable to perform queries against the cluster.

You can check your kubeconfig file using kubectl as follows:

kubectl cluster-info --kubeconfig $HOME/bin/config

If this is successful, you should see output like the following:

Kubernetes master is running at https://127.0.0.1:32769
KubeDNS is running at https://127.0.0.1:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
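In addition to `kubectl cluster-info`, the kubeconfig file itself can be inspected. A minimal sketch (assuming the kubeconfig path used in this thread, `$HOME/bin/config`, and a standard single-cluster kubeconfig layout) that extracts the API server host with plain shell; an empty result corresponds to the `https://:443` symptom in the error above:

```shell
# Rough sanity check -- adjust CFG to your own kubeconfig location.
CFG="$HOME/bin/config"
server=$(grep -m1 'server:' "$CFG" 2>/dev/null | awk '{print $2}')
host=${server#*//}    # strip the scheme, e.g. https://
host=${host%:*}       # strip the trailing :port (keeps [ipv6] brackets intact)
if [ -z "$host" ]; then
  echo "No API server host found in $CFG (server field: '$server')"
else
  echo "API server host: $host"
fi
```

This is only a heuristic (it reads the first `server:` line it finds); `kubectl config view` remains the authoritative way to inspect the file.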

@rplanteras
Author

rplanteras commented Apr 27, 2020

Hello @zubron. I'm sorry for the confusion: in "Get https://[IPv6 Address]:443/api?timeout=32s" there is an IPv6 address. Sorry, it was omitted.

Basically, the environment has no internet connection. We tried to test the k8s cluster with Sonobuoy without internet access. When checking the pods, the sonobuoy pod has been created and is running, but the testing did not complete.

[root@uhn7klrc6rbms001 ~]# kubectl get pods -A

NAMESPACE   NAME       READY   STATUS    RESTARTS   AGE
...
sonobuoy    sonobuoy   1/1     Running   0          19h

[root@uhn7klrc6rbms001 ~]# kubectl get namespace

NAME       STATUS   AGE
...
sonobuoy   Active   19h

@rplanteras
Author

rplanteras commented Apr 27, 2020

Can you also enlighten me about this error:

[root@uhn7klrc6rbmb001 ~]# sonobuoy status --kubeconfig $HOME/bin/config
ERRO[0000] error attempting to run sonobuoy: missing status annotation "sonobuoy.hept.io/status"
INFO[0000] created object                                name=sonobuoy-master namespace=sonobuoy resource=services
    packet_write_wait: Connection to UNKNOWN port 65535: Broken pipe
Is this "Broken pipe" error always occurring?

@zubron
Contributor

zubron commented Apr 27, 2020

Thanks for the explanation, @rplanteras.

The error about the status annotation relates to an annotation that is placed on the main sonobuoy pod and is updated during the course of a run with information about the current state of the plugins being run. Sonobuoy uses that annotation to determine the overall status of a run, so if it is missing, the sonobuoy CLI cannot determine the status, which is why it resulted in an error. It is not set when the pod is created, it is only added once the initial set up for the sonobuoy aggregation process has finished.

I understand that you omitted the IP address, but even so, that error still indicates that there were issues connecting to the API server. Looking at the logs from your first post, it shows that it happened very early in the running of the sonobuoy process on that pod. Without seeing more logs, my guess is that the connection to the API server failed, and as a result it couldn't create a client to perform actions on the cluster, such as adding the status annotation to the sonobuoy pod. Were there more log entries following the could not get api group resources error when running sonobuoy logs?

I would try deleting and re-running sonobuoy. If you encounter the same error, it's more likely to indicate an issue with your cluster or kubeconfig file, which you might want to check (perhaps with some simpler non-Sonobuoy workloads).
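One quick way to see the annotation (or its absence) directly is to query the pod with a jsonpath expression. A sketch, assuming the default pod name `sonobuoy` in the `sonobuoy` namespace and the kubeconfig path from this thread; note the dots in the annotation key must be escaped:

```shell
# Empty output here corresponds to the "missing status annotation" error
# from the sonobuoy CLI. Requires access to the cluster.
kubectl get pod sonobuoy -n sonobuoy --kubeconfig "$HOME/bin/config" \
  -o jsonpath='{.metadata.annotations.sonobuoy\.hept\.io/status}' \
  || echo "could not query the pod (cluster unreachable or pod missing)"
```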

@rplanteras
Author

Thank you for your reply, @zubron. I'm sorry to disturb you. Please see the sonobuoy logs output below.

[root@sonobuoy-host ~]# sonobuoy logs -d --kubeconfig $HOME/bin/config
namespace="sonobuoy" pod="sonobuoy" container="kube-sonobuoy"
time="2020-04-22T07:22:55Z" level=info msg="Scanning plugins in ./plugins.d (pwd: /)"
time="2020-04-22T07:22:55Z" level=info msg="Scanning plugins in /etc/sonobuoy/plugins.d (pwd: /)"
time="2020-04-22T07:22:55Z" level=info msg="Directory (/etc/sonobuoy/plugins.d) does not exist"
time="2020-04-22T07:22:55Z" level=info msg="Scanning plugins in ~/sonobuoy/plugins.d (pwd: /)"
time="2020-04-22T07:22:55Z" level=info msg="Directory (~/sonobuoy/plugins.d) does not exist"
time="2020-04-22T07:23:25Z" level=error msg="could not get api group resources: Get https://[ipv6 address]:443/api?timeout=32s: dial tcp [ipv6 address]:443: i/o timeout"
time="2020-04-22T07:23:25Z" level=info msg="no-exit was specified, sonobuoy is now blocking"

@rplanteras
Author

@zubron

"It is not set when the pod is created, it is only added once the initial set up for the sonobuoy aggregation process has finished." -> Does this mean that even though the sonobuoy pod was created, there is no assurance that the annotation is set?

@rplanteras
Author

@zubron

"Without seeing more logs, my guess is that the connection to the API server failed," -> In our case, the environment is an air-gapped environment. We don't expect our server to connect to the internet.

@zubron
Contributor

zubron commented Apr 27, 2020

No need to apologise :) It can be difficult to debug these issues.

"It is not set when the pod is created, it is only added once the initial set up for the sonobuoy aggregation process has finished." -> Does this mean that even though the sonobuoy pod was created, there is no assurance that the annotation is set?

Yes, that is correct. The pod is created without the annotation; the annotation is only added later by the sonobuoy process running in the pod.

"Without seeing more logs, my guess is that the connection to the API server failed," -> In our case, the environment is an air-gapped environment. We don't expect our server to connect to the internet.

Apologies, when I say API server, I mean the Kubernetes API server in your cluster which should be accessible at the IP address which you omitted in the logs. It is the server in your kubeconfig which is being accessed with the request Get https://[ipv6 address]:443/api?timeout=32s.
I've actually just noticed that you are using an IPv6 address. I'm not aware of any testing that we've done using IPv6, so I'm wondering if that is causing an issue when trying to make requests 😕 Does your setup only support IPv6? Do you have an IPv4 address that you could try instead in the kubeconfig?

@rplanteras
Author

No need to apologise :) It can be difficult to debug these issues.

"It is not set when the pod is created, it is only added once the initial set up for the sonobuoy aggregation process has finished." -> Does this mean that even though the sonobuoy pod was created, there is no assurance that the annotation is set?

Yes, that is correct. The pod is created without the annotation; the annotation is only added later by the sonobuoy process running in the pod.

"Without seeing more logs, my guess is that the connection to the API server failed," -> In our case, the environment is an air-gapped environment. We don't expect our server to connect to the internet.

Apologies, when I say API server, I mean the Kubernetes API server in your cluster which should be accessible at the IP address which you omitted in the logs. It is the server in your kubeconfig which is being accessed with the request Get https://[ipv6 address]:443/api?timeout=32s.
I've actually just noticed that you are using an IPv6 address. I'm not aware of any testing that we've done using IPv6 so I'm wondering if that is causing an issue when trying to make requests 😕 Does your set up only support IPv6? Do you have an IPv4 address that you could try instead in the kubeconfig?

In my understanding, it is the sonobuoy pod that makes requests to the Kubernetes API server in the cluster. Is that correct?

@rplanteras
Author

The error about the status annotation relates to an annotation that is placed on the main sonobuoy pod and is updated during the course of a run with information about the current state of the plugins being run. Sonobuoy uses that annotation to determine the overall status of a run, so if it is missing, the sonobuoy CLI cannot determine the status, which is why it resulted in an error. It is not set when the pod is created, it is only added once the initial set up for the sonobuoy aggregation process has finished.

What could have caused the annotation to be missing?

@zubron
Contributor

zubron commented Apr 28, 2020

In my understanding, it is the sonobuoy pod that makes requests to the Kubernetes API server in the cluster. Is that correct?

Yes, that is correct.

What could have caused the annotation to be missing?

The annotation is missing because Sonobuoy could not connect to the Kubernetes API service and so didn't start correctly. Sonobuoy needs to communicate with the Kubernetes API service to perform its operation, so without that it couldn't proceed. It stopped before it could put the annotation on the sonobuoy pod or start any of the tests.

@rplanteras
Author

In my understanding, it is the sonobuoy pod that makes requests to the Kubernetes API server in the cluster. Is that correct?

Yes, that is correct.

What could have caused the annotation to be missing?

The annotation is missing because Sonobuoy could not connect to the Kubernetes API service and so didn't start correctly. Sonobuoy needs to communicate with the Kubernetes API service to perform its operation, so without that it couldn't proceed. It stopped before it could put the annotation on the sonobuoy pod or start any of the tests.

Thank you very much for your answers, @zubron.
I'm sorry for having so many questions.
I am planning to confirm the sonobuoy pod's communication.
The steps I am thinking of are below:

  1. connect to the sonobuoy pod
  2. execute curl to the API server
    The API server I will try to connect to is from the results of "kubectl cluster-info --kubeconfig $HOME/bin/config"

Is my plan okay? Or do you have a suggestion for how I could manually confirm that sonobuoy can connect to the Kubernetes API?

@zubron
Contributor

zubron commented Apr 28, 2020

Yes, you can verify that you can communicate with the API server from the Sonobuoy pod. curl is not installed in the Sonobuoy container image so you will need to install that first using apt.

Even if you can connect to the API server using curl, you will still need to delete and recreate your Sonobuoy setup afterwards to restart the Sonobuoy process. Once restarted, it will try to connect to the API server again. There is no way to retry the connection from the Sonobuoy process running in the pod once it has failed.
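The manual check described above might look like the following sketch. The pod name (`sonobuoy`), namespace, and the API server address are assumptions; `APISERVER` is a placeholder that should be replaced with the address reported by `kubectl cluster-info`. Note that in an air-gapped cluster the `apt-get install` step would need a locally reachable package mirror:

```shell
# Hypothetical connectivity check from inside the sonobuoy pod.
APISERVER="https://[2001:db8::1]:443"   # placeholder: use your cluster-info address
kubectl exec -n sonobuoy sonobuoy --kubeconfig "$HOME/bin/config" -- \
  sh -c "apt-get update -qq && apt-get install -y -qq curl && curl -ksS --max-time 30 $APISERVER/api" \
  || echo "connectivity check failed (or kubectl unavailable from this shell)"
```

A timeout here would point at cluster networking rather than Sonobuoy itself.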

@rplanteras
Author

Hello @zubron. I noticed something in my test environment.
Please see the image below.

[image: environment diagram]

I have a private registry where images are pushed for Sonobuoy to use. I also have a sonobuoy server where I want to execute sonobuoy run (sonobuoy commands).

  1. If I run sonobuoy on the sonobuoy server, I cannot successfully execute sonobuoy run.
  2. If I run sonobuoy on the private registry server, where the images are found (since it is my private registry), I can successfully execute sonobuoy run.

Any thoughts?

@zubron
Contributor

zubron commented Apr 29, 2020

Hi @rplanteras. In the failing run, you can see again that it's producing the same error as before "could not get api group resources". It's failing to make a request to the API server at 10.96.0.1 and because that step is failing, the sonobuoy run stops and it does not add the status annotation.

How are you running sonobuoy on each of these machines? Can you compare the output from sonobuoy gen on both the sonobuoy server and the private registry machine and check that they are the same?

If they are the same, it is the same workload being deployed on the cluster and so should behave the same. It won't matter which machine it was deployed from. If they are the same, then that suggests an issue with configuration in your cluster, and is not a Sonobuoy issue.

If they are different, it might help us understand what the problem is.
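The comparison asked for here can be done mechanically. A sketch, assuming the kubeconfig path from this thread; the file names are arbitrary, and `gen-private-registry.yaml` is assumed to have been copied over from the other machine:

```shell
# On each machine, capture the exact manifest sonobuoy would deploy:
sonobuoy gen --kubeconfig "$HOME/bin/config" > gen-sonobuoy-server.yaml \
  || echo "sonobuoy not available from this shell"
# After copying gen-private-registry.yaml over from the other machine:
if diff -u gen-sonobuoy-server.yaml gen-private-registry.yaml >/dev/null 2>&1; then
  echo "manifests are identical"
else
  echo "manifests differ (or a file is missing); inspect with diff -u"
fi
```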

@rplanteras
Author

Hi @rplanteras. In the failing run, you can see again that it's producing the same error as before "could not get api group resources". It's failing to make a request to the API server at 10.96.0.1 and because that step is failing, the sonobuoy run stops and it does not add the status annotation.

How are you running sonobuoy on each of these machines? Can you compare the output from sonobuoy gen on both the sonobuoy server and the private registry machine and check that they are the same?

I run sonobuoy gen default-image-config and both servers (sonobuoy server and private registry server) have the same output.

If they are the same, it is the same workload being deployed on the cluster and so should behave the same. It won't matter which machine it was deployed from. If they are the same, then that suggests an issue with configuration in your cluster, and is not a Sonobuoy issue.

Yes, i am expecting they should have the same behavior.

If they are different, it might help us understand what the problem is.

@zubron
Contributor

zubron commented Apr 30, 2020

I run sonobuoy gen default-image-config and both servers (sonobuoy server and private registry server) have the same output.

Can you confirm that the output of sonobuoy gen (with no other arguments) is the same on both machines? When you run that, you will see the Kubernetes manifest that sonobuoy creates.

If you look at the namespaces that are created, the one with the label is created as part of the e2e tests. The other is the namespace created by Sonobuoy. If you look in the private-registry-server logs, there are two namespaces created.

@rplanteras
Author

rplanteras commented Apr 30, 2020

I run sonobuoy gen default-image-config and both servers (sonobuoy server and private registry server) have the same output.

Can you confirm that the output of sonobuoy gen (no other arguments) is the same on both machines? When you run that you will see the kubernetes manifest that sonobuoy creates.

If you look at the namespaces that are created, the one with the label is one that is created as part of the e2e tests. The other is the namespaces created by Sonobuoy. If you look in the private-registry-server logs, there are two namespaces created.

It was a mistake, @zubron. I copied the wrong line in the logs. Sorry for that. As for the output of sonobuoy gen on both servers, they have the same output except for the config.json.

@zubron
Contributor

zubron commented Apr 30, 2020

they have the same output except for the config.json.

How do the contents of config.json differ between the two outputs?

@rplanteras
Author

they have the same output except for the config.json.

How do the contents of config.json differ between the two outputs?

It's only the UUID.
Now I am experiencing the error on both servers (private registry and sonobuoy server).
I did not change any configuration; I just keep trying to execute sonobuoy run. :(

@rplanteras
Author

It's normal, right, that when executing sonobuoy run, it will pull the conformance and sonobuoy images (in quick mode) on the Kubernetes master node?

@zubron
Contributor

zubron commented Apr 30, 2020

The default image pull policy is IfNotPresent, so if those images are not on the node where sonobuoy is running, then yes, they will be pulled.

If you are experiencing the error with both servers, when it previously worked with one, then this seems to be an issue with your cluster configuration rather than Sonobuoy, sorry :(
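For air-gapped use (the scenario from #1028 that started this thread), the images can be staged ahead of time. Recent Sonobuoy versions provide an `images` subcommand that lists the images a run needs; the exact flags vary by version, so check `sonobuoy images --help`. The registry name below is a placeholder, and the `echo` prefix makes this a dry-run sketch rather than a real push:

```shell
# List the images a run would need (flags vary by Sonobuoy version):
sonobuoy images --kubeconfig "$HOME/bin/config" > images.txt \
  || echo "sonobuoy not available from this shell"
# Dry-run: show how each image would be mirrored into a private registry.
REGISTRY="my-private-registry:5000"   # placeholder registry
while read -r img; do
  echo docker pull "$img"
  echo docker tag "$img" "$REGISTRY/${img##*/}"
  echo docker push "$REGISTRY/${img##*/}"
done < images.txt
```

Remove the `echo` prefixes to actually pull, retag, and push once the list looks right.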

@zubron
Contributor

zubron commented Apr 30, 2020

Are you having issues running any other workloads on your cluster? It might be better to work with something simpler to debug the networking issue.

@rplanteras
Author

For my Kubernetes cluster, I just set up the environment based on some resources from the internet, just for testing. I have not tested the Kubernetes cluster with other workloads. Below are some details of my Kubernetes cluster.

[root@master-node ~]# kubectl get pods -A
NAMESPACE                                                     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system                                                   coredns-66bff467f8-r7nqd              1/1     Running   2          2d
kube-system                                                   coredns-66bff467f8-srpmb              1/1     Running   2          2d
kube-system                                                   etcd-master-node                      1/1     Running   2          2d
kube-system                                                   kube-apiserver-master-node            1/1     Running   51         2d
kube-system                                                   kube-controller-manager-master-node   1/1     Running   3          2d
kube-system                                                   kube-proxy-75nlb                      1/1     Running   2          47h
kube-system                                                   kube-proxy-9q24v                      1/1     Running   2          47h
kube-system                                                   kube-proxy-pm7vm                      1/1     Running   2          2d
kube-system                                                   kube-scheduler-master-node            1/1     Running   3          2d
kube-system                                                   weave-net-84dqb                       2/2     Running   8          2d
kube-system                                                   weave-net-9lspj                       2/2     Running   5          47h
kube-system                                                   weave-net-n6zcx                       2/2     Running   5          47h

[root@master-node ~]# kubectl get namespaces
NAME                                                          STATUS   AGE
default                                                       Active   2d
kube-node-lease                                               Active   2d
kube-public                                                   Active   2d
kube-system                                                   Active   2d
[root@master-node ~]# 

@rplanteras
Author


The default image pull policy is IfNotPresent so if those images are not on the node where sonobuoy is running, then yes they will be pulled.

In my understanding, "the node where sonobuoy is running" refers to the master node of the k8s cluster, right?

@zubron
Contributor

zubron commented Apr 30, 2020

No, Sonobuoy runs as a pod, so by default it can run on any node in the cluster where pods can be scheduled. Some of the plugins Sonobuoy runs may run on specific nodes, but that has to be configured.

@rplanteras
Author

Hello @zubron, do you think running in an IPv6 environment affects sonobuoy?

@zubron
Contributor

zubron commented May 6, 2020

Hi @rplanteras. I'm not aware of any testing that has been done with an IPv6 environment so unfortunately I don't know.

Sonobuoy uses the Kubernetes client-go package for loading the kubeconfig and creating the client to communicate with the API server, so it should hopefully be able to take advantage of the IPv6 support there.

@rplanteras
Author

@zubron

I'm sorry, this might be a stupid question, but I'm not sure about the source code details. Is the "kubernetes client-go package" included in the sonobuoy image? I know client-go is part of Kubernetes, but will sonobuoy use it? Basically, I have an air-gapped environment.

@rplanteras
Author

@zubron

Is it the sonobuoy pod that accesses the Kubernetes API, or the sonobuoy application (the executable file downloaded to execute sonobuoy commands)?

@rplanteras
Author

rplanteras commented May 7, 2020

The "could not get api group resources" error occurs at the very early part of the Run function:

  1. https://github.com/vmware-tanzu/sonobuoy/blob/master/pkg/discovery/discovery.go#L65
  2. return nil, errors.Wrap(err, "could not get api group resources")

@zubron
Contributor

zubron commented May 7, 2020

Hi @rplanteras!

Yes, client-go is the Golang library provided by the Kubernetes project for interacting with Kubernetes clusters. That's what we use in Sonobuoy for interacting with the cluster that it's running on. It will use it to create the pods and other resources on the cluster, and then update those resources during the run.

Both the CLI application that you run and the pod will make use of this library.

This is the same error that you were originally seeing. Are you seeing it when using the CLI or in the pod logs?

@rplanteras
Author

rplanteras commented May 7, 2020

I use the following commands on the sonobuoy server.

b. sonobuoy logs (on the sonobuoy server)

[root@uhn7klrc6rbmb001 ~]# sonobuoy logs --kubeconfig /root/bin/config -n sonobuoy-test
namespace="sonobuoy-test" pod="sonobuoy" container="kube-sonobuoy"
time="2020-05-01T06:27:40Z" level=info msg="Scanning plugins in ./plugins.d (pwd: /)"
time="2020-05-01T06:27:40Z" level=info msg="Scanning plugins in /etc/sonobuoy/plugins.d (pwd: /)"
time="2020-05-01T06:27:40Z" level=info msg="Directory (/etc/sonobuoy/plugins.d) does not exist"
time="2020-05-01T06:27:40Z" level=info msg="Scanning plugins in ~/sonobuoy/plugins.d (pwd: /)"
time="2020-05-01T06:27:40Z" level=info msg="Directory (~/sonobuoy/plugins.d) does not exist"
time="2020-05-01T06:28:10Z" level=error msg="could not get api group resources: Get https://[240b:c0e0:101:5dc0:b464:2:0:8001]:443/api?timeout=32s: dial tcp [240b:c0e0:101:5dc0:b464:2:0:8001]:443: i/o timeout"
time="2020-05-01T06:28:10Z" level=info msg="no-exit was specified, sonobuoy is now blocking"
[root@uhn7klrc6rbmb001 ~]#

c. sonobuoy status (on the sonobuoy server)

[root@uhn7klrc6rbmb001 ~]# sonobuoy status --kubeconfig /root/bin/config -n sonobuoy-test
ERRO[0000] error attempting to run sonobuoy: missing status annotation "sonobuoy.hept.io/status"
[root@uhn7klrc6rbmb001 ~]#
[root@uhn7klrc6rbms001 ~]# kubectl logs sonobuoy -n sonobuoy
time="2020-04-23T10:20:46Z" level=info msg="Scanning plugins in ./plugins.d (pwd: /)"
time="2020-04-23T10:20:46Z" level=info msg="Scanning plugins in /etc/sonobuoy/plugins.d (pwd: /)"
time="2020-04-23T10:20:46Z" level=info msg="Directory (/etc/sonobuoy/plugins.d) does not exist"
time="2020-04-23T10:20:46Z" level=info msg="Scanning plugins in ~/sonobuoy/plugins.d (pwd: /)"
time="2020-04-23T10:20:46Z" level=info msg="Directory (~/sonobuoy/plugins.d) does not exist"
time="2020-04-23T10:21:16Z" level=error msg="could not get api group resources: Get https://[240b:c0e0:101:5dc0:b464:2:0:8001]:443/api?timeout=32s: dial tcp [240b:c0e0:101:5dc0:b464:2:0:8001]:443: i/o timeout"
time="2020-04-23T10:21:16Z" level=info msg="no-exit was specified, sonobuoy is now blocking"

@rplanteras
Author

With this error, "time="2020-05-01T06:28:10Z" level=error msg="could not get api group resources: Get https://[240b:c0e0:101:5dc0:b464:2:0:8001]:443/api?timeout=32s: dial tcp [240b:c0e0:101:5dc0:b464:2:0:8001]:443: i/o timeout"", I would like to determine who tried to get the API resources: the sonobuoy CLI execution or the sonobuoy pod? Based on the logs shown above, the pod logs also show the error, meaning the pod tried to access the API server. I don't quite get the flow of execution of the sonobuoy run command.

@zubron
Contributor

zubron commented May 7, 2020

When you run sonobuoy logs it is retrieving the logs for any pods that belong to it.

With the error you are seeing, that is coming from the sonobuoy pod.

When you use sonobuoy run, it creates a manifest with the resources to create on the cluster, and uses a client (created using the Kubernetes client-go library) to deploy that manifest and create the resources on the cluster. You can see this manifest that will be used by using the sonobuoy gen command.

Part of this manifest is to create the main sonobuoy pod. The command run on the sonobuoy pod also creates a client using the client-go library to interact with the cluster. It needs this client to start the plugins (creating Pods or DaemonSets), and also perform actions against the resources in the Sonobuoy namespace such as querying data about the pods, and adding labels and annotations to the pods.
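The "could not get api group resources" step corresponds to the client's discovery request against the `/api` endpoint of the API server. The same request can be made manually from either machine to see which side of the network fails; a sketch, assuming the kubeconfig path from this thread:

```shell
# Roughly the same discovery request the client-go library makes on startup:
kubectl get --raw /api --kubeconfig "$HOME/bin/config" \
  || echo "discovery request failed (cluster unreachable from this machine)"
```

Success from the CLI machine but a timeout in the pod logs would mean the pod's network path to the API server is the broken one.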

@rplanteras
Author

creates a client using the client-go library to interact with the cluster
The client-go library is part of Kubernetes, right? I would like to confirm: where will sonobuoy get this library, or is it included in the sonobuoy image?

@zubron
Contributor

zubron commented May 7, 2020

It is a library provided by the Kubernetes project: https://github.com/kubernetes/client-go

It is a Golang library, and we use it in the sonobuoy project. The sonobuoy executable which we build (used both as the CLI tool and in the sonobuoy image) uses this library, and it is compiled into that executable.

@rplanteras
Author

rplanteras commented May 7, 2020

When you use sonobuoy run, it creates a manifest with the resources to create on the cluster, and uses a client (created using the Kubernetes client-go library) to deploy that manifest and create the resources on the cluster.

Is my understanding correct that the sonobuoy pod is included in the resources that will be created in the cluster? Also, in my case, the sonobuoy pod was created but was not able to create the necessary resources such as the "sonobuoy.hept.io/status" annotation. Is that correct?

You can see this manifest that will be used by using the sonobuoy gen command.

Yes, I tried generating it and got the manifest.

@stale

stale bot commented Nov 4, 2020

There has not been much activity here. We'll be closing this issue if there are no follow-ups within 15 days.

@stale stale bot added the misc/wontfix label Nov 4, 2020
@stale stale bot closed this as completed Nov 19, 2020