
Updated wasm docs to include wasm workers server #3728

Merged — 1 commit merged into kubernetes-sigs:main on Jul 18, 2023

Conversation

@ogghead (Contributor) commented Jul 15, 2023

What type of PR is this?

/kind documentation

What this PR does / why we need it:

  1. Updates documentation for WASM runtimes to include Wasm Workers Server
  2. Adds more explicit instructions for initial setup

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
N/A

Special notes for your reviewer:

TODOs:

  • squashed commits
  • includes documentation
  • adds unit tests

Release note:

Updated wasm docs to include wasm workers server
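
For context, workloads like the ones discussed in this review run on a containerd Wasm shim that is exposed to Kubernetes through a RuntimeClass. A minimal sketch of a Wasm Workers Server deployment follows; the RuntimeClass name, handler, image, and target port are illustrative assumptions, not taken from the docs in this PR:

```yaml
# Sketch only: names, handler, image, and ports are assumed.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-wws        # hypothetical RuntimeClass name
handler: wws                # assumed containerd shim handler for Wasm Workers Server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasm-wws
spec:
  replicas: 1
  selector:
    matchLabels: {app: wasm-wws}
  template:
    metadata:
      labels: {app: wasm-wws}
    spec:
      runtimeClassName: wasmtime-wws   # schedule the pod onto the Wasm shim
      containers:
        - name: wasm-wws
          image: ghcr.io/example/wws-app:latest   # placeholder image
---
apiVersion: v1
kind: Service
metadata:
  name: wasm-wws
spec:
  type: LoadBalancer        # mirrors the wasm-spin LoadBalancer service in this thread
  selector: {app: wasm-wws}
  ports:
    - port: 80
      targetPort: 3000      # the app's listen port is an assumption
```

The pattern is the same as the wasm-spin workload that appears later in this conversation: a RuntimeClass selects the shim, and an ordinary Deployment plus Service front the Wasm module.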

@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. kind/documentation Categorizes issue or PR as related to documentation. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Jul 15, 2023
@k8s-ci-robot (Contributor)

Welcome @ogghead!

It looks like this is your first PR to kubernetes-sigs/cluster-api-provider-azure 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/cluster-api-provider-azure has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Jul 15, 2023
@k8s-ci-robot (Contributor)

Hi @ogghead. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@mboersma (Contributor)

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Jul 15, 2023
@kubernetes-sigs kubernetes-sigs deleted a comment from k8s-ci-robot Jul 17, 2023
@codecov bot commented Jul 17, 2023

Codecov Report

Patch coverage is unchanged; project coverage decreased by 0.02% ⚠️

Comparison is base (c87de30) 54.05% compared to head (ee4ae59) 54.04%.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3728      +/-   ##
==========================================
- Coverage   54.05%   54.04%   -0.02%     
==========================================
  Files         186      186              
  Lines       18833    18833              
==========================================
- Hits        10181    10179       -2     
- Misses       8105     8107       +2     
  Partials      547      547              

see 1 file with indirect coverage changes

☔ View full report in Codecov by Sentry.

@mboersma mboersma requested review from CecileRobertMichon, willie-yao and nawazkh and removed request for mboersma and jackfrancis July 17, 2023 14:24
@nawazkh (Member) commented Jul 17, 2023

Thank you for the changes @ogghead. The changes look good to me; however, I was not able to get any response from curl http://20.237.70.160/hello.

Here are the pods in my workload cluster running Azure CNI v1:

NAMESPACE     NAME                                                             READY   STATUS    RESTARTS   AGE     IP           NODE                                     NOMINATED NODE   READINESS GATES
default       wasm-spin-6696b4b6b9-24lr7                                       1/1     Running   0          16m     10.1.0.183   azure-cni-v1-28265-md-0-zh56t            <none>           <none>
default       wasm-spin-6696b4b6b9-q9lvh                                       1/1     Running   0          6m29s   10.1.0.100   azure-cni-v1-28265-md-0-tgrsd            <none>           <none>
default       wasm-spin-6696b4b6b9-x9srd                                       1/1     Running   0          6m29s   10.1.0.83    azure-cni-v1-28265-md-0-tgrsd            <none>           <none>
kube-system   azure-cni-72hs8                                                  1/1     Running   0          17m     10.0.0.4     azure-cni-v1-28265-control-plane-djxsx   <none>           <none>
kube-system   azure-cni-gqczw                                                  1/1     Running   0          15m     10.1.0.4     azure-cni-v1-28265-md-0-tgrsd            <none>           <none>
kube-system   azure-cni-zcjnf                                                  1/1     Running   0          15m     10.1.0.114   azure-cni-v1-28265-md-0-zh56t            <none>           <none>
kube-system   cloud-controller-manager-6b699d655f-8sppk                        1/1     Running   0          17m     10.0.0.4     azure-cni-v1-28265-control-plane-djxsx   <none>           <none>
kube-system   cloud-node-manager-7f9t6                                         1/1     Running   0          15m     10.1.0.114   azure-cni-v1-28265-md-0-zh56t            <none>           <none>
kube-system   cloud-node-manager-hxq5p                                         1/1     Running   0          17m     10.0.0.4     azure-cni-v1-28265-control-plane-djxsx   <none>           <none>
kube-system   cloud-node-manager-s54bh                                         1/1     Running   0          15m     10.1.0.4     azure-cni-v1-28265-md-0-tgrsd            <none>           <none>
kube-system   coredns-5d78c9869d-tzzhv                                         1/1     Running   0          17m     10.0.0.111   azure-cni-v1-28265-control-plane-djxsx   <none>           <none>
kube-system   coredns-5d78c9869d-vw2pt                                         1/1     Running   0          17m     10.0.0.56    azure-cni-v1-28265-control-plane-djxsx   <none>           <none>
kube-system   etcd-azure-cni-v1-28265-control-plane-djxsx                      1/1     Running   0          17m     10.0.0.4     azure-cni-v1-28265-control-plane-djxsx   <none>           <none>
kube-system   kube-apiserver-azure-cni-v1-28265-control-plane-djxsx            1/1     Running   0          17m     10.0.0.4     azure-cni-v1-28265-control-plane-djxsx   <none>           <none>
kube-system   kube-controller-manager-azure-cni-v1-28265-control-plane-djxsx   1/1     Running   0          17m     10.0.0.4     azure-cni-v1-28265-control-plane-djxsx   <none>           <none>
kube-system   kube-proxy-66dbc                                                 1/1     Running   0          15m     10.1.0.4     azure-cni-v1-28265-md-0-tgrsd            <none>           <none>
kube-system   kube-proxy-dqd4c                                                 1/1     Running   0          15m     10.1.0.114   azure-cni-v1-28265-md-0-zh56t            <none>           <none>
kube-system   kube-proxy-swsk4                                                 1/1     Running   0          17m     10.0.0.4     azure-cni-v1-28265-control-plane-djxsx   <none>           <none>
kube-system   kube-scheduler-azure-cni-v1-28265-control-plane-djxsx            1/1     Running   0          17m     10.0.0.4     azure-cni-v1-28265-control-plane-djxsx   <none>           <none>

and the output of kg svc -A (an alias for kubectl get svc -A):

❯ kg svc -A
NAMESPACE     NAME         TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)                  AGE
default       kubernetes   ClusterIP      10.96.0.1     <none>          443/TCP                  19m
default       wasm-spin    LoadBalancer   10.97.49.15   20.237.70.160   80:32287/TCP             17m
kube-system   kube-dns     ClusterIP      10.96.0.10    <none>          53/UDP,53/TCP,9153/TCP   19m

and curl http://20.237.70.160/hello results in empty output.

❯ curl "http://20.237.70.160/hello"

I am not sure if it is related to your PR, but if you have any clue, I would love to know more :)

@mboersma (Contributor)

@nawazkh I think this is due to our Azure development subscriptions being locked down for security reasons: you can't by default hit port 80 (or do SSH).

@ogghead was able to follow the docs as written, but when I've tested the Wasm shims in CAPZ, I've had to hit the app endpoints from within the cluster to avoid our external port blocking. I tested this again after kubernetes-sigs/image-builder#1220 landed and was able to verify all three Wasm runtimes that way, so I think these docs are ok.
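
The in-cluster verification described above can be sketched roughly as follows; the service name and /hello path come from this thread, while the throwaway pod image and local port are assumptions (these commands require a live cluster with the wasm-spin service deployed):

```shell
# Option 1: curl the service from a throwaway pod inside the cluster,
# sidestepping the subscription's external block on port 80.
kubectl run curl-test --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -s http://wasm-spin.default.svc.cluster.local/hello

# Option 2: tunnel through the API server instead of the load balancer.
kubectl port-forward svc/wasm-spin 8080:80 &
curl -s http://localhost:8080/hello
```

Either route keeps the traffic off the locked-down public endpoint, which is why the docs can still be validated even when curl against the external IP returns nothing.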

@mboersma mboersma added this to the v1.11 milestone Jul 18, 2023
@nawazkh (Member) commented Jul 18, 2023

@nawazkh I think this is due to our Azure development subscriptions being locked down for security reasons: you can't by default hit port 80 (or do SSH).

^ Makes sense. Thanks for the context!

@nawazkh (Member) commented Jul 18, 2023

Looks good to me!
/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jul 18, 2023
@k8s-ci-robot (Contributor)

LGTM label has been added.

Git tree hash: 1b6a62521197496d4852a5685c79a52b232ec97d

@willie-yao (Contributor)

/lgtm

@CecileRobertMichon (Contributor) left a comment

/approve

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: CecileRobertMichon

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jul 18, 2023
@k8s-ci-robot k8s-ci-robot merged commit 3584a22 into kubernetes-sigs:main Jul 18, 2023