
Add community information and fix broken links (#382)
* Add community information

Signed-off-by: Sivanantham Chinnaiyan <[email protected]>

* Fix broken links

Signed-off-by: Sivanantham Chinnaiyan <[email protected]>

* Update docs/community/get_involved.md

Co-authored-by: Dan Sun <[email protected]>
Signed-off-by: Sivanantham <[email protected]>

* Update docs/community/get_involved.md

Co-authored-by: Dan Sun <[email protected]>
Signed-off-by: Sivanantham <[email protected]>

* Update docs/community/get_involved.md

Co-authored-by: Dan Sun <[email protected]>
Signed-off-by: Sivanantham <[email protected]>

* Update docs/community/get_involved.md

Co-authored-by: Dan Sun <[email protected]>
Signed-off-by: Sivanantham <[email protected]>

* Update docs/community/get_involved.md

Co-authored-by: Dan Sun <[email protected]>
Signed-off-by: Sivanantham <[email protected]>

---------

Signed-off-by: Sivanantham Chinnaiyan <[email protected]>
Signed-off-by: Sivanantham <[email protected]>
Co-authored-by: Dan Sun <[email protected]>
sivanantha321 and yuzisun authored Jul 27, 2024
1 parent 3fa6794 commit f991e85
Showing 14 changed files with 100 additions and 18 deletions.
81 changes: 81 additions & 0 deletions docs/community/get_involved.md
@@ -0,0 +1,81 @@
# How to Get Involved

Welcome to the KServe community!


Feel free to ask questions, engage in discussions, or get involved in KServe's development. KServe, as an open-source project, thrives on the active participation of its community. Let's work together to make machine learning model serving effortless. Join us!

## How do you want to get involved?

### Ask Questions

For the fastest response, you can ask questions in the `#kserve` channel of the [CNCF Slack](https://slack.cncf.io/).
To join the channel, [create your CNCF Slack account](https://slack.cncf.io/) and search for the `#kserve` channel, or join directly via [this link](https://cloud-native.slack.com/archives/C06AH2C3K8B).

If you prefer to use GitHub discussions, you can join the [KServe discussions](https://github.com/kserve/kserve/discussions).

### Bug Reports and Feature Requests

We use GitHub Issues to track bug reports and feature requests. Please file your issues and feature requests in the [KServe main repository](https://github.com/kserve/kserve/issues/new/choose).

For documentation-related issues, please use the [KServe website repository](https://github.com/kserve/website/issues/new/choose).

For Open Inference Protocol (V2) related issues and feature requests, please use the [Open Inference Protocol repository](https://github.com/kserve/open-inference-protocol/issues/new).

A good bug report should include:

- Description: Clearly state what you were trying to accomplish and what behavior you observed instead
- Versions: Specify the versions of relevant components
  - KServe version
  - Knative version (if using serverless mode)
  - Kubeflow version (if used with Kubeflow)
  - Kubernetes version
- Cloud provider details (if using a cloud provider, indicate which one)
- Relevant resource YAML, HTTP requests, or log lines

### Vulnerability Reports

We strongly encourage you to report security vulnerabilities privately, before disclosing them in any public forums. Only active maintainers and KServe security group members receive reported security vulnerabilities, and these issues are treated as top priority.

You can report security vulnerabilities privately in either of the following ways:

- Using our private security mailing list: [[email protected]](mailto:[email protected]).
- Using the [KServe repository GitHub Security Advisory](https://github.com/kserve/kserve/security/advisories/new).

### Become a Contributor

This is the place to start your journey as a contributor, whether you want to enhance code or improve documentation. KServe welcomes your contribution!

If you're interested in becoming a KServe contributor, you'll want to check out
our [developer guide](../developer/developer.md).


### Communication Channels

Much of the community meets on [the CNCF Slack](https://slack.cncf.io/), using the following channels:

* [#kserve](https://cloud-native.slack.com/archives/C06AH2C3K8B): General discussion about KServe usage
* [#kserve-contributors](https://cloud-native.slack.com/archives/C06KZRPSDS7): General discussion channel for folks contributing to the KServe project in any capacity
* [#kserve-oip-collaboration](https://cloud-native.slack.com/archives/C06P4SYCNRX): Discussion area for Open Inference Protocol and API standardization


### Community Meetings

We hold public biweekly KServe WG community meetings on Wednesdays at 9 AM US/Pacific, and a public monthly Open Inference Protocol WG meeting on Wednesdays at 10 AM US/Pacific.

KServe WG meeting agendas and notes can be accessed in the [working group document](https://docs.google.com/document/d/1KZUURwr9MnHXqHA08TFbfVbM8EAJSJjmaMhnvstvi-k).
Open Inference Protocol WG meeting minutes from the monthly working group sessions can be accessed in the [working group document](https://docs.google.com/document/d/1f21bja1ejHPrZRmY5ke0UxKVD26j0VntJxx0qGN3fKE).


You can access the meeting recordings on [the community calendar](https://zoom-lfx.platform.linuxfoundation.org/meetings/kserve?view=month) by clicking on the respective date's event details.

Stay tuned for new events by subscribing to the
[community calendar](https://zoom-lfx.platform.linuxfoundation.org/meetings/kserve?view=month) ([iCal export file](https://webcal.prod.itx.linuxfoundation.org/lfx/a092M00001LkOceQAF)).

<iframe src="https://zoom-lfx.platform.linuxfoundation.org/meetings/kserve?view=month" style="border: 0" width="100%" height="800" frameborder="0"></iframe>





2 changes: 1 addition & 1 deletion docs/get_started/swagger_ui.md
@@ -15,7 +15,7 @@ Swagger UI allows visualizing and interacting with the KServe InferenceService API

Currently, `POST` requests only work for `v2` endpoints in the UI.

- To enable, simply add an extra argument to the InferenceService YAML example from [First Inference](../first_isvc) chapter:
+ To enable, simply add an extra argument to the InferenceService YAML example from the [First Inference](first_isvc.md) chapter:

```bash hl_lines="9"
kubectl apply -n kserve-test -f - <<EOF
# … (remainder of the manifest elided in this diff view)
```
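For orientation, the full manifest truncated above follows the getting-started sklearn example. This is a sketch, assuming the `--enable_docs_url=True` server argument and example names from the surrounding docs rather than content shown in this diff:

```bash
kubectl apply -n kserve-test -f - <<EOF
apiVersion: "serving.kserve.io/v1beta1"
kind: "InferenceService"
metadata:
  name: "sklearn-iris"
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn
      storageUri: "gs://kfserving-examples/models/sklearn/1.0/model"
      args:
        # Extra argument that exposes the Swagger UI at /docs
        - --enable_docs_url=True
EOF
```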
2 changes: 1 addition & 1 deletion docs/help/contributor/github.md
@@ -108,7 +108,7 @@ Here's what generally happens after you send the PR for review:
`lgtm` label.
- The
- [KServe technical writers](/OWNERS)
+ [KServe technical writers](../../../OWNERS)
are who provide the `approved` label when the content meets quality,
clarity, and organization standards (see [Style Guide](./../style-guide/style-and-formatting.md)).
6 changes: 3 additions & 3 deletions docs/modelserving/storage/oci.md
@@ -1,6 +1,6 @@
# Serving models with OCI images

- KServe's traditional approach for model initialization involves fetching models from sources like [S3 buckets](../s3/s3.md) or [URIs](../uri/uri.md). This process is adequate for small models but becomes a bottleneck for larger ones like used for large language models, significantly delaying startup times in auto-scaling scenarios.
+ KServe's traditional approach for model initialization involves fetching models from sources like [S3 buckets](./s3/s3.md) or [URIs](./uri/uri.md). This process is adequate for small models but becomes a bottleneck for larger ones, such as those used for large language models, significantly delaying startup times in auto-scaling scenarios.

"Modelcars" is a KServe feature designed to address these challenges. It streamlines model fetching using OCI images, offering several advantages:

@@ -94,7 +94,7 @@ This means the image would be re-downloaded every time a Pod restarts or scales
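Before trying the example below, note that Modelcars is not enabled by default. A minimal enablement sketch, assuming the standard `storageInitializer` entry of the `inferenceservice-config` ConfigMap (this step is not part of the diff shown here):

```bash
# Turn on Modelcars by setting enableModelcar to true in the
# storageInitializer entry of the inferenceservice-config ConfigMap.
# Note: this merge patch replaces the whole storageInitializer value,
# so preserve its other fields when editing a real cluster.
kubectl patch configmap inferenceservice-config -n kserve --type merge \
  -p '{"data": {"storageInitializer": "{\"enableModelcar\": true}"}}'
```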
## Example
- Let's see how modecars work by deploying the [getting started example](../../../../get_started/first_isvc/) by using an OCI image and check how it is different to the startup with a storage-initalizer init-container.
+ Let's see how modelcars work by deploying the [getting started example](../../get_started/first_isvc.md) using an OCI image, and check how startup differs from the storage-initializer init-container approach.

Assuming you have set up a namespace `kserve-test` that is KServe-enabled, create an `InferenceService` that uses an `oci://` storage URL:

@@ -113,7 +113,7 @@ spec:
```bash
# … (manifest body elided in this diff view)
EOF
```
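Reconstructed for readability, the elided manifest has the usual getting-started shape with an `oci://` storage URI; the image reference below is an illustrative placeholder, not the value from the truncated diff lines:

```bash
kubectl apply -n kserve-test -f - <<EOF
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn
      # Illustrative OCI image reference containing the model
      storageUri: oci://example.registry.io/sklearn-iris:latest
EOF
```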

- After the `InferenceService` has been deployed successfully, you can follow the [steps of the getting started example](../../../../get_started/first_isvc/) to verify the installation.
+ After the `InferenceService` has been deployed successfully, you can follow the [steps of the getting started example](../../get_started/first_isvc.md) to verify the installation.

Finally, let's take a brief look under the covers at how this feature works.
Let's first check the runtime pod:
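The elided inspection commands look roughly like the following sketch (the label selector and namespace are assumptions); with Modelcars, the predictor pod runs a `modelcar` sidecar next to `kserve-container` instead of pulling the model in a storage-initializer init container:

```bash
# Print the container names of the predictor pod; expect a "modelcar"
# sidecar to be listed alongside kserve-container.
kubectl get pods -n kserve-test \
  -l serving.kserve.io/inferenceservice=sklearn-iris \
  -o jsonpath='{.items[0].spec.containers[*].name}'
```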
2 changes: 1 addition & 1 deletion docs/modelserving/storage/storagecontainers.md
@@ -115,4 +115,4 @@ In this specific example the `model-registry://iris/v1` model is referring to a

## Spec Attributes

- Spec attributes are in [API Reference](/website/reference/api/#serving.kserve.io/v1alpha1.ClusterStorageContainer) doc.
+ Spec attributes are documented in the [API Reference](../../reference/api.md#serving.kserve.io/v1alpha1.ClusterStorageContainer) doc.
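For orientation, a minimal `ClusterStorageContainer` sketch matching the v1alpha1 shape referenced above; the image and the `model-registry://` prefix are illustrative, keyed to the example in this file:

```bash
kubectl apply -f - <<EOF
apiVersion: serving.kserve.io/v1alpha1
kind: ClusterStorageContainer
metadata:
  name: model-registry
spec:
  container:
    name: storage-initializer
    # Illustrative custom storage-initializer image
    image: example.registry.io/model-registry-storage-initializer:latest
    resources:
      requests:
        memory: 100Mi
        cpu: 100m
      limits:
        memory: 1Gi
        cpu: "1"
  supportedUriFormats:
    - prefix: model-registry://
EOF
```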
2 changes: 1 addition & 1 deletion docs/modelserving/v1beta1/lightgbm/README.md
@@ -224,7 +224,7 @@ You can see an example payload below. Create a file named `iris-input-v2.json` with the sample input.

Now, assuming that your ingress can be accessed at
- `${INGRESS_HOST}:${INGRESS_PORT}` or you can follow [this instruction](/docs/get_started/first_isvc.md#4-determine-the-ingress-ip-and-ports)
+ `${INGRESS_HOST}:${INGRESS_PORT}` or you can follow [this instruction](../../../get_started/first_isvc.md#4-determine-the-ingress-ip-and-ports)
to find out your ingress IP and port.

You can use `curl` to send the inference request as:
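The elided request looks roughly like the following sketch; the model name `lightgbm-iris` and the hostname lookup are assumptions based on the surrounding example:

```bash
SERVICE_HOSTNAME=$(kubectl get inferenceservice lightgbm-iris -o jsonpath='{.status.url}' | cut -d "/" -f 3)
# Send the Open Inference Protocol (v2) request with the payload created above.
curl -v -H "Host: ${SERVICE_HOSTNAME}" -H "Content-Type: application/json" \
  "http://${INGRESS_HOST}:${INGRESS_PORT}/v2/models/lightgbm-iris/infer" \
  -d @./iris-input-v2.json
```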
4 changes: 2 additions & 2 deletions docs/modelserving/v1beta1/llm/huggingface/fill_mask/README.md
@@ -46,7 +46,7 @@ kubectl get inferenceservices huggingface-bert

### Perform Model Inference

- The first step is to [determine the ingress IP and ports](../../../../get_started/first_isvc.md#4-determine-the-ingress-ip-and-ports) and set `INGRESS_HOST` and `INGRESS_PORT`.
+ The first step is to [determine the ingress IP and ports](../../../../../get_started/first_isvc.md#4-determine-the-ingress-ip-and-ports) and set `INGRESS_HOST` and `INGRESS_PORT`.
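For reference, a common way to set these two variables is to read them off the ingress gateway service; this sketch assumes an Istio ingress gateway in the `istio-system` namespace (the linked guide covers other setups):

```bash
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
```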

```bash
MODEL_NAME=bert
# … (remainder elided in this diff view)
```

@@ -112,7 +112,7 @@ kubectl get inferenceservices huggingface-bert

### Perform Model Inference

- The first step is to [determine the ingress IP and ports](../../../../get_started/first_isvc.md#4-determine-the-ingress-ip-and-ports) and set `INGRESS_HOST` and `INGRESS_PORT`.
+ The first step is to [determine the ingress IP and ports](../../../../../get_started/first_isvc.md#4-determine-the-ingress-ip-and-ports) and set `INGRESS_HOST` and `INGRESS_PORT`.

```bash
MODEL_NAME=bert
# … (remainder elided in this diff view)
```
@@ -56,7 +56,7 @@ kubectl get inferenceservices huggingface-t5

### Perform Model Inference

- The first step is to [determine the ingress IP and ports](../../../../get_started/first_isvc.md#4-determine-the-ingress-ip-and-ports) and set `INGRESS_HOST` and `INGRESS_PORT`.
+ The first step is to [determine the ingress IP and ports](../../../../../get_started/first_isvc.md#4-determine-the-ingress-ip-and-ports) and set `INGRESS_HOST` and `INGRESS_PORT`.

```bash
SERVICE_HOSTNAME=$(kubectl get inferenceservice huggingface-t5 -o jsonpath='{.status.url}' | cut -d "/" -f 3)
# … (remainder elided in this diff view)
```
@@ -46,7 +46,7 @@ kubectl get inferenceservices huggingface-distilbert

### Perform Model Inference

- The first step is to [determine the ingress IP and ports](../../../../get_started/first_isvc.md#4-determine-the-ingress-ip-and-ports) and set `INGRESS_HOST` and `INGRESS_PORT`.
+ The first step is to [determine the ingress IP and ports](../../../../../get_started/first_isvc.md#4-determine-the-ingress-ip-and-ports) and set `INGRESS_HOST` and `INGRESS_PORT`.

```bash
MODEL_NAME=distilbert
# … (remainder elided in this diff view)
```

@@ -112,7 +112,7 @@ kubectl get inferenceservices huggingface-distilbert

### Perform Model Inference

- The first step is to [determine the ingress IP and ports](../../../../get_started/first_isvc.md#4-determine-the-ingress-ip-and-ports) and set `INGRESS_HOST` and `INGRESS_PORT`.
+ The first step is to [determine the ingress IP and ports](../../../../../get_started/first_isvc.md#4-determine-the-ingress-ip-and-ports) and set `INGRESS_HOST` and `INGRESS_PORT`.

```bash
MODEL_NAME=distilbert
# … (remainder elided in this diff view)
```
@@ -48,7 +48,7 @@ kubectl get inferenceservices huggingface-llama3

### Perform Model Inference

- The first step is to [determine the ingress IP and ports](../../../../get_started/first_isvc.md#4-determine-the-ingress-ip-and-ports) and set `INGRESS_HOST` and `INGRESS_PORT`.
+ The first step is to [determine the ingress IP and ports](../../../../../get_started/first_isvc.md#4-determine-the-ingress-ip-and-ports) and set `INGRESS_HOST` and `INGRESS_PORT`.

```bash
MODEL_NAME=llama3
# … (remainder elided in this diff view)
```

@@ -193,7 +193,7 @@ kubectl get inferenceservices huggingface-llama3

### Perform Model Inference

- The first step is to [determine the ingress IP and ports](../../../../get_started/first_isvc.md#4-determine-the-ingress-ip-and-ports) and set `INGRESS_HOST` and `INGRESS_PORT`.
+ The first step is to [determine the ingress IP and ports](../../../../../get_started/first_isvc.md#4-determine-the-ingress-ip-and-ports) and set `INGRESS_HOST` and `INGRESS_PORT`.

```bash
MODEL_NAME=llama3
# … (remainder elided in this diff view)
```
@@ -47,7 +47,7 @@ kubectl get inferenceservices huggingface-bert

### Perform Model Inference

- The first step is to [determine the ingress IP and ports](../../../../get_started/first_isvc.md#4-determine-the-ingress-ip-and-ports) and set `INGRESS_HOST` and `INGRESS_PORT`.
+ The first step is to [determine the ingress IP and ports](../../../../../get_started/first_isvc.md#4-determine-the-ingress-ip-and-ports) and set `INGRESS_HOST` and `INGRESS_PORT`.

```bash
MODEL_NAME=bert
# … (remainder elided in this diff view)
```

@@ -114,7 +114,7 @@ kubectl get inferenceservices huggingface-bert

### Perform Model Inference

- The first step is to [determine the ingress IP and ports](../../../../get_started/first_isvc.md#4-determine-the-ingress-ip-and-ports) and set `INGRESS_HOST` and `INGRESS_PORT`.
+ The first step is to [determine the ingress IP and ports](../../../../../get_started/first_isvc.md#4-determine-the-ingress-ip-and-ports) and set `INGRESS_HOST` and `INGRESS_PORT`.

```bash
MODEL_NAME=bert
# … (remainder elided in this diff view)
```
2 changes: 1 addition & 1 deletion docs/modelserving/v1beta1/pmml/README.md
@@ -48,7 +48,7 @@ kubectl apply -f pmml.yaml


### Run a prediction
- The first step is to [determine the ingress IP and ports](/docs/get_started/first_isvc.md#4-determine-the-ingress-ip-and-ports) and set `INGRESS_HOST` and `INGRESS_PORT`.
+ The first step is to [determine the ingress IP and ports](../../../get_started/first_isvc.md#4-determine-the-ingress-ip-and-ports) and set `INGRESS_HOST` and `INGRESS_PORT`.

You can see an example payload below. Create a file named `iris-input.json` with the sample input.
```json
… (payload elided in this diff view)
```
@@ -97,7 +97,7 @@ if __name__ == "__main__":

### Configuring Logger for Serving Runtime
KServe allows users to override the default logger configuration of the serving runtime and uvicorn server.
- You can follow the [logger configuration documentation](../../custom/custom_model/#configuring-logger-for-serving-runtime) to configure the logger.
+ You can follow the [logger configuration documentation](../../custom/custom_model/README.md#configuring-logger-for-serving-runtime) to configure the logger.

### Build Transformer docker image
Under the `kserve/python` directory, build the transformer docker image using the [Dockerfile](https://github.com/kserve/kserve/blob/release-0.11/python/custom_transformer.Dockerfile)
1 change: 1 addition & 0 deletions mkdocs.yml
@@ -118,6 +118,7 @@ nav:
  - Articles:
      - KFserving Transition: blog/articles/2021-09-27-kfserving-transition.md
  - Community:
+     - How to Get Involved: community/get_involved.md
      - Adopters: community/adopters.md
      - Demos and Presentations: community/presentations.md

