Spin In Pods (Legacy)
- Spin In Pods (Legacy)
- Next Steps
- Setup Azure AKS for Spin
- Setup Docker Desktop for Spin
- Setup K8s for Spin
- Setup Generic Kubernetes for Spin
- Run a Spin Workload on Kubernetes
Spin In Pods (Legacy)
Warning: this is the legacy experience for running Spin on Kubernetes. For the best experience, please visit SpinKube.
For Kubernetes to run Spin workloads, it needs to be taught about a new runtime class. This is done with a shim for containerd, which compiles to a binary that must be placed on each Kubernetes node that will host Spin pods. That binary then needs to be registered with Kubernetes as a new RuntimeClass. After that, wasm containers can be deployed to Kubernetes using the legacy Spin k8s plugin.
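On the containerd side, registering the shim amounts to adding a runtime entry to config.toml. A sketch of the fragment (the CRI plugin path follows the containerd 1.x convention, and the handler name spin is an assumption matching the RuntimeClass examples later in this document):

```toml
# /etc/containerd/config.toml (fragment) -- registers the Spin shim binary
# (containerd-shim-spin-v1 on the node's PATH) under the handler name "spin".
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.spin]
  runtime_type = "io.containerd.spin.v1"
```

After editing config.toml, containerd must be restarted for the new runtime to take effect.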
Next Steps
- Use the appropriate guide below to configure your Kubernetes cluster for Spin apps.
- Follow the instructions in the “Run a Spin Workload on Kubernetes” guide below.
Warning: This is legacy content. For a better experience, please visit SpinKube.
Setup Azure AKS for Spin
Introduction
Azure AKS provides a straightforward, officially documented way to run Spin workloads on AKS.
Known Limitations
- Node pools default to a maximum of 100 pods; this can be raised to as many as 250 pods when a node pool is configured.
- Each Pod will constantly run its own HTTP listener, which adds overhead compared to Fermyon Cloud.
- You can run containers and wasm modules on the same node, but you can’t run containers and wasm modules on the same pod.
- The WASM/WASI node pools can’t be used as the system node pool.
- The os-type for WASM/WASI node pools must be Linux.
- You can’t use the Azure portal to create WASM/WASI node pools.
- AKS uses an older version of Spin for the shim, so you will need to change spin_manifest_version to spin_version in spin.toml if you are using a template-generated project from the Spin CLI.
- The RuntimeClass wasmtime-spin-v0-5-1 on Azure maps to Spin v1.0.0, and the wasmtime-spin-v1 RuntimeClass uses an older shim corresponding to Spin v0.3.0.
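For example, a manifest generated by a newer Spin CLI template would need its version key renamed. A sketch (the application name and other values are illustrative):

```toml
# spin.toml -- older shims expect spin_version rather than spin_manifest_version
spin_version = "1"            # was: spin_manifest_version = "1"
name = "hello-aks"            # hypothetical application name
version = "0.1.0"
trigger = { type = "http", base = "/" }
```

The rest of the manifest (component sections and so on) is unchanged.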
Note for Rust
In Cargo.toml, the spin-sdk dependency should be downgraded to v1.0.0-rc.1 to match the older Spin version running in the AKS shim.
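As a sketch, the dependency declaration in Cargo.toml would look something like this (the git-tag form shown is an assumption typical of Spin templates of that era; adjust to however your project declares spin-sdk):

```toml
[dependencies]
# Pinned to the older SDK release matching the AKS shim (see the note above).
spin-sdk = { git = "https://github.com/fermyon/spin", tag = "v1.0.0-rc.1" }
```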
Setup
To get Spin working on an AKS cluster, a few setup steps are required. First, add the aks-preview extension:
$ az extension add --name aks-preview
Next, update the extension to the latest version:
$ az extension update --name aks-preview
Register the WasmNodePoolPreview feature:
$ az feature register --namespace "Microsoft.ContainerService" --name "WasmNodePoolPreview"
This will take a few minutes to complete. You can verify that it has finished when this command returns Registered:
$ az feature show --namespace "Microsoft.ContainerService" --name "WasmNodePoolPreview"
Finally, refresh the registration of the ContainerService provider:
$ az provider register --namespace Microsoft.ContainerService
Once the service is registered, the next step is to add a Wasm/WASI nodepool to an existing AKS cluster. If a cluster doesn’t already exist, follow Azure’s documentation to create a new cluster:
$ az aks nodepool add \
--resource-group myResourceGroup \
--cluster-name myAKSCluster \
--name mywasipool \
--node-count 1 \
--node-vm-size Standard_B2s \
--workload-runtime WasmWasi
You can verify the workloadRuntime using the following command:
$ az aks nodepool show -g myResourceGroup --cluster-name myAKSCluster -n mywasipool --query workloadRuntime
The next set of commands uses kubectl to verify the RuntimeClass that AKS creates. If you don’t already have kubectl configured with the appropriate credentials, you can retrieve them with this command:
$ az aks get-credentials -n myAKSCluster -g myResourceGroup
Find the name of a node in the Wasm/WASI node pool:
$ kubectl get nodes -o wide
Then retrieve detailed information on the appropriate node and verify that among its labels is “kubernetes.azure.com/wasmtime-spin-v1=true”:
$ kubectl describe node aks-mywasipool-12456878-vmss000000
Find the wasmtime-spin-v1 RuntimeClass created by AKS:
$ kubectl describe runtimeclass wasmtime-spin-v1
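A workload then opts into the Wasm node pool by naming that RuntimeClass. A minimal sketch (the pod and image names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spin-hello                     # hypothetical name
spec:
  runtimeClassName: wasmtime-spin-v1   # must match the RuntimeClass created by AKS
  containers:
    - name: spin-hello
      image: myregistry/spin-hello:0.1.0   # hypothetical image
      command: ["/"]
```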
Warning: This is legacy content. For a better experience, please visit SpinKube.
Setup Docker Desktop for Spin
Introduction
Docker Desktop provides both an easy way to run Spin apps in containers directly and its own Kubernetes option.
Known Limitations
- Each Pod will constantly run its own HTTP listener, which adds overhead compared to Fermyon Cloud.
- You can run containers and wasm modules on the same node, but you can’t run containers and wasm modules on the same pod.
- The Kubernetes commands are only required if you want to use the Kubernetes instance included as an experimental feature of Docker Desktop. The instructions for using Spin with Docker Desktop directly are at the bottom of this section.
Setup
Install the appropriate preview version of the Docker Desktop + Wasm Technical Preview 2 from here. Then enable containerd for Docker in Settings → Experimental → Use containerd for pulling and storing images.
Next, enable Kubernetes under Settings → Experimental → Enable Kubernetes, then hit “Apply & Restart”.
Create a file wasm-runtimeclass.yaml and populate it with the following information:
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: "wasmtime-spin-v1"
handler: "spin"
Then register the runtime class with the cluster:
$ kubectl apply -f wasm-runtimeclass.yaml
Using Docker Desktop With Spin
Docker Desktop can be used with Spin as a Kubernetes target per the instructions later in this document. However, Docker can also run the containers directly with the following command:
$ docker run --runtime=io.containerd.spin.v1 --platform=wasi/wasm -p <port>:<port> <image>:<version>
If there is no command specified in the Dockerfile, one will need to be passed on the command line. Since Spin doesn’t need this, “/” can be passed.
Warning: This is legacy content. For a better experience, please visit SpinKube.
Setup K8s for Spin
Introduction
K3d is a lightweight Kubernetes distribution that runs in Docker.
Known Limitations
- Each Pod will constantly run its own HTTP listener, which adds overhead compared to Fermyon Cloud.
- You can run containers and wasm modules on the same node, but you can’t run containers and wasm modules on the same pod.
Setup
Ensure both Docker and k3d are installed. Then enable containerd for Docker in Settings → Experimental → Use containerd for pulling and storing images.
Deis Labs provides a preconfigured K3d environment that can be run using this command:
$ k3d cluster create wasm-cluster --image ghcr.io/deislabs/containerd-wasm-shims/examples/k3d:v0.10.0 -p "8081:80@loadbalancer" --agents 2 --registry-create mycluster-registry:12345
Create a file wasm-runtimeclass.yaml and populate it with the following information:
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: "wasmtime-spin"
handler: "spin"
Then register the runtime class with the cluster:
$ kubectl apply -f wasm-runtimeclass.yaml
Warning: This is legacy content. For a better experience, please visit SpinKube.
Setup Generic Kubernetes for Spin
Introduction
These instructions are provided for a self-managed or other Kubernetes service that isn’t documented elsewhere.
Known Limitations
- Each Pod will constantly run its own HTTP listener, which adds overhead compared to Fermyon Cloud.
- You can run containers and wasm modules on the same node, but you can’t run containers and wasm modules on the same pod.
Setup
We provide the spin-containerd-shim-installer Helm chart, which offers an automated way to install and configure the containerd shim for Fermyon Spin in Kubernetes. Please see the README in the installer repository for more information.
The version of the container image and Helm chart corresponds directly to the version of the containerd shim. We recommend selecting the shim version that matches the version of Spin that you use for your application(s). For reference, here is the version matrix between Spin and the containerd shim.
| Spin | containerd-shim-spin |
|---|---|
| v2.0.1 | v0.10.0 |
| v1.4.1 | v0.9.0 |
| v1.4.0 | v0.8.0 |
| v1.3.0 | v0.7.0 |
| v1.1.0 | v0.6.0 |
| v1.0.0 | v0.5.1 |
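For scripting around upgrades, the matrix can be encoded in a small helper. A sketch in shell (the function name is ours; the versions are copied from the table):

```shell
#!/bin/sh
# Map a Spin release to the matching containerd-shim-spin release,
# per the version matrix above.
shim_for_spin() {
  case "$1" in
    v2.0.1) echo "v0.10.0" ;;
    v1.4.1) echo "v0.9.0" ;;
    v1.4.0) echo "v0.8.0" ;;
    v1.3.0) echo "v0.7.0" ;;
    v1.1.0) echo "v0.6.0" ;;
    v1.0.0) echo "v0.5.1" ;;
    *)      echo "unknown" ;;
  esac
}

shim_for_spin v2.0.1   # prints v0.10.0
```

The returned value can then be passed as the --version flag of the helm install command shown below.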
There are several values you may need to configure based on your Kubernetes environment. The installer needs to add a binary to the node’s PATH and edit containerd’s config.toml. The defaults we set match containerd’s own defaults and should work for most Kubernetes environments, but you may need to adjust them if your distribution uses non-default paths.
| Name | Default | Description |
|---|---|---|
| installer.hostEtcContainerdPath | /etc/containerd | Directory where containerd’s config.toml is located |
| installer.hostBinPath | /usr/local/bin | Directory where the shim binary should be installed (must be on PATH) |
NOTE: Because it is difficult to cover all Kubernetes environments, there are no default values for node selectors or tolerations, but they are configurable through values. We recommend configuring sensible defaults for your environment:
$ helm install fermyon-spin oci://ghcr.io/fermyon/charts/spin-containerd-shim-installer --version 0.8.0
Run a Spin Workload on Kubernetes
Introduction
This guide demonstrates the commands to run a Spin workload in Kubernetes. It should apply to all Kubernetes variants which have been properly configured using one of the Setup guides.
Concepts
Spin apps are bundled using the OCI format. These packages include the spin.toml file and the wasm and static files it references. While Kubernetes also uses OCI repositories, it expects the package to be in a container format.
The containerd shim for Spin allows Kubernetes to appropriately schedule Spin applications as long as they have been wrapped in a lightweight “scratch” container.
Once the Spin app container has been created, it can be pushed to an OCI repository and then deployed to an appropriately configured Kubernetes cluster.
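Conceptually, the “scratch” wrapper is a Dockerfile along these lines (a sketch only; the wasm path assumes a typical Rust-based Spin project, and the file names are illustrative):

```dockerfile
# Wrap a Spin app in a minimal container so Kubernetes can schedule it.
FROM scratch
COPY ./spin.toml ./spin.toml
COPY ./target/wasm32-wasi/release/app.wasm ./target/wasm32-wasi/release/app.wasm
```

The shim reads spin.toml from the image at startup, so the paths inside the container must match those referenced by the manifest.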
Requirements
The current Spin k8s plugin relies on Docker and kubectl under the hood. Both tools must be installed, the Docker service must be running, and the KUBECONFIG environment variable must point to a kubeconfig file for the desired Kubernetes cluster.
Install Plugin
To install this plugin, run:
$ spin plugin install -u https://raw.githubusercontent.com/chrismatteson/spin-plugin-k8s/main/k8s.json
Warning: This is legacy content. For a better experience, please visit SpinKube; in particular, see the SpinKube documentation that introduces the new spin kube command.
Workflow
The workflow is very similar to the normal Kubernetes workflow: build and test your application locally, push to a registry, and update the deployment.
If you are using Spin 1.x, or your shim uses Spin 1.x (shim version prior to 0.10.0), follow the Spin 1.x Kubernetes documentation. The workflow here applies only to Spin 2 with shim 0.10.0 and above.
Please see the shim documentation for a full tutorial on deploying Spin 2 applications.
Detailed Explanation of Steps
spin new
An optional command to use a template to create a new Spin app:
$ spin new
spin build
The following command builds a Spin app:
$ spin build
Create the Deployment Manifest
To create a deployment manifest (deploy.yaml), you can either:
- Run spin k8s scaffold.
- Copy and modify the deployment manifest below.
The deploy.yaml file defines the deployment to the Kubernetes server. As you iterate with new versions of the registry image, you’ll need to update the deployment manifest to match.
In the deploy.yaml, the runtimeClassName must be defined as wasmtime-spin. It’s critical that this is the exact name used when setting up the Kubernetes cluster. spin k8s scaffold does this, and the example manifest below contains the correct name.
1. Creating a deployment manifest with the k8s plugin
The following command creates a deployment manifest (deploy.yaml) for a Spin app to run on Kubernetes. Scaffold takes a namespace as a mandatory argument. This can either be a username if using Docker Hub, or the entire address if using a separate registry such as ghcr.io:
$ spin k8s scaffold
spin k8s scaffold also creates a Dockerfile. Spin 2.x users do not need the Dockerfile and can safely delete it. (The Dockerfile is created for Spin 1.x deployments.)
2. Copy and modify an existing deployment manifest
You can find a sample manifest in the shim documentation or below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      runtimeClassName: wasmtime-spin
      containers:
        - name: test
          image: chrismatteson/test:0.1.5
          command: ["/"]
---
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: test
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: traefik
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: test
                port:
                  number: 80
spin registry push
The following command pushes the Spin application to the appropriate repository:
$ spin registry push chrismatteson/test:0.1.5
This requires Spin 2 or above. Spin 1.x uses a slightly different registry format, which is not compatible with the Kubernetes shim. If you are on Spin 1.x, follow the Spin 1.x Kubernetes documentation instead.
Deploy the Application
The following command deploys the application to Kubernetes:
$ kubectl apply -f deploy.yaml
If you are using the k8s plugin, you can run spin k8s deploy.
It may take a few seconds for your application to be ready for use.
spin k8s getsvc
The following command retrieves information about the service that gets deployed (such as its external IP):
$ spin k8s getsvc