Build your first AI app using Serverless AI Inferencing
This tutorial will show you how to use Fermyon Serverless AI to quickly build your first AI-enabled serverless application that can run on Fermyon Cloud. In this tutorial we will:
- Install Spin (and dependencies) on your local machine
- Create a ‘Hello World’ Serverless AI application
- Learn about the Serverless AI SDK (in Rust, TypeScript and Python)
Tutorial Prerequisites
Spin
You will need to install the latest version of Spin. Serverless AI is supported on Spin v1.5 and above. If you already have Spin installed, check which version you are on and upgrade if required.
Dependencies
Rust
The above installation script automatically installs the latest SDK for Rust, which enables Serverless AI functionality.
TypeScript/JavaScript
To enable Serverless AI functionality via TypeScript/JavaScript, please ensure you have the latest TypeScript/JavaScript template installed:
$ spin templates install --git https://github.com/fermyon/spin-js-sdk --upgrade
Python
To enable Serverless AI functionality via Python, please ensure you have the latest Python plugin and template installed:
$ spin plugins update
$ spin plugins install py2wasm
$ spin templates install --git https://github.com/fermyon/spin-python-sdk --upgrade
Licenses
This tutorial uses Meta AI’s Llama 2, Llama Chat and Code Llama models. You will need to visit Meta’s Llama webpage and agree to Meta’s License, Acceptable Use Policy, and Meta’s privacy policy before fetching and using Llama models.
Serverless AI Inferencing With Spin
Now, let’s write your first Serverless AI application with Spin.
Creating a New Spin Application
The Rust code snippets below are taken from the Fermyon Serverless AI Examples.
$ spin new http-rust
Enter a name for your new application: hello-world
Description: My first Serverless AI app
HTTP base: /
HTTP path: /...
The Python code snippets below are taken from the Fermyon Serverless AI Examples.
$ spin new http-py
Enter a name for your new application: hello-world
Description: My first Serverless AI app
HTTP base: /
HTTP path: /...
The TypeScript code snippets below are taken from the Fermyon Serverless AI Examples.
$ spin new http-ts
Enter a name for your new application: hello-world
Description: My first Serverless AI app
HTTP base: /
HTTP path: /...
Configuring Your Application
The spin.toml file is the manifest file which tells Spin what events should trigger what components. Configure the [[component]] section of your application’s manifest, explicitly naming your model of choice. For this example, we specify the llama2-chat value for the ai_models configuration:
ai_models = ["llama2-chat"]
This is what your spin.toml file should look like, based on whether you’re using Rust, TypeScript or Python:
Rust:
spin_manifest_version = "1"
authors = ["Your Name <your-name@example.com>"]
description = ""
name = "hello-world"
trigger = { type = "http", base = "/" }
version = "0.1.0"
[[component]]
id = "hello-world"
source = "target/wasm32-wasi/release/hello-world.wasm"
allowed_http_hosts = []
ai_models = ["llama2-chat"]
key_value_stores = ["default"]
[component.trigger]
route = "/..."
[component.build]
command = "cargo build --target wasm32-wasi --release"
watch = ["src/**/*.rs", "Cargo.toml"]
TypeScript:
spin_manifest_version = "1"
authors = ["Your Name <your-name@example.com>"]
description = ""
name = "hello-world"
trigger = { type = "http", base = "/" }
version = "0.1.0"
[[component]]
id = "hello-world"
source = "target/hello-world.wasm"
exclude_files = ["**/node_modules"]
ai_models = ["llama2-chat"]
[component.trigger]
route = "/..."
[component.build]
command = "npm run build"
watch = ["src/index.ts"]
Python:
spin_manifest_version = "1"
authors = ["Your Name <your-name@example.com>"]
description = ""
name = "hello-world"
trigger = { type = "http", base = "/" }
version = "0.1.0"
[[component]]
id = "hello-world"
source = "target/hello-world.wasm"
exclude_files = ["**/node_modules"]
ai_models = ["llama2-chat"]
[component.trigger]
route = "/..."
[component.build]
command = "spin py2wasm app -o app.wasm"
watch = ["app.py", "Pipfile"]
Source Code
Now let’s use the Spin SDK to access the model from our app. Executing inference from an LLM is a single line of code. Add the Llm and the InferencingModels imports to your app and use Llm.infer to execute an inference. Here’s how the code looks:
Rust:
use anyhow::Result;
use spin_sdk::{
    http::{Request, Response},
    http_component, llm,
};

/// A simple Spin HTTP component that runs an inference on each request.
#[http_component]
fn hello_world(_req: Request) -> Result<Response> {
    // Use the Llama 2 chat model granted in spin.toml (ai_models = ["llama2-chat"])
    let model = llm::InferencingModel::Llama2Chat;
    let inference = llm::infer(model, "Can you tell me a joke about cats");
    Ok(http::Response::builder()
        .status(200)
        .body(Some(format!("{:?}", inference).into()))?)
}
TypeScript:
import { Llm, InferencingModels, HandleRequest, HttpRequest, HttpResponse } from "@fermyon/spin-sdk"

const model = InferencingModels.Llama2Chat

export const handleRequest: HandleRequest = async function (request: HttpRequest): Promise<HttpResponse> {
  const prompt = "Can you tell me a joke about cats"
  const out = Llm.infer(model, prompt)
  return {
    status: 200,
    body: out.text
  }
}
Python:
from spin_http import Response
from spin_llm import llm_infer

def handle_request(request):
    try:
        result = llm_infer("llama2-chat", "Can you tell me a joke about cats")
        return Response(200, {"content-type": "text/plain"}, bytes(result.text, "utf-8"))
    except Exception as e:
        return Response(500, {"content-type": "text/plain"}, bytes(f"Error: {str(e)}", "utf-8"))
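The examples above return the model’s text verbatim. LLM output often arrives with surrounding whitespace or runs longer than you want in an HTTP response, so a small post-processing step can help. The helper below is a hypothetical sketch (not part of any Spin SDK), shown in plain Python:

```python
def tidy_response(text: str, max_chars: int = 500) -> str:
    """Trim whitespace and cap the length of a model response (hypothetical helper)."""
    cleaned = text.strip()
    if len(cleaned) > max_chars:
        # Cut at the limit and mark the truncation
        cleaned = cleaned[:max_chars].rstrip() + "..."
    return cleaned

print(tidy_response("  Why did the cat sit on the computer? To keep an eye on the mouse!  "))
# → Why did the cat sit on the computer? To keep an eye on the mouse!
```

You would call something like this on result.text (or out.text in TypeScript) before building the response body.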
Building and Deploying Your Spin Application
Now that you have written your first Serverless AI app, it’s time to build and deploy it. To build your app run the following commands from inside your app’s folder (where the spin.toml file is located):
Rust:
$ spin build

TypeScript:
$ npm install
$ spin build

Python:
$ spin build
Now that your app is built, there are three ways to test your Serverless AI app. One way is to run inferencing locally, which means running an LLM on your CPU. This is less performant than deploying to Fermyon’s Serverless AI, which uses high-powered GPUs in the cloud. To learn more about this method, including downloading LLMs to your local machine, check out the in-depth tutorial on Building a Sentiment Analysis API using Serverless AI.
Here are the two other methods for testing your app:
Deploy the app to the Fermyon Cloud
You can deploy the app to the cloud by using the spin deploy command. If you have not logged into your account before deploying your application, you will need to grant access via a one-time token; follow the instructions in the prompt to complete the auth process.
Once you have logged in and the deployment succeeds, you will see a URL. The app is now deployed and can be accessed by anyone with the URL:
$ spin deploy
Uploading hello-world version 0.1.0+ra01f74e2...
Deploying...
Waiting for application to become ready...... ready
Available Routes:
hello-world: https://hello-world-XXXXXX.fermyon.app (wildcard)
The app’s manifest file contains the line ai_models = ["llama2-chat"] and uses that model in the cloud. For any changes to take effect in the app, it needs to be re-deployed to the cloud.
Using the Cloud-GPU plugin to test locally
To avoid having to deploy the app for every change, you can use the Cloud-GPU plugin to run the app locally, with the LLM running in the cloud. While the app is hosted locally (running on localhost), every inferencing request is sent to the LLM running in the cloud. Follow the steps below to use the cloud-gpu plugin.
Note: This plugin works only with Spin v1.5.1 and above.
First, install the plugin using the command:
$ spin plugins install -u https://github.com/fermyon/spin-cloud-gpu/releases/download/canary/cloud-gpu.json -y
Let’s initialize the plugin. This command essentially deploys the Spin app to a Cloud GPU proxy and generates a runtime-config:
$ spin cloud-gpu init
[llm_compute]
type = "remote_http"
url = "https://fermyon-cloud-gpu-<AUTO_GENERATED_STRING>.fermyon.app"
auth_token = "<AUTO_GENERATED_TOKEN>"
In the root of your Spin app directory, create a file named runtime-config.toml and paste the runtime-config generated in the previous step.
Now you are ready to test the Serverless AI app locally, using a GPU that is running in the cloud. To run the app locally you can use spin up (or spin watch) with the following flag:
$ spin up --runtime-config-file <path/to/runtime-config.toml>
Logging component stdio to ".spin/logs/"
Serving http://127.0.0.1:3000
Available Routes:
hello-world: http://127.0.0.1:3000 (wildcard)
Next Steps
This was just a small example of what Serverless AI Inferencing can do. To continue exploring:
- Read our in-depth tutorial on building a Sentiment Analysis API with Serverless AI
- Look at the Serverless AI API Guide
- Try the numerous Serverless AI examples in our GitHub repository called ai-examples.
- Contribute your Serverless AI app to our Spin Hub.
- Ask questions and share your thoughts in our Discord community.