OpenFaaS
OpenFaaS is a platform for serverless functions that makes it simple to deploy both functions and existing code to Kubernetes with a unified experience.
Alternative To
- AWS Lambda
- Google Cloud Functions
- Azure Functions
- Knative
Difficulty Level
Requires some technical experience. Moderate setup complexity.
Overview
OpenFaaS (Functions as a Service) is an open-source serverless platform that makes it simple to deploy event-driven functions and microservices to Kubernetes without repetitive, boilerplate coding. It simplifies the development, deployment, and scaling of serverless functions across different environments, whether on-premises or in the cloud. OpenFaaS abstracts away the complexity of Kubernetes while providing a developer-friendly experience through its CLI tool, templates, and API.
The project’s tagline, “Serverless Functions, Made Simple,” encapsulates its core mission: to provide a straightforward way to deploy functions that can run anywhere with the same unified experience. OpenFaaS allows developers to package functions as portable OCI (Open Container Initiative) images, making them highly portable across different environments and cloud providers.
Key Features
Feature | Description |
---|---|
Language Agnostic | Write functions in any language (Go, Python, Node.js, Java, C#, Ruby, PHP, etc.) or bring existing microservices |
Portable Deployment | Deploy functions on-premises or in the cloud with portable OCI images |
Auto-scaling | Scale functions automatically to meet demand, and down to zero when idle (with Pro version) |
Templates | Pre-built templates for rapid development in various languages |
Event-driven Architecture | Trigger functions through events from Apache Kafka, AWS SQS, PostgreSQL, Cron, MQTT, and more |
Kubernetes Integration | Seamlessly runs on Kubernetes, enriching it with scaling, queueing, monitoring, and event triggers |
Unified Experience | Consistent developer experience regardless of the underlying infrastructure |
Function Builder API | Turn source code into functions via REST API (Enterprise version) |
Multi-tenancy Support | Isolation through Kubernetes network policies, resource limits, and dedicated namespaces per tenant |
Monitoring & Observability | Built-in Prometheus metrics and Grafana dashboards |
Technical Details
OpenFaaS consists of several core components that work together to provide its serverless capabilities:
- Gateway: The central component that provides the API for function deployment, invocation, and scaling
- Function Watchdog: A tiny HTTP server that wraps functions, enabling them to respond to HTTP requests
- CLI (faas-cli): Command-line interface for building, deploying, and managing functions
- Function Templates: Pre-built templates for various programming languages
- Provider Interface: Abstraction layer that allows OpenFaaS to run on different backends
OpenFaaS follows a microservices architecture and is built primarily in Go. It uses Docker for containerization and can be deployed on Kubernetes or with faasd (a lightweight alternative to Kubernetes).
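The watchdog's role can be illustrated with a short sketch. This is a conceptual model of the classic "one process per request" mode, not the actual of-watchdog implementation: the request body is piped to the function process on stdin, and whatever the process writes to stdout becomes the HTTP response body.

```python
# Conceptual sketch of the classic watchdog model (not the real of-watchdog):
# pipe the request body to the function process's stdin, and treat its
# stdout as the HTTP response body.
import subprocess

def invoke_fprocess(fprocess, body: bytes) -> bytes:
    """Run the function process once, passing the request body via STDIO."""
    result = subprocess.run(fprocess, input=body, capture_output=True, check=True)
    return result.stdout

# 'cat' simply echoes stdin, behaving like a minimal echo function.
print(invoke_fprocess(["cat"], b"hello"))  # b'hello'
```

The real of-watchdog also supports an HTTP mode (used by templates such as python3-http), where a long-lived process serves requests instead of forking per invocation.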
Versions and Editions
OpenFaaS is available in multiple editions:
Edition | Target Use Case | Key Features |
---|---|---|
Community Edition (CE) | Proof of Concept, experimentation, limited internal use | Core functionality, community support |
OpenFaaS Pro (Standard) | Production environments | Flexible auto-scaling, event-connectors, monitoring dashboards, direct engineering support |
OpenFaaS Enterprise | Multi-tenant production environments | Function Builder API, advanced multi-tenancy, dedicated support |
Why Use OpenFaaS
Compared to Cloud Provider Functions (AWS Lambda, Azure Functions, etc.)
- Avoid Vendor Lock-in: OpenFaaS functions are portable across different environments and cloud providers
- Consistent Experience: Same development and deployment experience regardless of where functions run
- Cost Control: Run on your own infrastructure without unpredictable cloud billing
- Data Privacy: Keep sensitive data within your own infrastructure
- Customization: More flexibility to customize the runtime environment
Compared to Raw Kubernetes
- Developer Experience: Abstracts away Kubernetes complexity with a simple CLI and API
- Productivity: Ship functionality to production within hours instead of days or weeks
- Built-in Features: Scaling, queueing, monitoring, and event triggers without additional configuration
- Function Templates: Reduce boilerplate code with pre-built templates
Compared to Other Serverless Frameworks
- Language Agnostic: Support for any programming language
- Microservices Support: Deploy both functions and existing microservices
- Mature Project: Trusted in production by companies like T-Mobile, LivePerson, and Cognite
- Active Community: Regular updates, extensive documentation, and community support
Installation Guide
There are multiple ways to install and run OpenFaaS. We’ll cover the two most common approaches: deploying on Kubernetes with arkade and using faasd for a lightweight installation.
Prerequisites
- Docker
- Kubernetes cluster (for Kubernetes deployment) or a Linux VM (for faasd)
- kubectl (for Kubernetes deployment)
Installing OpenFaaS on Kubernetes with arkade
arkade is a Kubernetes marketplace that makes it easy to install OpenFaaS and other apps.
- Install arkade:
```bash
# Download and install arkade
curl -SLsf https://get.arkade.dev/ | sudo sh
```
- Install OpenFaaS:
```bash
# Create namespaces for the OpenFaaS core services and for functions
kubectl create namespace openfaas
kubectl create namespace openfaas-fn

# Install OpenFaaS using arkade
arkade install openfaas
```
- Get the OpenFaaS gateway URL and login credentials:
```bash
# Forward the gateway to your local machine
kubectl port-forward -n openfaas svc/gateway 8080:8080 &

# Get the generated admin password
PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode)
echo $PASSWORD

# Log in to the OpenFaaS gateway
export OPENFAAS_URL=http://127.0.0.1:8080
echo -n $PASSWORD | faas-cli login --username admin --password-stdin
```
Installing faasd (OpenFaaS without Kubernetes)
faasd is a lightweight alternative to Kubernetes for running OpenFaaS.
- Install faasd on a Linux VM:
```bash
# Clone and install faasd
git clone https://github.com/openfaas/faasd
cd faasd
./hack/install.sh
```
- Get the login credentials:
```bash
# Get the generated admin password
sudo cat /var/lib/faasd/secrets/basic-auth-password

# Set the OpenFaaS URL
export OPENFAAS_URL=http://localhost:8080
```
Installing the faas-cli
The faas-cli is the command-line interface for interacting with OpenFaaS.
```bash
# Install the faas-cli
curl -sL https://cli.openfaas.com | sudo sh
```
Practical Exercise: Creating and Deploying a Function
Let’s create a simple Python function that returns a greeting message.
Step 1: Create a new function from a template
```bash
# List available templates
faas-cli template store list

# Pull the Python HTTP template
faas-cli template store pull python3-http

# Create a new function
faas-cli new --lang python3-http hello-python
```
This creates a new directory structure:
```
hello-python/
├── hello-python.yml
└── hello-python
    └── handler.py
```
Step 2: Modify the function code
Edit the hello-python/handler.py file:

```python
def handle(event, context):
    name = event.body.decode('utf-8') if event.body else "World"
    return {
        "statusCode": 200,
        "body": f"Hello, {name}! Welcome to OpenFaaS!"
    }
```
Step 3: Build, push, and deploy the function
First, edit the hello-python.yml file to specify your Docker Hub username (or other registry):

```yaml
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  hello-python:
    lang: python3-http
    handler: ./hello-python
    image: yourusername/hello-python:latest
```
Now build, push, and deploy the function:
```bash
# Build the function
faas-cli build -f hello-python.yml

# Push the function to Docker Hub (or your registry)
faas-cli push -f hello-python.yml

# Deploy the function
faas-cli deploy -f hello-python.yml
```
Step 4: Invoke the function
```bash
# Invoke the function with curl
curl -X POST http://127.0.0.1:8080/function/hello-python -d "OpenFaaS User"

# Or use the faas-cli
echo "OpenFaaS User" | faas-cli invoke hello-python
```
You should see the output: Hello, OpenFaaS User! Welcome to OpenFaaS!
Step 5: Create a function with environment variables
Let’s create another function that uses environment variables:
```bash
# Create a new function
faas-cli new --lang python3-http env-example
```
Edit the env-example/handler.py file:

```python
import os

def handle(event, context):
    environment = os.environ.get("ENVIRONMENT", "development")
    return {
        "statusCode": 200,
        "body": f"Running in {environment} environment"
    }
```
Edit the env-example.yml file to add environment variables:

```yaml
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  env-example:
    lang: python3-http
    handler: ./env-example
    image: yourusername/env-example:latest
    environment:
      ENVIRONMENT: production
```
Build, push, and deploy the function:
```bash
faas-cli build -f env-example.yml
faas-cli push -f env-example.yml
faas-cli deploy -f env-example.yml
```
Invoke the function:
```bash
curl http://127.0.0.1:8080/function/env-example
```
You should see the output: Running in production environment
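The default-versus-configured behaviour can also be checked locally before deploying. Setting the variable in the local process loosely mimics what the environment: map in the stack file does inside the function's container:

```python
import os

os.environ.pop("ENVIRONMENT", None)  # start from a clean slate for the demo

# Same handler logic as env-example/handler.py above.
def handle(event, context):
    environment = os.environ.get("ENVIRONMENT", "development")
    return {
        "statusCode": 200,
        "body": f"Running in {environment} environment"
    }

print(handle(None, None)["body"])  # Running in development environment

# The YAML environment: map sets this variable inside the container.
os.environ["ENVIRONMENT"] = "production"
print(handle(None, None)["body"])  # Running in production environment
```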
Step 6: Create an asynchronous function
OpenFaaS supports asynchronous function invocation, which is useful for long-running tasks.
Create a new function:
```bash
faas-cli new --lang python3-http async-task
```
Edit the async-task/handler.py file:

```python
import time

def handle(event, context):
    # Simulate a long-running task
    time.sleep(5)
    return {
        "statusCode": 200,
        "body": "Long-running task completed!"
    }
```
Build, push, and deploy the function:
```bash
faas-cli build -f async-task.yml
faas-cli push -f async-task.yml
faas-cli deploy -f async-task.yml
```
Invoke the function asynchronously:
```bash
# -i shows the response headers, including X-Call-Id
curl -i -X POST http://127.0.0.1:8080/async-function/async-task
```
This returns immediately with a 202 Accepted response and an X-Call-Id header identifying the invocation, while the function continues to execute in the background. If you pass an X-Callback-Url header with the request, the gateway will POST the function's result to that URL once it completes.
Advanced Usage
Connecting to External Services
OpenFaaS functions can connect to external services like databases. Here’s an example of a function that connects to a PostgreSQL database:
```python
import os
import psycopg2

def handle(event, context):
    # Get database connection details from environment variables
    host = os.environ.get("DB_HOST")
    port = os.environ.get("DB_PORT", "5432")
    user = os.environ.get("DB_USER")
    password = os.environ.get("DB_PASSWORD")
    dbname = os.environ.get("DB_NAME")

    try:
        # Connect to the database
        conn = psycopg2.connect(
            host=host,
            port=port,
            user=user,
            password=password,
            dbname=dbname
        )

        # Execute a query and fetch the result
        cur = conn.cursor()
        cur.execute("SELECT version();")
        version = cur.fetchone()[0]

        # Close the cursor and connection
        cur.close()
        conn.close()

        return {
            "statusCode": 200,
            "body": f"Database version: {version}"
        }
    except Exception as e:
        return {
            "statusCode": 500,
            "body": f"Error: {str(e)}"
        }
```
To use this function, you would need to add the PostgreSQL client library to your function’s requirements and set the environment variables in your YAML file.
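For credentials such as DB_PASSWORD, OpenFaaS also supports secrets, which are mounted as files under /var/openfaas/secrets/&lt;name&gt; inside the function container (and listed under a secrets: key for the function in the stack file). A small helper keeps the handler tidy; the fallback to an environment variable below is a convention of our own, not OpenFaaS behaviour:

```python
import os

def read_secret(name, base="/var/openfaas/secrets"):
    """Return an OpenFaaS secret mounted as a file, or fall back to an
    equivalently named environment variable (our own convention)."""
    try:
        with open(os.path.join(base, name)) as f:
            return f.read().strip()
    except FileNotFoundError:
        # e.g. secret "db-password" falls back to env var DB_PASSWORD
        return os.environ.get(name.upper().replace("-", "_"))

# With no mounted secret file present, the env-var fallback is used.
os.environ["DB_PASSWORD"] = "s3cret"
print(read_secret("db-password"))  # s3cret
```

Secrets are preferable to plain environment variables for sensitive values, since they are not exposed in the function's deployment spec.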
Setting Up Auto-scaling
OpenFaaS Pro provides advanced auto-scaling capabilities. Here’s how to configure auto-scaling for a function:
```yaml
version: 1.0
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080
functions:
  auto-scale-example:
    lang: python3-http
    handler: ./auto-scale-example
    image: yourusername/auto-scale-example:latest
    labels:
      com.openfaas.scale.min: "1"
      com.openfaas.scale.max: "10"
      com.openfaas.scale.target: "50"
      com.openfaas.scale.type: "cpu"
```
This configuration scales on CPU usage, with a minimum of 1 replica, a maximum of 10 replicas, and a per-replica target load of 50.
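As a rough mental model of target-based scaling (a simplification of ours, not the documented OpenFaaS Pro algorithm), the autoscaler aims to keep the per-replica load near the target, clamped between the configured minimum and maximum:

```python
import math

def desired_replicas(total_load, target_per_replica, min_replicas=1, max_replicas=10):
    """Clamp ceil(load / target) into the configured [min, max] range.
    Simplified sketch; the real autoscaler also smooths and rate-limits."""
    wanted = math.ceil(total_load / target_per_replica)
    return max(min_replicas, min(max_replicas, wanted))

print(desired_replicas(350, 50))   # 7: seven replicas keep per-replica load near 50
print(desired_replicas(2000, 50))  # 10: capped at com.openfaas.scale.max
```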
Creating a Custom Template
If the provided templates don’t meet your needs, you can create a custom template:
- Create a template directory structure:
```
template/
├── template.yml
└── my-custom-template/
    ├── Dockerfile
    ├── function/
    │   └── handler.py
    └── index.py
```
- Define the template in template.yml:

```yaml
language: my-custom-template
fprocess: python index.py
welcome_message: |
  You're using a custom Python template.
  To use this template specify --lang my-custom-template when creating a function.
```
- Create the Dockerfile:
```dockerfile
FROM python:3.9-alpine

RUN apk --no-cache add curl \
    && echo "Pulling watchdog binary from GitHub." \
    && curl -sSLf https://github.com/openfaas/of-watchdog/releases/download/0.8.4/of-watchdog > /usr/bin/fwatchdog \
    && chmod +x /usr/bin/fwatchdog

WORKDIR /home/app

COPY index.py .
COPY function function
RUN pip install --no-cache-dir -r function/requirements.txt

ENV fprocess="python index.py"
ENV mode="http"
ENV upstream_url="http://127.0.0.1:8000"

CMD ["fwatchdog"]
```
- Create the index.py file:

```python
from flask import Flask, request
import function.handler as handler

app = Flask(__name__)

@app.route("/", methods=["POST", "GET"])
def main_route():
    # Pass the raw request body to the user's handler
    data = request.get_data()
    return handler.handle(data)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```
- Create a sample handler in function/handler.py:

```python
def handle(data):
    return f"Processed data: {data.decode('utf-8')}"
```
- Add your custom template to the CLI:
```bash
faas-cli template pull file:///path/to/template
```
Now you can create functions using your custom template:
```bash
faas-cli new --lang my-custom-template my-function
```
Resources
Related Projects
- faasd - OpenFaaS without Kubernetes
- arkade - Kubernetes marketplace
- nats-connector - Connect OpenFaaS to NATS for event-driven functions
Conclusion
OpenFaaS provides a powerful yet simple platform for deploying serverless functions and microservices on Kubernetes. Its focus on developer experience, portability, and flexibility makes it an excellent choice for organizations looking to adopt serverless architecture without being locked into a specific cloud provider.
Whether you’re building a small project or a large-scale production system, OpenFaaS offers the tools and features needed to develop, deploy, and scale your applications efficiently. With its active community and commercial support options, OpenFaaS is well-positioned to meet the needs of both individual developers and enterprise teams.
By following the installation guide and practical exercises in this article, you should now have a good understanding of how to get started with OpenFaaS and how to leverage its capabilities for your serverless projects.