K3s
Lightweight Kubernetes
K3s is a lightweight, certified Kubernetes distribution designed for resource-constrained environments, edge computing, IoT devices, and development scenarios. Created by Rancher Labs (now part of SUSE), it packages everything needed to run Kubernetes into a single binary under 100MB.
This makes K3s significantly lighter than standard Kubernetes while maintaining full compatibility with Kubernetes APIs and features. It requires minimal memory (512MB minimum) and offers simplified operations with reduced dependencies.
K3s comes with built-in components like the Traefik ingress controller, local storage provisioner, and service load balancer. It's perfect for development, CI/CD, edge deployments, and ARM devices, yet remains production-ready with high availability capabilities.

K3s Core Components
Control Plane Components
The control plane includes the API Server (Kubernetes API endpoint for cluster management), Controller Manager (manages core control loops for replication, endpoints, and namespaces), and Scheduler (assigns pods to nodes based on resource availability).
K3s can use either etcd or lightweight SQLite as the datastore for cluster state, making it more flexible than standard Kubernetes.
Node Components
Each node runs the Kubelet agent which manages pod lifecycle. Containerd is built-in as the container runtime for running containers.
Kube-proxy manages network proxying and service networking across the cluster.
Built-in Add-ons
K3s includes Traefik Ingress Controller for routing external HTTP/HTTPS traffic to services. The Local Path Provisioner enables dynamic persistent volume provisioning using local storage.
CoreDNS provides cluster DNS for service discovery. The Service Load Balancer manages LoadBalancer-type services without requiring external cloud provider integrations.
Networking
Flannel serves as the default CNI (Container Network Interface) plugin for pod networking. Network Policies control traffic flow between pods and services for enhanced security.
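On a running cluster you can confirm these built-in components (Traefik, CoreDNS, the local-path provisioner, and the service load balancer) with a quick sketch like the one below; exact pod names vary by K3s version.

```shell
#!/bin/sh
# List the K3s built-in components, which run as pods in kube-system.
show_k3s_addons() {
  kubectl get pods -n kube-system \
    -o custom-columns=NAME:.metadata.name,STATUS:.status.phase
}

# Only query when a cluster is actually reachable.
if command -v kubectl >/dev/null 2>&1; then
  show_k3s_addons
fi
```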

Pentaho Server Pod
Runs Tomcat application server with Pentaho Server 11
Built on Debian Trixie Slim with OpenJDK 21 JRE
Multi-stage Docker build for optimized image size
Exposed internally on port 8080
Includes readiness and liveness probes for health monitoring
Resource limits configured for CPU and memory stability

PostgreSQL Pod
Provides the relational database backend for three critical databases:
Jackrabbit (jcr_user): Java Content Repository storing all Pentaho content (reports, dashboards, data sources, transformations, jobs)
Quartz (pentaho_user): Scheduler managing jobs, triggers, calendars, and execution history
Hibernate (hibuser): Security configuration, audit logging, user sessions, plus two specialized schemas:
pentaho_dilogs: ETL execution logging with job logs, transformation metrics, and step performance data
pentaho_operations_mart: Dimensional data mart for platform analytics with dimension and fact tables
Data persisted through PersistentVolumeClaim to survive restarts
Automated initialization via ConfigMap-mounted SQL scripts


Pentaho Server requires three separate databases, each serving a distinct purpose:
| Database | User | Purpose |
| --- | --- | --- |
| jackrabbit | jcr_user | Java Content Repository (JCR) - Stores all Pentaho content including reports, dashboards, data sources, analysis schemas, and user files. This is the primary content storage for the Pentaho repository. |
| quartz | pentaho_user | Quartz Scheduler - Manages all scheduled jobs, triggers, and calendars. Contains tables for job definitions (QRTZ6_JOB_DETAILS), triggers (QRTZ6_TRIGGERS), execution history, and cluster coordination locks. |
| hibernate | hibuser | Hibernate Repository - Hosts security configuration, audit logging, user session data, and contains two additional schemas: pentaho_dilogs (ETL execution logging) and pentaho_operations_mart (analytics data mart). |
The hibernate database contains specialized schemas for operational monitoring:
pentaho_dilogs: Captures detailed ETL execution information including job logs, transformation logs, step performance metrics, and error records. Essential for debugging data integration workflows and monitoring pipeline health.
pentaho_operations_mart: A dimensional data mart for analytics on Pentaho usage. Contains dimension tables (DIM_DATE, DIM_TIME, DIM_EXECUTOR) and fact tables (FACT_EXECUTION, FACT_STEP_EXECUTION) for analyzing platform utilization, performance trends, and user activity.
For production deployments, implement regular backups of the repository-data Docker volume. The jackrabbit database is the most critical as it contains all user content. Consider using pg_dump for logical backups or volume snapshots for full recovery options.
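The backup advice above can be sketched as a small pg_dump loop. The deployment name (deploy/postgresql) and superuser are assumptions here; adjust them to match your manifests.

```shell
#!/bin/sh
# Sketch: dump each Pentaho database from the PostgreSQL pod via kubectl exec,
# compress with gzip, and timestamp the output file.
backup_db() {
  db="$1"
  out="${db}_$(date +%Y%m%d-%H%M%S).sql.gz"
  kubectl exec -n pentaho deploy/postgresql -- \
    pg_dump -U postgres "$db" | gzip > "$out"
  echo "$out"
}

# Requires a live cluster; skipped otherwise.
if command -v kubectl >/dev/null 2>&1; then
  for db in jackrabbit quartz hibernate; do
    backup_db "$db"
  done
fi
```

Schedule this from cron or a Kubernetes CronJob, and keep copies off the node since the local-path provisioner stores volumes on the node's own filesystem.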
Networking
Internal Communication:
Both pods run within the pentaho namespace
ClusterIP services provide stable internal DNS names
PostgreSQL accessible at postgresql.pentaho.svc.cluster.local:5432
Pentaho Server accessible at pentaho-server.pentaho.svc.cluster.local:8080
External Access:
Traefik Ingress Controller routes external traffic to Pentaho Server
Configurable hostname and path-based routing
Optional TLS/SSL termination support
Tomcat manages connection pools defined in context.xml. Each pool serves a specific purpose:
| JNDI Name | Connection | Purpose |
| --- | --- | --- |
| jdbc/Hibernate | repository:5432/hibernate | Security, Users, Roles |
| jdbc/Quartz | repository:5432/quartz | Job Scheduling |
| jdbc/jackrabbit | repository:5432/jackrabbit | Content Repository |
| jdbc/Audit | repository:5432/hibernate | Audit Logging |
| jdbc/live_logging_info | repository:5432/hibernate | ETL Runtime Logs |
| jdbc/PDI_Operations_Mart | repository:5432/hibernate | Operations Analytics |
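For orientation, one of these context.xml pool definitions has roughly the shape below. The resource name and URL follow the table above; the pool sizes and credentials are placeholders, not the shipped defaults.

```shell
#!/bin/sh
# Print an illustrative Tomcat JNDI Resource entry (DBCP2 attribute names).
print_pool_example() {
  cat <<'EOF'
<Resource name="jdbc/Hibernate" auth="Container"
          type="javax.sql.DataSource"
          driverClassName="org.postgresql.Driver"
          url="jdbc:postgresql://repository:5432/hibernate"
          username="hibuser" password="CHANGE_ME"
          maxTotal="20" maxIdle="5" validationQuery="select 1"/>
EOF
}
print_pool_example
```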
Storage
PersistentVolumeClaims (PVCs):
postgres-pvc: PostgreSQL data directory (/var/lib/postgresql/data)
pentaho-pvc: Pentaho solutions and data directories
Storage Class:
Uses K3s's built-in local-path storage provisioner
Provisions volumes on the node's local filesystem
Automatic volume creation and binding
ConfigMaps:
Database initialization scripts (5 SQL files)
Pentaho configuration settings (JVM parameters, Tomcat settings)
Secrets:
PostgreSQL credentials (postgres_password, pentaho_user, pentaho_password)
JDBC connection strings
Base64-encoded for security
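The base64 values that go into secrets.yaml can be produced as below. Note that base64 is an encoding, not encryption, so access to the manifest itself must still be restricted.

```shell
#!/bin/sh
# Encode a literal value for use in a Kubernetes Secret's data: section.
# printf (not echo) avoids encoding a trailing newline.
encode() { printf '%s' "$1" | base64; }

encode 'postgres'       # example value for postgres_password
encode 'pentaho_user'
```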
Before you begin the K3s deployment, ensure you have completed the Setup: Pentaho Containers section.
Run through the following steps to deploy Pentaho Server on a single-node K3s with PostgreSQL 15 repository.
Prepare Environment
The "Prepare Environment" section outlines the initial setup steps required before deploying Pentaho Server 11 on K3s:
copying the deployment assets to your home directory,
staging the Pentaho Server Enterprise Edition ZIP file, verifying the file is in place,
confirming K3s is properly installed and running
Ensure you have downloaded: pentaho-server-ee-11.0.0.0-237.zip
Create the directory and copy over the assets.
Copy pentaho-server-ee-11.0.0.0-237.zip into the docker-build/stagedArtifacts/ directory.
If you have deployed an Archive Pentaho Server then copy from:
/opt/pentaho/software/pentaho-server-ee-version
Otherwise download package from the Pentaho Customer Portal.
Verify that the file is in place.
Verify the K3s installation.
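The preparation steps above can be sketched as follows. The staging path assumes the docker-build/stagedArtifacts/ layout described later in this document, and the source path under /opt/pentaho/software is only illustrative.

```shell
#!/bin/sh
# Stage the Pentaho package and verify the environment.
PKG=pentaho-server-ee-11.0.0.0-237.zip
STAGE=docker-build/stagedArtifacts

mkdir -p "$STAGE"
# Copy from an existing Archive install if present (path is an example).
if [ -f "/opt/pentaho/software/$PKG" ]; then
  cp "/opt/pentaho/software/$PKG" "$STAGE/"
fi

# Verify the file is in place.
ls -lh "$STAGE/$PKG" 2>/dev/null || echo "Package not staged yet: $STAGE/$PKG"

# Verify the K3s installation (requires a running cluster).
if command -v kubectl >/dev/null 2>&1; then
  kubectl get nodes
fi
```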

Pentaho Server requires a valid license. The .env file contains a LICENSE_URL pointing to the Flexera license server. Ensure your license entitlements are active before deployment.
Without a valid license, Pentaho Server will start but many features will be disabled. Verify your license status before proceeding with production deployments.
Key Differences
| Aspect | Docker Compose | Kubernetes (K3s) |
| --- | --- | --- |
| Configuration | .env file + docker-compose.yml | Kubernetes manifests (YAML) |
| Secrets | Docker secrets or Vault | Kubernetes Secrets |
| Networking | Docker bridge network | K3s cluster network + Traefik Ingress |
| Storage | Docker volumes | PersistentVolumeClaims (PVCs) |
| Scaling | Manual (docker compose up --scale) | Declarative (replicas in deployment) |
| Health Checks | Docker HEALTHCHECK | Kubernetes readiness/liveness probes |
| Init Scripts | Volume mount to /docker-entrypoint-initdb.d | ConfigMap mounted to PostgreSQL pod |
Directory Layout
This K3s deployment configuration provides several important capabilities:
Completely self-contained Kubernetes deployment on lightweight K3s
Automated database initialization with PostgreSQL SQL scripts
Kubernetes-native health checks and startup ordering
Persistent volume claims for database and Pentaho content
Docker image build process with multi-stage optimization
Resource limits (CPU/memory) for stability
Production-ready Kubernetes manifest templates
PostgreSQL JDBC driver included
Easy backup and restore procedures via utility scripts
Ingress configuration for Traefik routing
Root Directory Files
Documentation Files:
README.md - The main entry point documentation providing project overview, quick start instructions, prerequisites, and general usage information for the K3s deployment workshop.
DEPLOYMENT.md - Detailed deployment guide covering the complete K3s deployment workflow, including pre-requisites, step-by-step instructions, and post-deployment verification procedures.
K3s-INSTALLATION.md - K3s setup and installation instructions covering system preparation, K3s installation, networking configuration, and storage class setup required before deploying Pentaho.
Orchestration & Deployment:
deploy.sh - Automated deployment script that orchestrates the complete K3s deployment workflow including namespace creation, secret generation, manifest application, and service readiness checks.
destroy.sh - Cleanup script that safely removes all K3s resources including deployments, services, persistent volume claims, and namespaces, useful for redeployment or teardown scenarios.
Makefile - Contains convenience command targets for common K3s operations like deploying, destroying, checking status, and viewing logs. Users can run make help to see all available commands.
docker-build
The docker-build/ directory contains all components needed to build the Pentaho Server container image that will be deployed to K3s:
Documentation Files:
README.md - Complete build documentation covering the Docker image build process, multi-stage build architecture, configuration options, troubleshooting, and best practices for building Pentaho Server images for K3s deployment.
QUICK-START.md - Quick build guide providing step-by-step instructions for users who want to quickly build and test the Pentaho image without reading the complete documentation. Includes common build commands and typical workflows.
ENV-CONFIGURATION.md - Comprehensive configuration reference guide for the .env.example file, detailing all available environment variables, their purposes, default values, and how they affect the Docker build process and resulting image.
Build & Configuration Files:
build.sh - Automated build wrapper script that validates prerequisites, checks for required files (Pentaho ZIP in stagedArtifacts/), detects plugins automatically (PAZ, PIR, PDD), confirms the build with the user, executes docker build, shows image info after build, and optionally pushes to a registry with the -p flag.
.env.example - Configuration template file containing all available environment variables for the Docker build process. Users copy this to .env and customize values for their specific deployment needs including Pentaho version, image tags, and build options.
Dockerfile - Multi-stage build definition using debian:trixie-slim as the base image with OpenJDK 21 JRE. Creates optimized images by separating the build environment from the runtime environment, reducing final image size while maintaining all necessary Pentaho components. The multi-stage approach minimizes security vulnerabilities and improves build efficiency.
test-compose.yml - Local Docker Compose testing environment that allows you to test the built Docker image locally before deploying to K3s. This is useful for validating configuration changes, testing custom plugins, or debugging startup issues without the overhead of a full K3s deployment.
Container Startup Scripts:
entrypoint/ - Directory containing container initialization and startup scripts:
docker-entrypoint.sh - Primary container startup script that executes when the container starts. Handles environment variable processing, configuration file customization from softwareOverride/, database connection validation, health checks, and orchestrates the Pentaho Server startup sequence.
start-pentaho-docker.sh - Pentaho-specific startup script that manages Tomcat initialization, JVM configuration, memory settings, and starts the Pentaho Server services. This script is called by docker-entrypoint.sh after environment preparation is complete.
Configuration Overlays:
softwareOverride/ - Configuration overlays directory that gets baked into the Docker image during the build process. Files are organized in numbered directories and processed in alphabetical order to ensure proper application sequence:
1_drivers/ - PostgreSQL JDBC driver (included by default) for database connectivity. Additional JDBC drivers or data connectors can be placed here.
2_repository/ - Database connection configurations for all Pentaho repositories including Jackrabbit (JCR), Quartz (scheduler), and Hibernate (security/audit). Contains a README.md explaining the repository configuration files and their purposes.
3_security/ - Empty in this K3s deployment since HashiCorp Vault integration is not used. In production environments, this would contain authentication, authorization, and security configuration files.
4_others/ - Modified Tomcat scripts (startup.sh, setenv.sh), server.xml, web.xml, and other application-level configurations. Contains a README.md documenting the custom Tomcat modifications and their purposes.
Staged Artifacts:
stagedArtifacts/ - Staging directory where users place the Pentaho Server installation package (pentaho-server-ee-11.0.0.0-237.zip) before building the Docker image. Contains a README.md with instructions on where to obtain the Pentaho package and how to stage it properly.
db_init_postgres
The db_init_postgres/ directory contains PostgreSQL initialization scripts that create all required Pentaho repository databases. These scripts are mounted into the PostgreSQL container and execute automatically on first startup:
1_create_jcr_postgresql.sql - Creates the Jackrabbit Content Repository (JCR) database. The JCR stores all Pentaho content including reports, dashboards, data sources, analysis schemas, transformations, jobs, and user files. This is the primary content management system for the Pentaho repository.
2_create_quartz_postgresql.sql - Sets up the Quartz Scheduler database. Quartz manages all scheduled jobs, triggers, and calendars within Pentaho Server, including report generation schedules, ETL job executions, and other automated processes. Contains critical tables like QRTZ6_JOB_DETAILS, QRTZ6_TRIGGERS, and execution history.
3_create_repository_postgresql.sql - Creates the Hibernate Repository database. This stores user authentication data, authorization information, roles, permissions, and other security-related information managed by Pentaho's security subsystem.
4_pentaho_logging_postgresql.sql - Establishes the pentaho_dilogs schema within the Hibernate database for audit and Data Integration (DI) logging. Captures detailed ETL execution information including job logs, transformation logs, step performance metrics, and error records. Essential for debugging data integration workflows and monitoring pipeline health.
5_pentaho_mart_postgresql.sql - Creates the pentaho_operations_mart schema within the Hibernate database. This dimensional data mart stores operational analytics about Pentaho Server usage, including dimension tables (DIM_DATE, DIM_TIME, DIM_EXECUTOR) and fact tables (FACT_EXECUTION, FACT_STEP_EXECUTION) for analyzing platform utilization, performance trends, and user activity patterns.
manifests
The manifests/ directory contains all Kubernetes resource definitions organized by functional area. These YAML files define the declarative state of your K3s deployment:
namespace.yaml - Creates the dedicated pentaho namespace to isolate all Pentaho-related resources from other K3s workloads, providing logical separation and resource organization.
configmaps/ - ConfigMap resources for non-sensitive configuration data:
pentaho-config.yaml - Pentaho Server configuration settings like JVM parameters, Tomcat settings, and application properties
postgres-init-scripts.yaml - ConfigMap containing the five PostgreSQL initialization scripts from the db_init_postgres/ directory, mounted into the PostgreSQL pod
pentaho/ - Pentaho Server Kubernetes resources:
deployment.yaml - Defines the Pentaho Server deployment including container specifications, resource requests/limits, environment variables, volume mounts, readiness/liveness probes, and replica count
service.yaml - ClusterIP service exposing Pentaho Server on port 8080 within the cluster, providing stable internal DNS and load balancing
postgres/ - PostgreSQL database Kubernetes resources:
deployment.yaml - Defines the PostgreSQL 15 deployment with container specifications, persistent volume claims for data storage, initialization script mounting, and database configuration
service.yaml - ClusterIP service exposing PostgreSQL on port 5432 within the cluster for Pentaho Server database connections
secrets/ - Sensitive credential storage:
secrets.yaml - Kubernetes Secret resource containing base64-encoded credentials for PostgreSQL (postgres_password, pentaho_user, pentaho_password) and JDBC connection strings. This file is gitignored for security.
storage/ - Persistent storage definitions:
pvc.yaml - PersistentVolumeClaim definitions for both PostgreSQL data (postgres-pvc) and Pentaho solutions/data (pentaho-pvc), using K3s's local-path storage class for persistent data across pod restarts
ingress/ - External access configuration:
ingress.yaml - Traefik Ingress resource defining external HTTP/HTTPS routing rules to expose Pentaho Server outside the K3s cluster, including hostname, path routing, and TLS configuration if applicable
scripts
The scripts/ directory contains operational and maintenance utilities for managing the K3s Pentaho deployment:
Database Management:
backup-postgres.sh - Automated PostgreSQL backup utility that creates compressed dumps of all Pentaho databases (jackrabbit, quartz, hibernate) using kubectl exec to run pg_dump inside the PostgreSQL pod. Backups are timestamped and compressed with gzip for efficient storage.
restore-postgres.sh - Database restoration utility to recover Pentaho databases from backup files. Useful for disaster recovery, environment cloning, or migrating data between K3s clusters. Handles decompression and restoration via kubectl exec and psql.
Monitoring & Validation:
health-check.sh - Health check script that verifies both PostgreSQL and Pentaho Server are running and responding correctly. Checks pod status, readiness probes, and performs basic connectivity tests.
monitor-resources.sh - Resource monitoring utility that tracks CPU, memory, and storage usage across all Pentaho pods using kubectl top and resource metrics, helping identify resource constraints or optimization opportunities.
monitor-postgres.sh - PostgreSQL-specific monitoring script that checks database connection counts, active queries, table sizes, and database health metrics via SQL queries executed in the PostgreSQL pod.
validate-deployment.sh - Comprehensive deployment validation script that confirms all K3s resources are properly created, pods are running, services are accessible, persistent volumes are bound, and the entire deployment is operational.
verify-k3s.sh - K3s infrastructure verification script that checks K3s installation, node status, storage classes, Traefik ingress controller, and core K3s components before attempting Pentaho deployment.
Pre-flight Tasks
The Pre-flight Tasks section outlines the essential preparation steps needed before deploying Pentaho Server 11 in K3s containers.
Configure Environment Variables
Edit the .env.example file within the docker-build/ directory with your deployment-specific settings. This includes Pentaho version identifier and Docker image name/tag.
Configure PostgreSQL database credentials and connection parameters. Set JVM memory allocation with minimum heap (default 4GB) and maximum heap (default 8GB).
Add your Enterprise Edition license server URL if applicable. Configure build options including image edition (EE/CE), plugin detection, and registry push settings.
Once configured, copy this template to .env for use by the build process.
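The copy-and-edit step can be done as below; the sed line shows one way to set a value non-interactively. The key=value format of the template is an assumption here.

```shell
#!/bin/sh
# Copy the template to .env and adjust a setting in place.
cd docker-build 2>/dev/null || true   # run from the repo root if present
if [ -f .env.example ]; then
  cp .env.example .env
  sed -i.bak 's/^PENTAHO_MAX_MEMORY=.*/PENTAHO_MAX_MEMORY=8192m/' .env
fi
```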
softwareOverride Directory
The softwareOverride/ directory within docker-build/ provides a mechanism for customizing Pentaho configurations. These customizations get baked into the Docker image during the build process.
Files are organized in numbered directories and processed in alphabetical order.
The 1_drivers/ directory contains the PostgreSQL JDBC driver (included by default), and you can place additional JDBC drivers here.
The 2_repository/ directory holds database connection configurations for Jackrabbit (JCR), Quartz (scheduler), and Hibernate repositories. The 3_security/ directory is empty in this K3s deployment since there's no Vault integration.
The 4_others/ directory contains modified Tomcat scripts (startup.sh, setenv.sh), server.xml, and other application-level configurations.
You can optionally upgrade the PostgreSQL JDBC driver by downloading from Maven Central or copying from the workshop's database drivers collection. Place the updated driver in softwareOverride/1_drivers/tomcat/lib/.
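A driver upgrade can be sketched as below. The Maven Central URL pattern is standard for the org.postgresql artifact; 42.7.4 is only an example version, so pin whatever you have actually tested.

```shell
#!/bin/sh
# Fetch a newer PostgreSQL JDBC driver into the softwareOverride overlay.
VER=42.7.4
URL="https://repo1.maven.org/maven2/org/postgresql/postgresql/${VER}/postgresql-${VER}.jar"
DEST="softwareOverride/1_drivers/tomcat/lib"

mkdir -p "$DEST"
if command -v curl >/dev/null 2>&1; then
  curl -fsSL -m 60 -o "$DEST/postgresql-${VER}.jar" "$URL" || \
    echo "Download failed; copy the driver manually."
fi
```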
Configure .env
Edit the .env.example template and enter the following details:
| Variable | Value | Description |
| --- | --- | --- |
| PENTAHO_VERSION | 11.0.0.0-237 | Pentaho Server version |
| EDITION | ee | Enterprise Edition |
| INCLUDE_DEMO | 1 | Include demo data |
| IMAGE_TAG | pentaho/pentaho-server:11.0.0.0-237 | Docker image tag |
| PENTAHO_MIN_MEMORY | 4096m | JVM minimum heap size |
| PENTAHO_MAX_MEMORY | 8192m | JVM maximum heap size |
| PENTAHO_DI_JAVA_OPTIONS | "-Dfile.encoding=utf8 -Djava.awt.headless=true" | Additional JVM options |
| PENTAHO_IMAGE_NAME | pentaho/pentaho-server | Docker image name |
| TZ | America/New_York | Time zone of the server |
| DB_TYPE | postgres | Database type |
| DB_HOST | postgres | Database hostname |
| DB_PORT | 5432 | PostgreSQL port |
| PUSH_TO_REGISTRY | false | Push the image to a Docker registry |
| LOAD_INTO_K3S | true | Load the image directly into K3s |
| RUN_TESTS | true | Run container tests after the build |
| LICENSE_URL | (empty) | EE license server URL |
Save the file, then create .env from it.
softwareOverride
The softwareOverride/ directory provides a powerful mechanism to customize Pentaho Server without modifying the core installation. Files are copied into the Pentaho installation during container startup, processed in alphabetical order by directory name.
The PostgreSQL JDBC driver is included in the Pentaho distribution. If you need to upgrade:
Download from Maven Central and place it in softwareOverride/1_drivers/tomcat/lib/, or copy it from Workshop--Installation/'Database Drivers'/.
Build & Push Pentaho Image
The build.sh script is an automated build wrapper that:
Validates prerequisites - Checks Docker is installed
Checks for required files - Verifies Pentaho ZIP exists in stagedArtifacts/
Detects plugins automatically - Finds PAZ, PIR, PDD plugins
Confirms build - Shows what will be built and asks for confirmation
Runs docker build - Executes the build with proper arguments
Shows image info - Displays image size and details after build
Optional: Tests image - Runs basic container test (you can skip this)
Optional: Pushes to registry - Pushes to Docker registry (only with -p flag)
This is the recommended approach for building Pentaho Docker images. It uses a single .env file to configure everything - similar to the Docker Compose deployment.
You can modify the build with the following options:
| Option | Description |
| --- | --- |
| -v, --version VERSION | Pentaho version (default: 11.0.0.0-237) |
| -t, --tag TAG | Docker image tag (default: pentaho/pentaho-server:VERSION) |
| -e, --edition EDITION | ee or ce (default: ee) |
| -d, --demo | Include demo content (default: no) |
| -p, --push | Push to registry after build |
| -h, --help | Show help |
Build & Push the Pentaho Server Image directly into K3s Registry.

1. Docker Image Build: The deployment uses a custom-built Pentaho Server container image:
The build.sh script:
Validates prerequisites and required files
Detects plugins automatically (PAZ, PIR, PDD)
Executes multi-stage Docker build
Optionally pushes to K3s image store
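The "push to K3s image store" step can be done without a registry by piping docker save into K3s's bundled containerd CLI, as sketched below (the image tag matches the build above).

```shell
#!/bin/sh
# Load a locally built image into K3s's containerd image store.
IMAGE=pentaho/pentaho-server:11.0.0.0-237

import_image() {
  docker save "$IMAGE" | sudo k3s ctr images import -
}

# Requires both Docker and K3s on this host.
if command -v docker >/dev/null 2>&1 && command -v k3s >/dev/null 2>&1; then
  import_image
fi
```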
2. Configuration Management:
The .env file contains deployment-specific settings (versions, credentials, JVM memory).
The softwareOverride/ directory provides configuration overlays processed in numbered order.
The PostgreSQL JDBC driver is included by default, with the option to upgrade.
Custom Tomcat scripts for container startup optimization
Deployment execution
The deploy.sh script automates the entire workflow:
Verifies K3s is running
Creates namespace
Applies all manifests in correct order
Monitors pod startup
Validates service readiness
Provides deployment summary with access URLs
1. Namespace Creation:
Creates isolated logical environment for all Pentaho resources.
2. Secret Generation:
Stores PostgreSQL credentials and JDBC connection strings as Kubernetes Secrets.
3. Storage Provisioning:
Creates PersistentVolumeClaims for PostgreSQL data and Pentaho content.
4. ConfigMap Creation:
Mounts PostgreSQL initialization scripts and Pentaho configuration.
5. PostgreSQL Deployment:
Deploys PostgreSQL pod with:
Mounted init scripts (automatic database creation on first startup)
Persistent volume for data
Health checks and resource limits
ClusterIP service for internal connectivity
6. Pentaho Server Deployment:
Deploys Pentaho pod with:
Custom Docker image
Environment variables from ConfigMap and Secrets
Volume mounts for solutions/data
Readiness/liveness probes
ClusterIP service
7. Ingress Configuration:
Configures Traefik routing for external access.
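The seven steps above correspond roughly to the apply order sketched below. File names follow the manifests/ layout described earlier; the deployment name used in the rollout check is an assumption.

```shell
#!/bin/sh
# Apply the Kubernetes manifests in dependency order: namespace first,
# then secrets/storage/configmaps, then the databases and app, then ingress.
apply_manifests() {
  for f in \
      manifests/namespace.yaml \
      manifests/secrets/secrets.yaml \
      manifests/storage/pvc.yaml \
      manifests/configmaps/postgres-init-scripts.yaml \
      manifests/configmaps/pentaho-config.yaml \
      manifests/postgres/deployment.yaml \
      manifests/postgres/service.yaml \
      manifests/pentaho/deployment.yaml \
      manifests/pentaho/service.yaml \
      manifests/ingress/ingress.yaml; do
    kubectl apply -f "$f"
  done
}

if command -v kubectl >/dev/null 2>&1; then
  apply_manifests
  kubectl -n pentaho rollout status deploy/pentaho-server --timeout=600s
fi
```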
Run the deploy.sh script.

This unified script handles the complete deployment process for Pentaho Server on K3s, including:
Docker image import into K3s container runtime
Kubernetes resource creation (namespace, secrets, configmaps, storage)
PostgreSQL database deployment
Pentaho Server deployment
Ingress configuration
Health checks and status reporting
Prerequisites:
K3s installed and running
Docker image built: pentaho/pentaho-server:11.0.0.0-237
kubectl configured to access K3s cluster
sudo access for K3s containerd operations
Quick Commands with Makefile
There is also a set of scripts that help validate the deployment:
Comprehensive post-deployment validation script that verifies all components of the Pentaho K3s deployment are properly configured and running correctly.
What it checks (six categories):
Namespace
Verifies pentaho namespace exists
Pods
PostgreSQL pod is Running
Pentaho Server pod is Running
Shows current status if not running
Services
PostgreSQL service exists
Pentaho Server service exists
PersistentVolumeClaims (3 PVCs)
postgres-data-pvc (10Gi) - Database files
pentaho-data-pvc (10Gi) - Pentaho data
pentaho-solutions-pvc (5Gi) - Solutions repository
All must be in "Bound" status
ConfigMaps
pentaho-config - Environment variables
postgres-init - Database initialization scripts
Ingress
pentaho-ingress - Traefik routing configuration
Database Connectivity Tests
Connects to PostgreSQL pod
Tests all 3 Pentaho databases:
* jackrabbit - JCR content repository
* quartz - Job scheduler
* hibernate - Configuration repository
Runs SELECT 1 query on each
Run the validate-deployment.sh script.
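Alongside the script, the same categories can be spot-checked by hand, as in this sketch (the PostgreSQL deployment name and superuser are assumptions).

```shell
#!/bin/sh
# Manual spot checks mirroring validate-deployment.sh: resource listing,
# then a SELECT 1 against each Pentaho database.
spot_check() {
  kubectl get pods,svc,pvc,configmap,ingress -n pentaho
  for db in jackrabbit quartz hibernate; do
    kubectl exec -n pentaho deploy/postgresql -- \
      psql -U postgres -d "$db" -c 'SELECT 1;'
  done
}

if [ -x scripts/validate-deployment.sh ]; then
  ./scripts/validate-deployment.sh
fi
if command -v kubectl >/dev/null 2>&1; then
  spot_check
fi
```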

Health Check
Quick health check script for running Pentaho deployment - faster and lighter than full validation, focused on runtime health status.
Namespace
Verifies pentaho namespace exists
Exits immediately if namespace missing (critical)
Pod Readiness
PostgreSQL pod is ready (not just running)
Pentaho Server pod is ready
Checks the containerStatuses[0].ready status
Services
PostgreSQL service exists
Pentaho Server service exists
Database Connectivity
PostgreSQL is responding to queries
All 3 databases exist:
* jackrabbit
* quartz
* hibernate
Uses psql -lqt to list databases
Web Application Health
Live HTTP test to Pentaho login page
Uses port-forward to access service
Expects HTTP 200 response
Tests: http://localhost:8080/pentaho/Login
Resource Usage
Shows CPU/memory usage via kubectl top pods
Gracefully handles a missing metrics-server

Port Forward (Recommended for Testing/Development)
The simplest method using kubectl to forward a local port to the Pentaho service:
Access URL: http://localhost:8080/pentaho
You can also use an alternate port if 8080 is busy:
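A port-forward plus health probe can be sketched as below; the service name pentaho-server matches the manifests described earlier, and the local port is a parameter.

```shell
#!/bin/sh
# Forward a local port to the Pentaho service, probe the login page, clean up.
check_via_port_forward() {
  port="${1:-8080}"
  kubectl port-forward -n pentaho svc/pentaho-server "${port}:8080" &
  pf=$!
  sleep 2   # give the forward a moment to establish
  curl -fsS "http://localhost:${port}/pentaho/Login" >/dev/null \
    && echo "Pentaho reachable on localhost:${port}"
  kill "$pf" 2>/dev/null || true
}

if command -v kubectl >/dev/null 2>&1; then
  check_via_port_forward 8080   # pass e.g. 9090 if 8080 is busy
fi
```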
Ingress with Hostname (pentaho.local)
Uses K3s's built-in Traefik ingress controller with DNS-style access:
Setup:
(Replace 10.0.0.1 with your actual node IP)
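The hostname mapping boils down to one /etc/hosts line, sketched here as a helper; appending it requires root.

```shell
#!/bin/sh
# Build the /etc/hosts line mapping pentaho.local to a cluster node IP.
hosts_entry() { echo "$1 pentaho.local"; }

# To apply (requires root), e.g.:
#   hosts_entry 10.0.0.1 | sudo tee -a /etc/hosts
hosts_entry 10.0.0.1
```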
Access URL: http://pentaho.local/pentaho
Ingress via Direct Node IP (No DNS Required)
Access directly through any cluster node's IP address without configuring /etc/hosts:
Access URL: http://<node-ip>/pentaho
This works because the ingress includes a path-based rule that doesn't require a hostname.
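An ingress with both a hostname rule and a host-less path rule (the latter is what enables direct node-IP access) has roughly the shape below. Resource and service names follow the manifests described earlier; this is an illustrative sketch, not the shipped ingress.yaml.

```shell
#!/bin/sh
# Emit an illustrative Ingress manifest and apply it when a cluster exists.
ingress_yaml() {
  cat <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pentaho-ingress
  namespace: pentaho
spec:
  rules:
  - host: pentaho.local
    http:
      paths:
      - path: /pentaho
        pathType: Prefix
        backend:
          service:
            name: pentaho-server
            port:
              number: 8080
  - http:
      paths:
      - path: /pentaho
        pathType: Prefix
        backend:
          service:
            name: pentaho-server
            port:
              number: 8080
EOF
}

if command -v kubectl >/dev/null 2>&1; then
  ingress_yaml | kubectl apply -f -
else
  ingress_yaml
fi
```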
Makefile Convenience Command
The project includes a Makefile shortcut:
This automatically sets up port forwarding to localhost:8080.
Default Credentials
Username: admin
Password: password
⚠️ Change these for production deployments!
Quick Reference
| Method | Use Case | URL |
| --- | --- | --- |
| Port Forward | Development/Testing | http://localhost:8080/pentaho |
| Ingress (hostname) | Production with DNS | http://pentaho.local/pentaho |
| Ingress (direct IP) | Testing without DNS | http://<node-ip>/pentaho |