This cookbook walks through deploying Prowler inside a Kubernetes cluster on a recurring schedule and automatically sending findings to Prowler Cloud via Import Findings. By the end, security scan results from the cluster appear in Prowler Cloud without any manual file uploads.

Prerequisites

  • A Prowler Cloud account with an active subscription (see Prowler Cloud Pricing)
  • A Prowler Cloud API key with the Manage Ingestions permission (see API Keys)
  • Access to a Kubernetes cluster with kubectl configured
  • Permissions to create ServiceAccounts, ClusterRoles, ClusterRoleBindings, Secrets, and CronJobs in the cluster

Step 1: Create the ServiceAccount and RBAC Resources

Prowler needs a ServiceAccount with read access to cluster resources. Apply the manifests from the kubernetes directory of the Prowler repository:
kubectl apply -f kubernetes/prowler-sa.yaml
kubectl apply -f kubernetes/prowler-role.yaml
kubectl apply -f kubernetes/prowler-rolebinding.yaml
This creates:
  • A prowler-sa ServiceAccount in the prowler-ns namespace
  • A ClusterRole with the read permissions Prowler requires
  • A ClusterRoleBinding linking the ServiceAccount to the role
For more details on these resources, refer to Getting Started with Kubernetes.

Step 2: Store the Prowler Cloud API Key as a Secret

Create a Kubernetes Secret to hold the API key securely:
kubectl create secret generic prowler-cloud-api-key \
  --from-literal=api-key=pk_your_api_key_here \
  --namespace prowler-ns
Replace pk_your_api_key_here with the actual API key from Prowler Cloud.
Avoid embedding the API key directly in the CronJob manifest. Using a Kubernetes Secret keeps credentials out of version control and pod specs.
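Note that Kubernetes Secrets store values base64-encoded, which is an encoding, not encryption. The snippet below (using the placeholder key from this step) shows the round trip between the raw key and what the cluster stores in the Secret's data field:

```shell
# Kubernetes base64-encodes Secret values. This is what the cluster
# stores under .data.api-key for the Secret created above
# (pk_your_api_key_here is the placeholder from Step 2):
encoded=$(printf '%s' 'pk_your_api_key_here' | base64)
echo "$encoded"

# Decoding recovers the original key, which is the value the pod
# receives in the PROWLER_CLOUD_API_KEY environment variable:
printf '%s' "$encoded" | base64 -d
```

To confirm the Secret holds the expected value in the cluster, the same decode can be applied to the output of kubectl get secret prowler-cloud-api-key -n prowler-ns -o jsonpath='{.data.api-key}'.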

Step 3: Create the CronJob Manifest

The CronJob runs Prowler on a schedule, scanning the cluster and pushing findings to Prowler Cloud with the --push-to-cloud flag. Create a file named prowler-cronjob.yaml:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: prowler-k8s-scan
  namespace: prowler-ns
spec:
  schedule: "0 2 * * *"  # Runs daily at 02:00 UTC
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      backoffLimit: 1
      template:
        metadata:
          labels:
            app: prowler
        spec:
          serviceAccountName: prowler-sa
          containers:
          - name: prowler
            image: toniblyx/prowler:stable
            imagePullPolicy: Always
            args:
              - "kubernetes"
              - "--push-to-cloud"
            env:
              - name: PROWLER_CLOUD_API_KEY
                valueFrom:
                  secretKeyRef:
                    name: prowler-cloud-api-key
                    key: api-key
              - name: CLUSTER_NAME
                value: "my-cluster"
            volumeMounts:
              - name: var-lib-cni
                mountPath: /var/lib/cni
                readOnly: true
              - name: var-lib-etcd
                mountPath: /var/lib/etcd
                readOnly: true
              - name: var-lib-kubelet
                mountPath: /var/lib/kubelet
                readOnly: true
              - name: etc-kubernetes
                mountPath: /etc/kubernetes
                readOnly: true
          hostPID: true
          restartPolicy: Never
          volumes:
            - name: var-lib-cni
              hostPath:
                path: /var/lib/cni
            - name: var-lib-etcd
              hostPath:
                path: /var/lib/etcd
            - name: var-lib-kubelet
              hostPath:
                path: /var/lib/kubelet
            - name: etc-kubernetes
              hostPath:
                path: /etc/kubernetes
Replace my-cluster with a meaningful name for the cluster. This value appears in Prowler Cloud reports and helps identify the source of findings. See the --cluster-name flag documentation in Getting Started with Kubernetes for more details.

Customizing the Schedule

The schedule field uses standard cron syntax. Common examples:
  • "0 2 * * *" — daily at 02:00 UTC
  • "0 */6 * * *" — every 6 hours
  • "0 2 * * 1" — weekly on Mondays at 02:00 UTC
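The five fields are, in order: minute, hour, day-of-month, month, and day-of-week. A quick sanity check of the field count before applying a new schedule can be sketched as follows (illustrative only, not part of the Prowler tooling):

```shell
# A CronJob schedule must have exactly five space-separated fields:
#   minute  hour  day-of-month  month  day-of-week
schedule="0 */6 * * *"   # every 6 hours, as in the examples above
fields=$(printf '%s\n' "$schedule" | awk '{print NF}')
if [ "$fields" -eq 5 ]; then
  echo "schedule has the expected 5 fields"
else
  echo "expected 5 fields, got $fields" >&2
fi
```

An invalid field count is one of the most common reasons a CronJob is rejected at apply time.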

Scanning Specific Namespaces

To limit the scan to specific namespaces, add the --namespace flag to the args array:
args:
  - "kubernetes"
  - "--push-to-cloud"
  - "--namespace"
  - "production,staging"

Step 4: Deploy and Verify

Apply the CronJob to the cluster:
kubectl apply -f prowler-cronjob.yaml
To trigger an immediate test run without waiting for the schedule:
kubectl create job prowler-test-run --from=cronjob/prowler-k8s-scan -n prowler-ns
Monitor the job execution:
kubectl get pods -n prowler-ns -l app=prowler --watch
Check the logs to confirm findings were pushed successfully:
kubectl logs -n prowler-ns -l app=prowler --tail=50
A successful upload produces output similar to:
Pushing findings to Prowler Cloud, please wait...

Findings successfully pushed to Prowler Cloud. Ingestion job: fa8bc8c5-4925-46a0-9fe0-f6575905e094
See more details here: https://cloud.prowler.com/scans
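The ingestion job ID in that output can be captured for automation, for example to record which upload a pipeline run produced. A sketch against the sample log line above (in practice the input would be piped from kubectl logs):

```shell
# Extract the ingestion job UUID from Prowler's success message.
log_line='Findings successfully pushed to Prowler Cloud. Ingestion job: fa8bc8c5-4925-46a0-9fe0-f6575905e094'
job_id=$(printf '%s\n' "$log_line" | grep -oE '[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}')
echo "$job_id"   # prints fa8bc8c5-4925-46a0-9fe0-f6575905e094
```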

Step 5: View Findings in Prowler Cloud

Once the job completes and findings are pushed:
  1. Navigate to Prowler Cloud
  2. Open the “Scans” section to verify the ingestion job status
  3. Browse findings under the Kubernetes provider
For details on the ingestion workflow and status tracking, refer to the Import Findings documentation.

Tips and Troubleshooting

  • Resource limits: For large clusters, consider setting resources.requests and resources.limits on the container to prevent the scan from consuming excessive cluster resources.
  • Network policies: Ensure the Prowler pod can reach api.prowler.com over HTTPS (port 443). Adjust NetworkPolicies or egress rules if needed.
  • Job history: Kubernetes retains completed and failed jobs by default. Set successfulJobsHistoryLimit and failedJobsHistoryLimit in the CronJob spec to control cleanup:
    spec:
      successfulJobsHistoryLimit: 3
      failedJobsHistoryLimit: 1
    
  • API key rotation: When rotating the API key, recreate the Secret; jobs created after the update pick up the new value, while any already-running pod keeps the old key until it finishes:
    kubectl delete secret prowler-cloud-api-key -n prowler-ns
    kubectl create secret generic prowler-cloud-api-key \
      --from-literal=api-key=pk_new_api_key_here \
      --namespace prowler-ns
    
  • Failed uploads: If the push to Prowler Cloud fails, the scan still completes and findings are saved locally in the container. Check the Import Findings troubleshooting section for common error messages.
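As an illustration of the resource limits mentioned in the first tip, a block like the following could be added under the prowler container spec (the values are illustrative and should be tuned to the size of the cluster being scanned):

```yaml
resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: "1"
    memory: 1Gi
```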