Application Development


Aggregate high-level report on resource consumption in a Namespace

For each Namespace, build a rough overview of resource consumption. Such a report could be arbitrarily detailed; here we simply count several critical resource types in each Namespace.

Query
import {Client, query} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const report = c.core.v1.Namespace
  .list()
  .flatMap(ns =>
    query.Observable.forkJoin(
      query.Observable.of(ns),
      c.core.v1.Pod.list(ns.metadata.name).toArray(),
      c.core.v1.Secret.list(ns.metadata.name).toArray(),
      c.core.v1.Service.list(ns.metadata.name).toArray(),
      c.core.v1.ConfigMap.list(ns.metadata.name).toArray(),
      c.core.v1.PersistentVolumeClaim.list(ns.metadata.name).toArray(),
    ));

// Print small report.
report.forEach(([ns, pods, secrets, services, configMaps, pvcs]) => {
  console.log(ns.metadata.name);
  console.log(`  Pods:\t\t${pods.length}`);
  console.log(`  Secrets:\t${secrets.length}`);
  console.log(`  Services:\t${services.length}`);
  console.log(`  ConfigMaps:\t${configMaps.length}`);
  console.log(`  PVCs:\t\t${pvcs.length}`);
});

Output
default
  Pods:		9
  Secrets:	1
  Services:	2
  ConfigMaps:	0
  PVCs:		0
kube-public
  Pods:		0
  Secrets:	1
  Services:	0
  ConfigMaps:	0
  PVCs:		0
kube-system
  Pods:		4
  Secrets:	2
  Services:	2
  ConfigMaps:	2
  PVCs:		0

Audit all Certificates, including status, user, and requested usages

Retrieve all CertificateSigningRequests in the cluster. Group them by status (i.e., "Pending", "Approved", or "Denied"), and for each request report (1) its status, (2) the groups the requesting user belongs to, and (3) the requested usages for the certificate.

Query
import {Client, transform} from "carbonql";
const certificates = transform.certificates;

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const csrs = c.certificates.v1beta1.CertificateSigningRequest
  .list()
  .map(csr => {
    // Get status of the CSR.
    return {
      status: certificates.v1beta1.certificateSigningRequest.getStatus(csr),
      request: csr,
    };
  })
  // Group CSRs by type (one of: `"Approved"`, `"Pending"`, or `"Denied"`).
  .groupBy(csr => csr.status.type);

csrs.forEach(group => {
  console.log(group.key);
  group.forEach(({request}) => {
    const usages = request.spec.usages.sort().join(", ");
    const groups = request.spec.groups.sort().join(", ");
    console.log(`  ${request.spec.username}\t[${usages}]\t[${groups}]`);
  });
});


Output
Denied
  minikube-user	[digital signature, key encipherment, server auth]	[system:authenticated, system:masters]
Pending
  minikube-user	[digital signature, key encipherment, server auth]	[system:authenticated, system:masters]
  minikube-user	[digital signature, key encipherment, server auth]	[system:authenticated, system:masters]

Distinct versions of mysql container in cluster

Search all running Kubernetes Pods for containers that have the string "mysql" in their image name. Report only distinct image names.

Query
import {Client, query} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const mySqlVersions = c.core.v1.Pod
  .list("default")
  // Obtain all container image names running in all pods.
  .flatMap(pod => pod.spec.containers)
  .map(container => container.image)
  // Keep only image names that include "mysql"; return distinct values.
  .filter(imageName => imageName.includes("mysql"))
  .distinct();

// Print the distinct container image names.
mySqlVersions.forEach(console.log);
Output
mysql:5.7
mysql:8.0.4
mysql
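
The query above is scoped to the "default" namespace. To search every namespace, as the description suggests, a sketch following the same conventions as the other examples here would simply drop the namespace argument:

const allMySqlVersions = c.core.v1.Pod
  .list()  // No namespace argument: list Pods across all namespaces.
  .flatMap(pod => pod.spec.containers)
  .map(container => container.image)
  .filter(imageName => imageName.includes("mysql"))
  .distinct();

allMySqlVersions.forEach(console.log);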

Find all Pod logs containing "ERROR:"

Retrieve all Pods in the "default" namespace, obtain their logs, and filter down to only the Pods whose logs contain the string "ERROR:". Return the logs grouped by Pod name.

Query
import {Client, query, transform} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const podLogs = c.core.v1.Pod
  .list("default")
  // Retrieve logs for all pods, filter for logs with `ERROR:`.
  .flatMap(pod =>
    transform.core.v1.pod
      .getLogs(c, pod)
      .filter(({logs}) => logs.includes("ERROR:"))
    )
  // Group logs by Pod name, returning only the `logs` member.
  .groupBy(
    ({pod}) => pod.metadata.name,
    ({logs}) => logs)

// Print the name of each Pod along with its logs.
podLogs.subscribe(logs => {
  console.log(logs.key);
  logs.forEach(console.log)
});
Output
nginx-6f8cf9fbc4-qnrhb
ERROR: could not connect to database.

nginx2-687c5bbccd-rzjl5
ERROR: 500

Diff last two rollouts of an application

Search for a Deployment named "nginx", and obtain the last 2 revisions in its rollout history. Then use the jsondiffpatch library to diff these two revisions.

NOTE: a history of rollouts is not retained by default, so you'll need to create the deployment with .spec.revisionHistoryLimit set to a number larger than 2. (See documentation for DeploymentSpec)
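
For reference, here is a minimal sketch of the relevant Deployment fields (shown as a TypeScript object for consistency with the rest of these examples). Only revisionHistoryLimit matters here; the other values are placeholders.

// Hypothetical manifest fragment; only `revisionHistoryLimit` matters for this example.
const nginxDeployment = {
  apiVersion: "apps/v1beta1",
  kind: "Deployment",
  metadata: {name: "nginx"},
  spec: {
    revisionHistoryLimit: 10,  // Retain up to 10 old ReplicaSets for rollback/diffing.
    replicas: 3,
    template: {
      metadata: {labels: {app: "nginx"}},
      spec: {containers: [{name: "nginx", image: "nginx:1.7.9"}]},
    },
  },
};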

Query
import {Client, query, transform} from "carbonql";
const jsondiff = require("jsondiffpatch");

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const history = c.apps.v1beta1.Deployment
  .list()
  // Get last two rollouts in the history of the `nginx` deployment.
  .filter(d => d.metadata.name == "nginx")
  .flatMap(d =>
    transform.apps.v1beta1.deployment
      .getRevisionHistory(c, d)
      .takeLast(2)
      .toArray());

// Diff these rollouts, print.
history.forEach(rollout => {
  jsondiff.console.log(jsondiff.diff(rollout[0], rollout[1]))
});
Output
{
  metadata: {
    annotations: {
      deployment.kubernetes.io/revision: "1" => "2"
    },
    creationTimestamp: "2018-02-28T20:15:32Z" => "2018-03-13T06:34:36Z"
    generation: 7 => 3
    labels: {
      pod-template-hash: "2947959670" => "1264720760"
    },
    name: "nginx-6f8cf9fbc4" => "nginx-56b8c64cb4"
    resourceVersion: "263854" => "263858"
    selfLink:
      59,14 inx-6f8cf9fbc56b8c64cb4
 
    uid: "20c50866-1cc4-11e8-9137-080027cfc4d2" => "9966f685-2688-11e8-adbb-080027cfc4d2"
  },
  spec: {
    replicas: 0 => 3
    selector: {
      matchLabels: {
        pod-template-hash: "2947959670" => "1264720760"
      }
    },
    template: {
      metadata: {
        labels: {
          pod-template-hash: "2947959670" => "1264720760"
        }
      },
      spec: {
        containers: [
          0: {
            image: "nginx:1.7.9" => "nginx:1.9.1"
          }
        ]
      }
    }
  },
  status: {
    availableReplicas: 3
    fullyLabeledReplicas: 3
    observedGeneration: 7 => 3
    readyReplicas: 3
    replicas: 0 => 3
  }
}

List all Namespaces with no hard memory quota specified

Retrieve all Kubernetes Namespaces. Filter this down to a set of namespaces for which there is either (1) no ResourceQuota governing resource use of that Namespace; or (2) a ResourceQuota that does not specify a hard memory limit.

Query
import {Client} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const noQuotas = c.core.v1.Namespace
  .list()
  .flatMap(ns =>
    c.core.v1.ResourceQuota
      .list(ns.metadata.name)
      // Retrieve only ResourceQuotas that (1) apply to this namespace, and (2)
      // specify hard limits on memory.
      .filter(rq => rq.spec.hard["limits.memory"] != null)
      .toArray()
      .flatMap(rqs => rqs.length == 0 ? [ns] : []))

// Print.
noQuotas.forEach(ns => console.log(ns.metadata.name))
Output
kube-system
default
kube-public
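
The same pattern extends to other hard limits. For example, a sketch that also flags Namespaces lacking a hard CPU limit could additionally check the standard "limits.cpu" key:

const noMemOrCpuQuotas = c.core.v1.Namespace
  .list()
  .flatMap(ns =>
    c.core.v1.ResourceQuota
      .list(ns.metadata.name)
      // Keep only ResourceQuotas that specify hard limits for both memory and CPU.
      .filter(rq =>
        rq.spec.hard["limits.memory"] != null &&
        rq.spec.hard["limits.cpu"] != null)
      .toArray()
      // Emit the Namespace only if no such quota exists.
      .flatMap(rqs => rqs.length == 0 ? [ns] : []));

noMemOrCpuQuotas.forEach(ns => console.log(ns.metadata.name));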

List Pods and their ServiceAccount (possibly a unique user) by Secrets they use

Obtain all Secrets. For each of these Secrets, obtain all Pods that use them.

Here we print (1) the name of the Secret, (2) the list of Pods that use it, and (3) the ServiceAccount that the Pod runs as (oftentimes this is allocated to a single user).

Query
import {Client, query} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const podsByClaim = c.core.v1.Secret
  .list()
  .flatMap(secret =>
    c.core.v1.Pod
      .list()
      .filter(pod =>
        pod.spec
          .volumes
          .filter(vol =>
            vol.secret &&
            vol.secret.secretName == secret.metadata.name)
          .length > 0)
      .toArray()
      .map(pods => {return {secret: secret, pods: pods}}));

// Print.
podsByClaim.forEach(({secret, pods}) => {
  console.log(secret.metadata.name);
  pods.forEach(pod => console.log(`  ${pod.spec.serviceAccountName} ${pod.metadata.namespace}/${pod.metadata.name}`));
});
Output
kubernetes-dashboard-key-holder
default-token-vq5hb
  default kube-system/kube-dns-54cccfbdf8-hwgh8
  default kube-system/kubernetes-dashboard-77d8b98585-gzjgb
  default kube-system/storage-provisioner
default-token-j2bmb
  alex default/mysql-5-66f5b49b8f-5r48g
  alex default/mysql-8-7d4f8d46d7-hrktb
  alex default/mysql-859645bdb9-w29z7
  alex default/nginx-56b8c64cb4-lcjv2
  alex default/nginx-56b8c64cb4-n6prt
  alex default/nginx-56b8c64cb4-v2qj2
  alex default/nginx2-687c5bbccd-dmccm
  alex default/nginx2-687c5bbccd-hrqdl
  alex default/nginx2-687c5bbccd-rzjl5
  alex default/task-pv-pod
default-token-n9vxk

List Pods grouped by PersistentVolumes they use

Obtain all "Bound" PersistentVolumes (PVs). Then, obtain all Pods that use those PVs. Finally, print a small report listing the PV and all Pods that reference it.

Query
import {Client, query} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const podsByClaim = c.core.v1.PersistentVolume
  .list()
  .filter(pv => pv.status.phase == "Bound")
  .flatMap(pv =>
    c.core.v1.Pod
      .list()
      .filter(pod =>
        pod.spec
          .volumes
          .filter(vol =>
            vol.persistentVolumeClaim &&
            vol.persistentVolumeClaim.claimName == pv.spec.claimRef.name)
          .length > 0)
      .toArray()
      .map(pods => {return {pv: pv, pods: pods}}));

// Print.
podsByClaim.forEach(({pv, pods}) => {
  console.log(pv.metadata.name);
  pods.forEach(pod => console.log(`  ${pod.metadata.name}`));
});
Output
devHtmlData
  dev/cgiGateway-1
  dev/cgiGateway-2
prodHtmlData
  prod/cgiGateway-1
  prod/cgiGateway-2

Find all Pods scheduled on nodes with high memory pressure

Search for all Kubernetes Pods scheduled on nodes where status conditions report high memory pressure.

Query
import {Client, query} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const pressured = c.core.v1.Pod.list()
  // Index pods by node name.
  .groupBy(pod => pod.spec.nodeName)
  .flatMap(group => {
    // Join pods and nodes on node name; filter out everything where mem
    // pressure is not high.
    const nodes = c.core.v1.Node
      .list()
      .filter(node =>
        node.metadata.name == group.key &&
        node.status.conditions
          .filter(cond => cond.type === "MemoryPressure" && cond.status === "True")
          .length >= 1);

    // Return join of {node, pods}
    return group
      .toArray()
      .flatMap(pods => nodes.map(node => {return {node, pods}}))
  })

// Print report.
pressured.forEach(({node, pods}) => {
  console.log(node.metadata.name);
  pods.forEach(pod => console.log(`    ${pod.metadata.name}`));
});
Output
node3
    redis-6f8cf9fbc4-qnrhb
    redis2-687c5bbccd-rzjl5

Pods using the default ServiceAccount

Retrieve all Pods, filtering down to those that are using the "default" ServiceAccount.

Query
import {Client} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const noServiceAccounts = c.core.v1.Pod
  .list()
  .filter(pod =>
    pod.spec.serviceAccountName == null ||
    pod.spec.serviceAccountName == "default");

noServiceAccounts.forEach(pod => console.log(pod.metadata.name));
Output
mysql-5-66f5b49b8f-5r48g
mysql-8-7d4f8d46d7-hrktb
mysql-859645bdb9-w29z7
nginx-56b8c64cb4-lcjv2
nginx-56b8c64cb4-n6prt
nginx-56b8c64cb4-v2qj2
nginx2-687c5bbccd-dmccm
nginx2-687c5bbccd-hrqdl
nginx2-687c5bbccd-rzjl5
kube-addon-manager-minikube
kube-dns-54cccfbdf8-hwgh8
kubernetes-dashboard-77d8b98585-gzjgb
storage-provisioner

Find Services publicly exposed to the Internet

Kubernetes Services can expose a Pod to Internet traffic by setting the .spec.type to "LoadBalancer" (see documentation for ServiceSpec). Other Service types (such as "ClusterIP") are accessible only from inside the cluster.

This query will find all Services whose type is "LoadBalancer", so they can be audited for access and cost (since a service with .spec.type set to "LoadBalancer" will typically cause the underlying cloud provider to boot up a dedicated load balancer).

Query
import {Client} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const loadBalancers = c.core.v1.Service
  .list()
  // Services with `.spec.type` set to `"LoadBalancer"` are exposed publicly to
  // the Internet.
  .filter(svc => svc.spec.type == "LoadBalancer");

// Print.
loadBalancers.forEach(
  svc => console.log(`${svc.metadata.namespace}/${svc.metadata.name}`));
Output
default/someSvc
default/otherSvc
prod/apiSvc
dev/apiSvc

Find users and ServiceAccounts with access to Secrets

Inspect every Kubernetes RBAC Role for rules that apply to Secrets. Then find every RBAC RoleBinding that references one of these Roles, and list the users and ServiceAccounts it binds to.

NOTE: This query does not consider ClusterRoles, so cluster-level roles granting access to Secrets are not taken into account.

Query
import {Client, transform} from "carbonql";
const rbac = transform.rbacAuthorization

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const subjectsWithSecretAccess = c.rbacAuthorization.v1beta1.Role
  .list()
  // Find Roles that apply to `core.v1.Secret`. Note the empty string denotes
  // the `core` API group.
  .filter(role => rbac.v1beta1.role.appliesTo(role, "", "secrets"))
  .flatMap(role => {
    return c.rbacAuthorization.v1beta1.RoleBinding
      .list()
      // Find RoleBindings that apply to `role`. Project to a list of subjects
      // (e.g., Users) `role` is bound to.
      .filter(binding =>
        rbac.v1beta1.roleBinding.referencesRole(binding, role.metadata.name))
      .flatMap(binding => binding.subjects)
  });

// Print subjects.
subjectsWithSecretAccess.forEach(subj => console.log(`${subj.kind}\t${subj.name}`));
Output
User	jane
User	frank
User	susan
User	bill

Aggregate cluster-wide error and warning Events into a report

Search for all Kubernetes Events that are classified as "Warning" or "Error", and report them grouped by the type of Kubernetes object that caused them.

In this example, there are warnings being emitted from both Nodes and from Pods, so we group them together by their place of origin.

Query
import {Client, query} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const warningsAndErrors = c.core.v1.Event
  .list()
  // Get warning and error events, group by `kind` that caused them.
  .filter(e => e.type == "Warning" || e.type == "Error")
  .groupBy(e => e.involvedObject.kind);

// Print events.
warningsAndErrors.forEach(events => {
  console.log(`kind: ${events.key}`);
  events.forEach(e =>
    console.log(`  ${e.type}  (x${e.count})  ${e.involvedObject.name}\n  \t   Message: ${e.message}`));
});
Output
kind: Node
  Warning	(1946 times)	minikube	Failed to start node healthz on 0: listen tcp: address 0: missing port in address
kind: Pod
  Warning	(7157 times)	mysql-5-66f5b49b8f-5r48g	Back-off restarting failed container
  Warning	(7153 times)	mysql-8-7d4f8d46d7-hrktb	Back-off restarting failed container
  Warning	(6931 times)	mysql-859645bdb9-w29z7	Back-off restarting failed container

Governance

CIOs and engineering leadership need the ability to quickly understand how their organization is tracking against important metrics like compliance, security patches, and so on. This section contains a collection of useful tools that governance teams can use to understand and enforce policy decisions on an organizational basis.


Aggregate high-level report on resource consumption in a Namespace

For each Namespace, build a rough overview of resource consumption. Such a report could be arbitrarily detailed; here we simply count several critical resource types in each Namespace.

Query
import {Client, query} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const report = c.core.v1.Namespace
  .list()
  .flatMap(ns =>
    query.Observable.forkJoin(
      query.Observable.of(ns),
      c.core.v1.Pod.list(ns.metadata.name).toArray(),
      c.core.v1.Secret.list(ns.metadata.name).toArray(),
      c.core.v1.Service.list(ns.metadata.name).toArray(),
      c.core.v1.ConfigMap.list(ns.metadata.name).toArray(),
      c.core.v1.PersistentVolumeClaim.list(ns.metadata.name).toArray(),
    ));

// Print small report.
report.forEach(([ns, pods, secrets, services, configMaps, pvcs]) => {
  console.log(ns.metadata.name);
  console.log(`  Pods:\t\t${pods.length}`);
  console.log(`  Secrets:\t${secrets.length}`);
  console.log(`  Services:\t${services.length}`);
  console.log(`  ConfigMaps:\t${configMaps.length}`);
  console.log(`  PVCs:\t\t${pvcs.length}`);
});

Output
default
  Pods:		9
  Secrets:	1
  Services:	2
  ConfigMaps:	0
  PVCs:		0
kube-public
  Pods:		0
  Secrets:	1
  Services:	0
  ConfigMaps:	0
  PVCs:		0
kube-system
  Pods:		4
  Secrets:	2
  Services:	2
  ConfigMaps:	2
  PVCs:		0

Audit all Certificates, including status, user, and requested usages

Retrieve all CertificateSigningRequests in the cluster. Group them by status (i.e., "Pending", "Approved", or "Denied"), and for each request report (1) its status, (2) the groups the requesting user belongs to, and (3) the requested usages for the certificate.

Query
import {Client, transform} from "carbonql";
const certificates = transform.certificates;

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const csrs = c.certificates.v1beta1.CertificateSigningRequest
  .list()
  .map(csr => {
    // Get status of the CSR.
    return {
      status: certificates.v1beta1.certificateSigningRequest.getStatus(csr),
      request: csr,
    };
  })
  // Group CSRs by type (one of: `"Approved"`, `"Pending"`, or `"Denied"`).
  .groupBy(csr => csr.status.type);

csrs.forEach(group => {
  console.log(group.key);
  group.forEach(({request}) => {
    const usages = request.spec.usages.sort().join(", ");
    const groups = request.spec.groups.sort().join(", ");
    console.log(`  ${request.spec.username}\t[${usages}]\t[${groups}]`);
  });
});


Output
Denied
  minikube-user	[digital signature, key encipherment, server auth]	[system:authenticated, system:masters]
Pending
  minikube-user	[digital signature, key encipherment, server auth]	[system:authenticated, system:masters]
  minikube-user	[digital signature, key encipherment, server auth]	[system:authenticated, system:masters]

Distinct versions of mysql container in cluster

Search all running Kubernetes Pods for containers that have the string "mysql" in their image name. Report only distinct image names.

Query
import {Client, query} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const mySqlVersions = c.core.v1.Pod
  .list("default")
  // Obtain all container image names running in all pods.
  .flatMap(pod => pod.spec.containers)
  .map(container => container.image)
  // Keep only image names that include "mysql"; return distinct values.
  .filter(imageName => imageName.includes("mysql"))
  .distinct();

// Print the distinct container image names.
mySqlVersions.forEach(console.log);
Output
mysql:5.7
mysql:8.0.4
mysql

Find all Pod logs containing "ERROR:"

Retrieve all Pods in the "default" namespace, obtain their logs, and filter down to only the Pods whose logs contain the string "ERROR:". Return the logs grouped by Pod name.

Query
import {Client, query, transform} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const podLogs = c.core.v1.Pod
  .list("default")
  // Retrieve logs for all pods, filter for logs with `ERROR:`.
  .flatMap(pod =>
    transform.core.v1.pod
      .getLogs(c, pod)
      .filter(({logs}) => logs.includes("ERROR:"))
    )
  // Group logs by Pod name, returning only the `logs` member.
  .groupBy(
    ({pod}) => pod.metadata.name,
    ({logs}) => logs)

// Print the name of each Pod along with its logs.
podLogs.subscribe(logs => {
  console.log(logs.key);
  logs.forEach(console.log)
});
Output
nginx-6f8cf9fbc4-qnrhb
ERROR: could not connect to database.

nginx2-687c5bbccd-rzjl5
ERROR: 500

Diff last two rollouts of an application

Search for a Deployment named "nginx", and obtain the last 2 revisions in its rollout history. Then use the jsondiffpatch library to diff these two revisions.

NOTE: a history of rollouts is not retained by default, so you'll need to create the deployment with .spec.revisionHistoryLimit set to a number larger than 2. (See documentation for DeploymentSpec)

Query
import {Client, query, transform} from "carbonql";
const jsondiff = require("jsondiffpatch");

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const history = c.apps.v1beta1.Deployment
  .list()
  // Get last two rollouts in the history of the `nginx` deployment.
  .filter(d => d.metadata.name == "nginx")
  .flatMap(d =>
    transform.apps.v1beta1.deployment
      .getRevisionHistory(c, d)
      .takeLast(2)
      .toArray());

// Diff these rollouts, print.
history.forEach(rollout => {
  jsondiff.console.log(jsondiff.diff(rollout[0], rollout[1]))
});
Output
{
  metadata: {
    annotations: {
      deployment.kubernetes.io/revision: "1" => "2"
    },
    creationTimestamp: "2018-02-28T20:15:32Z" => "2018-03-13T06:34:36Z"
    generation: 7 => 3
    labels: {
      pod-template-hash: "2947959670" => "1264720760"
    },
    name: "nginx-6f8cf9fbc4" => "nginx-56b8c64cb4"
    resourceVersion: "263854" => "263858"
    selfLink:
      59,14 inx-6f8cf9fbc56b8c64cb4
 
    uid: "20c50866-1cc4-11e8-9137-080027cfc4d2" => "9966f685-2688-11e8-adbb-080027cfc4d2"
  },
  spec: {
    replicas: 0 => 3
    selector: {
      matchLabels: {
        pod-template-hash: "2947959670" => "1264720760"
      }
    },
    template: {
      metadata: {
        labels: {
          pod-template-hash: "2947959670" => "1264720760"
        }
      },
      spec: {
        containers: [
          0: {
            image: "nginx:1.7.9" => "nginx:1.9.1"
          }
        ]
      }
    }
  },
  status: {
    availableReplicas: 3
    fullyLabeledReplicas: 3
    observedGeneration: 7 => 3
    readyReplicas: 3
    replicas: 0 => 3
  }
}

List all Namespaces with no hard memory quota specified

Retrieve all Kubernetes Namespaces. Filter this down to a set of namespaces for which there is either (1) no ResourceQuota governing resource use of that Namespace; or (2) a ResourceQuota that does not specify a hard memory limit.

Query
import {Client} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const noQuotas = c.core.v1.Namespace
  .list()
  .flatMap(ns =>
    c.core.v1.ResourceQuota
      .list(ns.metadata.name)
      // Retrieve only ResourceQuotas that (1) apply to this namespace, and (2)
      // specify hard limits on memory.
      .filter(rq => rq.spec.hard["limits.memory"] != null)
      .toArray()
      .flatMap(rqs => rqs.length == 0 ? [ns] : []))

// Print.
noQuotas.forEach(ns => console.log(ns.metadata.name))
Output
kube-system
default
kube-public

List Pods and their ServiceAccount (possibly a unique user) by Secrets they use

Obtain all Secrets. For each of these Secrets, obtain all Pods that use them.

Here we print (1) the name of the Secret, (2) the list of Pods that use it, and (3) the ServiceAccount that the Pod runs as (oftentimes this is allocated to a single user).

Query
import {Client, query} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const podsByClaim = c.core.v1.Secret
  .list()
  .flatMap(secret =>
    c.core.v1.Pod
      .list()
      .filter(pod =>
        pod.spec
          .volumes
          .filter(vol =>
            vol.secret &&
            vol.secret.secretName == secret.metadata.name)
          .length > 0)
      .toArray()
      .map(pods => {return {secret: secret, pods: pods}}));

// Print.
podsByClaim.forEach(({secret, pods}) => {
  console.log(secret.metadata.name);
  pods.forEach(pod => console.log(`  ${pod.spec.serviceAccountName} ${pod.metadata.namespace}/${pod.metadata.name}`));
});
Output
kubernetes-dashboard-key-holder
default-token-vq5hb
  default kube-system/kube-dns-54cccfbdf8-hwgh8
  default kube-system/kubernetes-dashboard-77d8b98585-gzjgb
  default kube-system/storage-provisioner
default-token-j2bmb
  alex default/mysql-5-66f5b49b8f-5r48g
  alex default/mysql-8-7d4f8d46d7-hrktb
  alex default/mysql-859645bdb9-w29z7
  alex default/nginx-56b8c64cb4-lcjv2
  alex default/nginx-56b8c64cb4-n6prt
  alex default/nginx-56b8c64cb4-v2qj2
  alex default/nginx2-687c5bbccd-dmccm
  alex default/nginx2-687c5bbccd-hrqdl
  alex default/nginx2-687c5bbccd-rzjl5
  alex default/task-pv-pod
default-token-n9vxk

List Pods grouped by PersistentVolumes they use

Obtain all "Bound" PersistentVolumes (PVs). Then, obtain all Pods that use those PVs. Finally, print a small report listing the PV and all Pods that reference it.

Query
import {Client, query} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const podsByClaim = c.core.v1.PersistentVolume
  .list()
  .filter(pv => pv.status.phase == "Bound")
  .flatMap(pv =>
    c.core.v1.Pod
      .list()
      .filter(pod =>
        pod.spec
          .volumes
          .filter(vol =>
            vol.persistentVolumeClaim &&
            vol.persistentVolumeClaim.claimName == pv.spec.claimRef.name)
          .length > 0)
      .toArray()
      .map(pods => {return {pv: pv, pods: pods}}));

// Print.
podsByClaim.forEach(({pv, pods}) => {
  console.log(pv.metadata.name);
  pods.forEach(pod => console.log(`  ${pod.metadata.name}`));
});
Output
devHtmlData
  dev/cgiGateway-1
  dev/cgiGateway-2
prodHtmlData
  prod/cgiGateway-1
  prod/cgiGateway-2

Find all Pods scheduled on nodes with high memory pressure

Search for all Kubernetes Pods scheduled on nodes where status conditions report high memory pressure.

Query
import {Client, query} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const pressured = c.core.v1.Pod.list()
  // Index pods by node name.
  .groupBy(pod => pod.spec.nodeName)
  .flatMap(group => {
    // Join pods and nodes on node name; filter out everything where mem
    // pressure is not high.
    const nodes = c.core.v1.Node
      .list()
      .filter(node =>
        node.metadata.name == group.key &&
        node.status.conditions
          .filter(cond => cond.type === "MemoryPressure" && cond.status === "True")
          .length >= 1);

    // Return join of {node, pods}
    return group
      .toArray()
      .flatMap(pods => nodes.map(node => {return {node, pods}}))
  })

// Print report.
pressured.forEach(({node, pods}) => {
  console.log(node.metadata.name);
  pods.forEach(pod => console.log(`    ${pod.metadata.name}`));
});
Output
node3
    redis-6f8cf9fbc4-qnrhb
    redis2-687c5bbccd-rzjl5

Pods using the default ServiceAccount

Retrieve all Pods, filtering down to those that are using the "default" ServiceAccount.

Query
import {Client} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const noServiceAccounts = c.core.v1.Pod
  .list()
  .filter(pod =>
    pod.spec.serviceAccountName == null ||
    pod.spec.serviceAccountName == "default");

noServiceAccounts.forEach(pod => console.log(pod.metadata.name));
Output
mysql-5-66f5b49b8f-5r48g
mysql-8-7d4f8d46d7-hrktb
mysql-859645bdb9-w29z7
nginx-56b8c64cb4-lcjv2
nginx-56b8c64cb4-n6prt
nginx-56b8c64cb4-v2qj2
nginx2-687c5bbccd-dmccm
nginx2-687c5bbccd-hrqdl
nginx2-687c5bbccd-rzjl5
kube-addon-manager-minikube
kube-dns-54cccfbdf8-hwgh8
kubernetes-dashboard-77d8b98585-gzjgb
storage-provisioner

Find Services publicly exposed to the Internet

Kubernetes Services can expose a Pod to Internet traffic by setting the .spec.type to "LoadBalancer" (see documentation for ServiceSpec). Other Service types (such as "ClusterIP") are accessible only from inside the cluster.

This query will find all Services whose type is "LoadBalancer", so they can be audited for access and cost (since a service with .spec.type set to "LoadBalancer" will typically cause the underlying cloud provider to boot up a dedicated load balancer).

Query
import {Client} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const loadBalancers = c.core.v1.Service
  .list()
  // Services with `.spec.type` set to `"LoadBalancer"` are exposed publicly to
  // the Internet.
  .filter(svc => svc.spec.type == "LoadBalancer");

// Print.
loadBalancers.forEach(
  svc => console.log(`${svc.metadata.namespace}/${svc.metadata.name}`));
Output
default/someSvc
default/otherSvc
prod/apiSvc
dev/apiSvc

Find users and ServiceAccounts with access to Secrets

Inspect every Kubernetes RBAC Role for rules that apply to Secrets. Then find every RBAC RoleBinding that references one of these Roles, and list the users and ServiceAccounts it binds to.

NOTE: This query does not consider ClusterRoles, so cluster-level roles granting access to Secrets are not taken into account.

Query
import {Client, transform} from "carbonql";
const rbac = transform.rbacAuthorization

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const subjectsWithSecretAccess = c.rbacAuthorization.v1beta1.Role
  .list()
  // Find Roles that apply to `core.v1.Secret`. Note the empty string denotes
  // the `core` API group.
  .filter(role => rbac.v1beta1.role.appliesTo(role, "", "secrets"))
  .flatMap(role => {
    return c.rbacAuthorization.v1beta1.RoleBinding
      .list()
      // Find RoleBindings that apply to `role`. Project to a list of subjects
      // (e.g., Users) `role` is bound to.
      .filter(binding =>
        rbac.v1beta1.roleBinding.referencesRole(binding, role.metadata.name))
      .flatMap(binding => binding.subjects)
  });

// Print subjects.
subjectsWithSecretAccess.forEach(subj => console.log(`${subj.kind}\t${subj.name}`));
Output
User	jane
User	frank
User	susan
User	bill
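
The query above only covers namespaced Roles (per the NOTE). Below is a sketch of how it might be extended to cluster-scoped RBAC, assuming the client exposes ClusterRole and ClusterRoleBinding under rbacAuthorization.v1beta1 with the same .list() surface as the kinds used above; the rule and binding checks are written out by hand here, since transform helpers for the cluster-scoped kinds are not shown in these examples.

const clusterSubjectsWithSecretAccess = c.rbacAuthorization.v1beta1.ClusterRole
  .list()
  // Keep ClusterRoles whose rules mention `secrets` in the core ("") API group.
  .filter(role => (role.rules || []).some(rule =>
    (rule.apiGroups || []).includes("") &&
    (rule.resources || []).includes("secrets")))
  .flatMap(role =>
    c.rbacAuthorization.v1beta1.ClusterRoleBinding
      .list()
      // Keep bindings that reference this ClusterRole; project to their subjects.
      .filter(binding =>
        binding.roleRef.kind == "ClusterRole" &&
        binding.roleRef.name == role.metadata.name)
      .flatMap(binding => binding.subjects || []));

clusterSubjectsWithSecretAccess.forEach(subj =>
  console.log(`${subj.kind}\t${subj.name}`));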

Aggregate cluster-wide error and warning Events into a report

Search for all Kubernetes Events that are classified as "Warning" or "Error", and report them grouped by the type of Kubernetes object that caused them.

In this example, there are warnings being emitted from both Nodes and from Pods, so we group them together by their place of origin.

Query
import {Client, query} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const warningsAndErrors = c.core.v1.Event
  .list()
  // Get warning and error events, group by `kind` that caused them.
  .filter(e => e.type == "Warning" || e.type == "Error")
  .groupBy(e => e.involvedObject.kind);

// Print events.
warningsAndErrors.forEach(events => {
  console.log(`kind: ${events.key}`);
  events.forEach(e =>
    console.log(`  ${e.type}  (x${e.count})  ${e.involvedObject.name}\n  \t   Message: ${e.message}`));
});
Output
kind: Node
  Warning	(1946 times)	minikube	Failed to start node healthz on 0: listen tcp: address 0: missing port in address
kind: Pod
  Warning	(7157 times)	mysql-5-66f5b49b8f-5r48g	Back-off restarting failed container
  Warning	(7153 times)	mysql-8-7d4f8d46d7-hrktb	Back-off restarting failed container
  Warning	(6931 times)	mysql-859645bdb9-w29z7	Back-off restarting failed container

Ops

Operators need quick primitives that will help them get visibility into what is happening in their Kubernetes clusters, especially during a livesite incident.

This section contains a collection of useful tools that operators can use day-to-day to diagnose problems in their clusters.


Aggregate high-level report on resource consumption in a Namespace

For each Namespace, build a rough overview of resource consumption. Such a report could be arbitrarily detailed; here we simply count several critical resource types in each Namespace.

Query
import {Client, query} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const report = c.core.v1.Namespace
  .list()
  .flatMap(ns =>
    query.Observable.forkJoin(
      query.Observable.of(ns),
      c.core.v1.Pod.list(ns.metadata.name).toArray(),
      c.core.v1.Secret.list(ns.metadata.name).toArray(),
      c.core.v1.Service.list(ns.metadata.name).toArray(),
      c.core.v1.ConfigMap.list(ns.metadata.name).toArray(),
      c.core.v1.PersistentVolumeClaim.list(ns.metadata.name).toArray(),
    ));

// Print small report.
report.forEach(([ns, pods, secrets, services, configMaps, pvcs]) => {
  console.log(ns.metadata.name);
  console.log(`  Pods:\t\t${pods.length}`);
  console.log(`  Secrets:\t${secrets.length}`);
  console.log(`  Services:\t${services.length}`);
  console.log(`  ConfigMaps:\t${configMaps.length}`);
  console.log(`  PVCs:\t\t${pvcs.length}`);
});

Output
default
  Pods:		9
  Secrets:	1
  Services:	2
  ConfigMaps:	0
  PVCs:		0
kube-public
  Pods:		0
  Secrets:	1
  Services:	0
  ConfigMaps:	0
  PVCs:		0
kube-system
  Pods:		4
  Secrets:	2
  Services:	2
  ConfigMaps:	2
  PVCs:		0

Audit all Certificates, including status, user, and requested usages

Retrieve all CertificateSigningRequests in the cluster. Group them by status (i.e., "Pending", "Approved", or "Denied"), and for each request report (1) its status, (2) the groups the requesting user belongs to, and (3) the requested usages for the certificate.

Query
import {Client, transform} from "carbonql";
const certificates = transform.certificates;

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const csrs = c.certificates.v1beta1.CertificateSigningRequest
  .list()
  .map(csr => {
    // Get status of the CSR.
    return {
      status: certificates.v1beta1.certificateSigningRequest.getStatus(csr),
      request: csr,
    };
  })
  // Group CSRs by type (one of: `"Approved"`, `"Pending"`, or `"Denied"`).
  .groupBy(csr => csr.status.type);

csrs.forEach(group => {
  console.log(group.key);
  group.forEach(({request}) => {
    const usages = request.spec.usages.sort().join(", ");
    const groups = request.spec.groups.sort().join(", ");
    console.log(`  ${request.spec.username}\t[${usages}]\t[${groups}]`);
  });
});


Output
Denied
  minikube-user	[digital signature, key encipherment, server auth]	[system:authenticated, system:masters]
Pending
  minikube-user	[digital signature, key encipherment, server auth]	[system:authenticated, system:masters]
  minikube-user	[digital signature, key encipherment, server auth]	[system:authenticated, system:masters]

Distinct versions of mysql container in cluster

Search all running Kubernetes Pods for containers that have the string "mysql" in their image name. Report only distinct image names.

Query
import {Client, query} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const mySqlVersions = c.core.v1.Pod
  .list("default")
  // Obtain all container image names running in all pods.
  .flatMap(pod => pod.spec.containers)
  .map(container => container.image)
  // Keep only image names that include "mysql"; return distinct values.
  .filter(imageName => imageName.includes("mysql"))
  .distinct();

// Print the distinct container image names.
mySqlVersions.forEach(console.log);
Output
mysql:5.7
mysql:8.0.4
mysql

Find all Pod logs containing "ERROR:"

Retrieve all Pods in the "default" namespace, obtain their logs, and filter down to only the Pods whose logs contain the string "ERROR:". Return the logs grouped by Pod name.

Query
import {Client, query, transform} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const podLogs = c.core.v1.Pod
  .list("default")
  // Retrieve logs for all pods, filter for logs with `ERROR:`.
  .flatMap(pod =>
    transform.core.v1.pod
      .getLogs(c, pod)
      .filter(({logs}) => logs.includes("ERROR:"))
    )
  // Group logs by Pod name, returning only the `logs` member.
  .groupBy(
    ({pod}) => pod.metadata.name,
    ({logs}) => logs)

// Print the name of each Pod along with its logs.
podLogs.subscribe(logs => {
  console.log(logs.key);
  logs.forEach(console.log)
});
Output
nginx-6f8cf9fbc4-qnrhb
ERROR: could not connect to database.

nginx2-687c5bbccd-rzjl5
ERROR: 500

Diff last two rollouts of an application

Search for a Deployment named "nginx", and obtain the last 2 revisions in its rollout history. Then use the jsondiffpatch library to diff these two revisions.

NOTE: a history of rollouts is not retained by default, so you'll need to create the deployment with .spec.revisionHistoryLimit set to a number larger than 2. (See documentation for DeploymentSpec)

Query
import {Client, query, transform} from "carbonql";
const jsondiff = require("jsondiffpatch");

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const history = c.apps.v1beta1.Deployment
  .list()
  // Get last two rollouts in the history of the `nginx` deployment.
  .filter(d => d.metadata.name == "nginx")
  .flatMap(d =>
    transform.apps.v1beta1.deployment
      .getRevisionHistory(c, d)
      .takeLast(2)
      .toArray());

// Diff these rollouts, print.
history.forEach(rollout => {
  jsondiff.console.log(jsondiff.diff(rollout[0], rollout[1]))
});
Output
{
  metadata: {
    annotations: {
      deployment.kubernetes.io/revision: "1" => "2"
    },
    creationTimestamp: "2018-02-28T20:15:32Z" => "2018-03-13T06:34:36Z"
    generation: 7 => 3
    labels: {
      pod-template-hash: "2947959670" => "1264720760"
    },
    name: "nginx-6f8cf9fbc4" => "nginx-56b8c64cb4"
    resourceVersion: "263854" => "263858"
    selfLink:
      59,14 inx-6f8cf9fbc56b8c64cb4
 
    uid: "20c50866-1cc4-11e8-9137-080027cfc4d2" => "9966f685-2688-11e8-adbb-080027cfc4d2"
  },
  spec: {
    replicas: 0 => 3
    selector: {
      matchLabels: {
        pod-template-hash: "2947959670" => "1264720760"
      }
    },
    template: {
      metadata: {
        labels: {
          pod-template-hash: "2947959670" => "1264720760"
        }
      },
      spec: {
        containers: [
          0: {
            image: "nginx:1.7.9" => "nginx:1.9.1"
          }
        ]
      }
    }
  },
  status: {
    availableReplicas: 3
    fullyLabeledReplicas: 3
    observedGeneration: 7 => 3
    readyReplicas: 3
    replicas: 0 => 3
  }
}

List all Namespaces with no hard memory quota specified

Retrieve all Kubernetes Namespaces. Filter this down to a set of namespaces for which there is either (1) no ResourceQuota governing resource use of that Namespace; or (2) a ResourceQuota that does not specify a hard memory limit.

Query
import {Client} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const noQuotas = c.core.v1.Namespace
  .list()
  .flatMap(ns =>
    c.core.v1.ResourceQuota
      .list(ns.metadata.name)
      // Retrieve only ResourceQuotas that (1) apply to this namespace, and (2)
      // specify hard limits on memory.
      .filter(rq => rq.spec.hard["limits.memory"] != null)
      .toArray()
      .flatMap(rqs => rqs.length == 0 ? [ns] : []))

// Print.
noQuotas.forEach(ns => console.log(ns.metadata.name))
Output
kube-system
default
kube-public

List Pods and their ServiceAccount (possibly a unique user) by Secrets they use

Obtain all Secrets. For each of these Secrets, obtain all Pods that use them.

Here we print (1) the name of the Secret, (2) the list of Pods that use it, and (3) the ServiceAccount that the Pod runs as (oftentimes this is allocated to a single user).

Query
import {Client, query} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const podsByClaim = c.core.v1.Secret
  .list()
  .flatMap(secret =>
    c.core.v1.Pod
      .list()
      .filter(pod =>
        pod.spec
          .volumes
          .filter(vol =>
            vol.secret &&
            vol.secret.secretName == secret.metadata.name)
          .length > 0)
      .toArray()
      .map(pods => {return {secret: secret, pods: pods}}));

// Print.
podsByClaim.forEach(({secret, pods}) => {
  console.log(secret.metadata.name);
  pods.forEach(pod => console.log(`  ${pod.spec.serviceAccountName} ${pod.metadata.namespace}/${pod.metadata.name}`));
});
Output
kubernetes-dashboard-key-holder
default-token-vq5hb
  default kube-system/kube-dns-54cccfbdf8-hwgh8
  default kube-system/kubernetes-dashboard-77d8b98585-gzjgb
  default kube-system/storage-provisioner
default-token-j2bmb
  alex default/mysql-5-66f5b49b8f-5r48g
  alex default/mysql-8-7d4f8d46d7-hrktb
  alex default/mysql-859645bdb9-w29z7
  alex default/nginx-56b8c64cb4-lcjv2
  alex default/nginx-56b8c64cb4-n6prt
  alex default/nginx-56b8c64cb4-v2qj2
  alex default/nginx2-687c5bbccd-dmccm
  alex default/nginx2-687c5bbccd-hrqdl
  alex default/nginx2-687c5bbccd-rzjl5
  alex default/task-pv-pod
default-token-n9vxk

List Pods grouped by PersistentVolumes they use

Obtain all "Bound" PersistentVolumes (PVs). Then, obtain all Pods that use those PVs. Finally, print a small report listing the PV and all Pods that reference it.

Query
import {Client, query} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const podsByClaim = c.core.v1.PersistentVolume
  .list()
  .filter(pv => pv.status.phase == "Bound")
  .flatMap(pv =>
    c.core.v1.Pod
      .list()
      .filter(pod =>
        pod.spec
          .volumes
          .filter(vol =>
            vol.persistentVolumeClaim &&
            vol.persistentVolumeClaim.claimName == pv.spec.claimRef.name)
          .length > 0)
      .toArray()
      .map(pods => {return {pv: pv, pods: pods}}));

// Print.
podsByClaim.forEach(({pv, pods}) => {
  console.log(pv.metadata.name);
  pods.forEach(pod => console.log(`  ${pod.metadata.name}`));
});
Output
devHtmlData
  dev/cgiGateway-1
  dev/cgiGateway-2
prodHtmlData
  prod/cgiGateway-1
  prod/cgiGateway-2

Find all Pods scheduled on nodes with high memory pressure

Search for all Kubernetes Pods scheduled on nodes where status conditions report high memory pressure.

Query
import {Client, query} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const pressured = c.core.v1.Pod.list()
  // Index pods by node name.
  .groupBy(pod => pod.spec.nodeName)
  .flatMap(group => {
    // Join pods and nodes on node name; filter out everything where mem
    // pressure is not high.
    const nodes = c.core.v1.Node
      .list()
      .filter(node =>
        node.metadata.name == group.key &&
        node.status.conditions
          .filter(cond => cond.type === "MemoryPressure" && cond.status === "True")
          .length >= 1);

    // Return join of {node, pods}
    return group
      .toArray()
      .flatMap(pods => nodes.map(node => {return {node, pods}}))
  })

// Print report.
pressured.forEach(({node, pods}) => {
  console.log(node.metadata.name);
  pods.forEach(pod => console.log(`    ${pod.metadata.name}`));
});
Output
node3
    redis-6f8cf9fbc4-qnrhb
    redis2-687c5bbccd-rzjl5
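
The same join generalizes to other node conditions (for example "DiskPressure"). Here is a sketch, assuming the same client surface as above, that takes the condition type as a parameter:

const podsOnNodesWith = (conditionType: string) =>
  c.core.v1.Pod
    .list()
    .groupBy(pod => pod.spec.nodeName)
    .flatMap(group =>
      group.toArray().flatMap(pods =>
        c.core.v1.Node
          .list()
          // Keep the node backing this group only if the given condition is "True".
          .filter(node =>
            node.metadata.name == group.key &&
            node.status.conditions.some(cond =>
              cond.type === conditionType && cond.status === "True"))
          .map(node => ({node, pods}))));

// For example, report Pods scheduled on nodes under disk pressure.
podsOnNodesWith("DiskPressure").forEach(({node, pods}) => {
  console.log(node.metadata.name);
  pods.forEach(pod => console.log(`    ${pod.metadata.name}`));
});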

Pods using the default ServiceAccount

Retrieve all Pods, filtering down to those that are using the "default" ServiceAccount.

Query
import {Client} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const noServiceAccounts = c.core.v1.Pod
  .list()
  .filter(pod =>
    pod.spec.serviceAccountName == null ||
    pod.spec.serviceAccountName == "default");

noServiceAccounts.forEach(pod => console.log(pod.metadata.name));
Output
mysql-5-66f5b49b8f-5r48g
mysql-8-7d4f8d46d7-hrktb
mysql-859645bdb9-w29z7
nginx-56b8c64cb4-lcjv2
nginx-56b8c64cb4-n6prt
nginx-56b8c64cb4-v2qj2
nginx2-687c5bbccd-dmccm
nginx2-687c5bbccd-hrqdl
nginx2-687c5bbccd-rzjl5
kube-addon-manager-minikube
kube-dns-54cccfbdf8-hwgh8
kubernetes-dashboard-77d8b98585-gzjgb
storage-provisioner

Find Services publicly exposed to the Internet

Kubernetes Services can expose a Pod to Internet traffic by setting the .spec.type to "LoadBalancer" (see documentation for ServiceSpec). Other Service types (such as "ClusterIP") are accessible only from inside the cluster.

This query will find all Services whose type is "LoadBalancer", so they can be audited for access and cost (since a service with .spec.type set to "LoadBalancer" will typically cause the underlying cloud provider to boot up a dedicated load balancer).

Query
import {Client} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const loadBalancers = c.core.v1.Service
  .list()
  // Services with `.spec.type` set to `"LoadBalancer"` are exposed publicly to
  // the Internet.
  .filter(svc => svc.spec.type == "LoadBalancer");

// Print.
loadBalancers.forEach(
  svc => console.log(`${svc.metadata.namespace}/${svc.metadata.name}`));
Output
default/someSvc
default/otherSvc
prod/apiSvc
dev/apiSvc

Find users and ServiceAccounts with access to Secrets

Inspect every Kubernetes RBAC Role for rules that apply to Secrets. Then find every RBAC RoleBinding that references one of these Roles, and list the users and ServiceAccounts it binds to.

NOTE: This query does not consider ClusterRoles, so cluster-level roles granting access to Secrets are not taken into account.

Query
import {Client, transform} from "carbonql";
const rbac = transform.rbacAuthorization

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const subjectsWithSecretAccess = c.rbacAuthorization.v1beta1.Role
  .list()
  // Find Roles that apply to `core.v1.Secret`. Note the empty string denotes
  // the `core` API group.
  .filter(role => rbac.v1beta1.role.appliesTo(role, "", "secrets"))
  .flatMap(role => {
    return c.rbacAuthorization.v1beta1.RoleBinding
      .list()
      // Find RoleBindings that apply to `role`. Project to a list of subjects
      // (e.g., Users) `role` is bound to.
      .filter(binding =>
        rbac.v1beta1.roleBinding.referencesRole(binding, role.metadata.name))
      .flatMap(binding => binding.subjects)
  });

// Print subjects.
subjectsWithSecretAccess.forEach(subj => console.log(`${subj.kind}\t${subj.name}`));
Output
User	jane
User	frank
User	susan
User	bill

Aggregate cluster-wide error and warning Events into a report

Search for all Kubernetes Events that are classified as "Warning" or "Error", and report them grouped by the type of Kubernetes object that caused them.

In this example, there are warnings being emitted from both Nodes and from Pods, so we group them together by their place of origin.

Query
import {Client, query} from "carbonql";

const c = Client.fromFile(<string>process.env.KUBECONFIG);
const warningsAndErrors = c.core.v1.Event
  .list()
  // Get warning and error events, group by `kind` that caused them.
  .filter(e => e.type == "Warning" || e.type == "Error")
  .groupBy(e => e.involvedObject.kind);

// Print events.
warningsAndErrors.forEach(events => {
  console.log(`kind: ${events.key}`);
  events.forEach(e =>
    console.log(`  ${e.type}  (x${e.count})  ${e.involvedObject.name}\n  \t   Message: ${e.message}`));
});
Output
kind: Node
  Warning	(1946 times)	minikube	Failed to start node healthz on 0: listen tcp: address 0: missing port in address
kind: Pod
  Warning	(7157 times)	mysql-5-66f5b49b8f-5r48g	Back-off restarting failed container
  Warning	(7153 times)	mysql-8-7d4f8d46d7-hrktb	Back-off restarting failed container
  Warning	(6931 times)	mysql-859645bdb9-w29z7	Back-off restarting failed container
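
To get a quick total per kind rather than the full listing, the grouped events can be summed over their count field. A sketch, assuming an RxJS-style reduce operator is available alongside the operators already used here:

const warningTotals = c.core.v1.Event
  .list()
  .filter(e => e.type == "Warning" || e.type == "Error")
  .groupBy(e => e.involvedObject.kind)
  .flatMap(group => group
    // Sum the `count` field (number of occurrences) across events of this kind.
    .reduce((total, e) => total + e.count, 0)
    .map(total => ({kind: group.key, total})));

warningTotals.forEach(({kind, total}) => console.log(`${kind}\t${total}`));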