Overview
The purpose of this page is to capture the current requirements for observability of the EMCO services (https://gitlab.com/groups/project-emco/-/epics/7).
Front-ending the services with Istio provides a useful set of metrics and traces, and adding the collectors provided by the Prometheus client library to each service expands that with other fundamental metrics. The open question is what additional metrics and tracing would be useful to EMCO operators.
Metrics
The following items are based on Prometheus recommendations for instrumentation.
Queries, errors, and latency
Both client- and server-side metrics are provided by Istio (https://istio.io/latest/docs/reference/config/metrics/).
Istio metrics can be customized to include other attributes from Envoy, such as the subject field of the peer certificate (https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/advanced/attributes).
Example PromQL
Service | Type | PromQL | Notes |
---|---|---|---|
HTTP/gRPC* | Queries | sum(irate(istio_requests_total{reporter="destination",destination_workload=~"services-orchestrator"}[5m])) | inbound |
 | | sum(irate(istio_requests_total{reporter="source",source_workload="services-orchestrator"}[5m])) by (destination_workload) | outbound |
 | Errors | sum(irate(istio_requests_total{reporter="destination",destination_workload=~"services-orchestrator",response_code!~"5.*"}[5m])) / sum(irate(istio_requests_total{reporter="destination",destination_workload=~"services-orchestrator"}[5m])) | inbound (non-5xx success ratio) |
 | | sum(irate(istio_requests_total{reporter="source",source_workload=~"services-orchestrator",response_code!~"5.*"}[5m])) by (destination_workload) / sum(irate(istio_requests_total{reporter="source",source_workload=~"services-orchestrator"}[5m])) by (destination_workload) | outbound (non-5xx success ratio) |
 | Latency | histogram_quantile(0.90, sum(irate(istio_request_duration_milliseconds_bucket{reporter="destination",destination_workload="services-orchestrator"}[1m])) by (le)) / 1000 | inbound P90, in seconds |
 | Saturation | | |

*The request_protocol label can be used to distinguish between HTTP and gRPC.
Queries, errors, and latencies of resources external to the process (network, disk, IPC, etc.)
The Prometheus Go client library provides built-in collectors for various process and Go runtime metrics: https://pkg.go.dev/github.com/prometheus/client_golang@v1.12.2/prometheus/collectors. A list of metrics provided by cAdvisor is at https://github.com/google/cadvisor/blob/master/docs/storage/prometheus.md. Additional Kubernetes-specific metrics can be enabled with the https://github.com/kubernetes/kube-state-metrics project.
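As a minimal sketch (assuming a plain net/http server on an arbitrary port; the actual wiring into each EMCO service may differ), the built-in collectors can be registered and exposed like this:

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/collectors"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// Use a dedicated registry so only the chosen collectors are exposed.
	reg := prometheus.NewRegistry()

	// Built-in process and Go runtime collectors from client_golang.
	reg.MustRegister(
		collectors.NewProcessCollector(collectors.ProcessCollectorOpts{}),
		collectors.NewGoCollector(),
	)

	// Expose /metrics for Prometheus (or the Istio sidecar) to scrape.
	// Port 2112 is a placeholder, not an EMCO convention.
	http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
	http.ListenAndServe(":2112", nil)
}
```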
Example PromQL
Note: some of these require that kube-state-metrics is also deployed.
Pod Resource | Type | PromQL |
---|---|---|
CPU | Utilization | sum(rate(container_cpu_usage_seconds_total{namespace="emco"}[5m])) by (pod) |
 | Saturation | sum(rate(container_cpu_cfs_throttled_seconds_total{namespace="emco"}[5m])) by (pod) |
 | Errors | |
Memory | Utilization | sum(container_memory_working_set_bytes{namespace="emco"}) by (pod) |
 | Saturation | sum(container_memory_working_set_bytes{namespace="emco"}) by (pod) / sum(kube_pod_container_resource_limits{namespace="emco",resource="memory",unit="byte"}) by (pod) |
 | Errors | |
Disk | Utilization | sum(irate(container_fs_reads_bytes_total{namespace="emco"}[5m])) by (pod, device) |
 | | sum(irate(container_fs_writes_bytes_total{namespace="emco"}[5m])) by (pod) |
 | Saturation | |
 | Errors | |
Network | Utilization | sum(rate(container_network_receive_bytes_total{namespace="emco"}[1m])) by (pod) |
 | | sum(rate(container_network_transmit_bytes_total{namespace="emco"}[1m])) by (pod) |
 | Saturation | |
 | Errors | sum(container_network_receive_errors_total{namespace="emco"}) by (pod) |
 | | sum(container_network_transmit_errors_total{namespace="emco"}) by (pod) |
Internal errors and latency
Internal errors should be counted. It is also desirable to count successes so that an error ratio can be calculated.
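One possible shape for this, sketched with client_golang (the metric, label, and function names below are placeholders rather than an agreed EMCO convention), is a single counter labeled by outcome so the error ratio can be derived in PromQL:

```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

// Counter of internal operations, labeled by operation name and outcome.
// Counting successes as well as errors allows an error ratio to be computed:
//   sum(rate(emco_internal_operations_total{status="error"}[5m]))
//     / sum(rate(emco_internal_operations_total[5m]))
var internalOperations = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "emco_internal_operations_total",
		Help: "Total internal operations by outcome.",
	},
	[]string{"operation", "status"}, // status is "success" or "error"
)

func init() {
	prometheus.MustRegister(internalOperations)
}

// ObserveOperation records the outcome of one internal operation.
func ObserveOperation(operation string, err error) {
	status := "success"
	if err != nil {
		status = "error"
	}
	internalOperations.WithLabelValues(operation, status).Inc()
}
```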
Totals of info/error/warning logs
Unsure if this is a useful metric.
Any general statistics
This bucket includes EMCO-specific information such as the number of projects, errors and latency of deployment intent group instantiation, etc. Also consider any cache or thread pool metrics. Feedback is welcome here on any general metrics of interest to EMCO operators.
Preliminary guidelines:
- Distinguish between resources and actions.
- Action metrics will record requests, errors, and latency similar to general network requests.
- Resource metrics will record creation, deletion, and possibly modification.
- Metrics will be labeled with project, composite-app, deployment intent group, etc.
For rsync specifically, measure health/reachability of target clusters.
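A sketch of what that could look like (the metric name and label set are assumptions; rsync would update the gauge from whatever cluster health check it already performs):

```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

// Gauge reporting whether rsync can currently reach each target cluster
// (1 = reachable, 0 = unreachable). Name and labels are placeholders.
var clusterReachable = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "emco_rsync_cluster_reachable",
		Help: "Whether the target cluster is reachable from rsync (1) or not (0).",
	},
	[]string{"cluster_provider", "cluster"},
)

func init() {
	prometheus.MustRegister(clusterReachable)
}

// SetClusterReachable would be called from rsync's periodic health check.
func SetClusterReachable(provider, cluster string, reachable bool) {
	value := 0.0
	if reachable {
		value = 1.0
	}
	clusterReachable.WithLabelValues(provider, cluster).Set(value)
}
```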
Also, keep in mind this cautionary note from the Prometheus project:
CAUTION: Remember that every unique combination of key-value label pairs represents a new time series, which can dramatically increase the amount of data stored. Do not use labels to store dimensions with high cardinality (many different label values), such as user IDs, email addresses, or other unbounded sets of values.
Unbounded sets of values in the EMCO APIs would include values such as project names, intent names, etc.
Preliminary metrics
This section applies the guidelines above to the orchestrator service.
The actions of a service can be identified from the gRPC requests and HTTP lifecycle requests:
Service | Action |
---|---|
orchestrator | approve |
 | instantiate |
 | migrate |
 | rollback |
 | stop |
 | terminate |
 | update |
 | StatusRegister |
 | StatusDeregister |
The requests, errors, and latency can be modeled after Istio's istio_requests_total and istio_request_duration_milliseconds, with an additional action name label.
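A sketch of one way these action metrics could be defined with client_golang (the metric names, buckets, and labels mirror the Istio metrics but are placeholders, not a settled convention):

```go
package metrics

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var (
	// Counter of lifecycle actions, analogous to istio_requests_total,
	// with an extra "action" label (approve, instantiate, terminate, ...).
	actionRequests = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "emco_orchestrator_action_requests_total",
			Help: "Total lifecycle action requests by action and response code.",
		},
		[]string{"action", "response_code"},
	)

	// Histogram of action duration, analogous to istio_request_duration_milliseconds.
	actionDuration = prometheus.NewHistogramVec(
		prometheus.HistogramOpts{
			Name:    "emco_orchestrator_action_duration_milliseconds",
			Help:    "Duration of lifecycle actions in milliseconds.",
			Buckets: prometheus.ExponentialBuckets(5, 2, 12), // 5 ms .. ~10 s
		},
		[]string{"action"},
	)
)

func init() {
	prometheus.MustRegister(actionRequests, actionDuration)
}

// ObserveAction records one completed lifecycle action.
func ObserveAction(action, responseCode string, elapsed time.Duration) {
	actionRequests.WithLabelValues(action, responseCode).Inc()
	actionDuration.WithLabelValues(action).Observe(float64(elapsed.Milliseconds()))
}
```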
The resources of a service can be identified from the HTTP resources. The initial labels can be the URL parameters.
Service | Resource | Labels |
---|---|---|
orchestrator | controller | name |
 | project | name |
 | compositeApp | version, name, project |
 | app | name, composite_app_version, composite_app, project |
 | dependency | name, app, composite_app_version, composite_app, project |
 | compositeProfile | name, composite_app_version, composite_app, project |
 | appProfile | name, composite_profile, composite_app_version, composite_app, project |
 | deploymentIntentGroup | name, composite_app_version, composite_app, project |
 | genericPlacementIntent | name, deployment_intent_group, composite_app_version, composite_app, project |
 | genericAppPlacementIntent | name, generic_placement_intent, deployment_intent_group, composite_app_version, composite_app, project |
 | groupIntent | name, deployment_intent_group, composite_app_version, composite_app_name, project |
The metrics for these resources should capture the state of the resource, i.e. metrics for creation, deletion, etc. (emco_controller_creation_timestamp, emco_controller_deletion_timestamp, etc.) as described in the guidelines. This approach is suggested as it is unclear how to apply metrics capturing resource utilization to these resources.
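A sketch of the timestamp gauges for one resource (controller), using the metric names mentioned above; the label set and helper functions are illustrative only:

```go
package metrics

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// Gauges recording the Unix time at which a controller resource was created
// or deleted, in the style of kube_pod_created from kube-state-metrics.
var (
	controllerCreated = prometheus.NewGaugeVec(
		prometheus.GaugeOpts{
			Name: "emco_controller_creation_timestamp",
			Help: "Unix creation time of the controller resource.",
		},
		[]string{"name"},
	)
	controllerDeleted = prometheus.NewGaugeVec(
		prometheus.GaugeOpts{
			Name: "emco_controller_deletion_timestamp",
			Help: "Unix deletion time of the controller resource.",
		},
		[]string{"name"},
	)
)

func init() {
	prometheus.MustRegister(controllerCreated, controllerDeleted)
}

// RecordControllerCreated is called from the handler that creates the resource.
func RecordControllerCreated(name string) {
	controllerCreated.WithLabelValues(name).Set(float64(time.Now().Unix()))
}

// RecordControllerDeleted is called from the handler that deletes the resource.
func RecordControllerDeleted(name string) {
	controllerDeleted.WithLabelValues(name).Set(float64(time.Now().Unix()))
}
```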
The status of a deployment intent group deserves special consideration. The initial idea would be to add metrics describing the contents of the status. This would enable alerting on failed resources for example.
Metric | Labels |
---|---|
deployment_intent_group_resource | name, cluster, cluster_provider, app, deployment_intent_group, composite_profile, composite_app_version, composite_app, project |
It's not clear to me yet whether the rsyncStatus value should be part of the metric name (deployment_intent_group_resource_applied) or a label. Following the kube-state-metrics model would make it part of the metric name. Further complicating the question is the readyStatus field of the cluster.
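To make the trade-off concrete, this is roughly what the label variant could look like (sketch only; the exact status values and whether readyStatus becomes another label or a separate metric are still open):

```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

// Label variant: one series per resource, with the rsync status carried in a
// label. The kube-state-metrics style alternative would instead emit
// deployment_intent_group_resource_applied (etc.) with a 0/1 value per state.
var digResource = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "emco_deployment_intent_group_resource",
		Help: "Resources of a deployment intent group, by rsync status.",
	},
	[]string{
		"name", "cluster", "cluster_provider", "app", "deployment_intent_group",
		"composite_profile", "composite_app_version", "composite_app", "project",
		"status", // rsync status value, e.g. Applied
	},
)

func init() {
	prometheus.MustRegister(digResource)
}
```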
Tracing
Istio provides a starting point for tracing by creating a trace span for each request in the sidecars. But this is insufficient on its own: unless the application propagates the trace context, the outgoing requests made while handling an inbound request are not linked to the same trace. What we'd like to see is a complete trace of, for example, an instantiate request to the orchestrator that includes the requests made to any controllers, etc.
To do this, the tracing headers from the inbound request must be passed through to any outbound requests. This will be done with the https://opentelemetry.io/ Go libraries.
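A minimal sketch of what the propagation could look like for an HTTP-handling service, using the otelhttp instrumentation (the handler path, downstream URL, port, and exporter setup are all placeholders or omitted):

```go
package main

import (
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/propagation"
)

func main() {
	// Use W3C Trace Context headers for propagation (a B3 propagator could
	// be substituted to match the Istio tracer configuration).
	otel.SetTextMapPropagator(propagation.TraceContext{})

	// Outbound client: injects the trace context of the current request
	// into calls made to other EMCO services and controllers.
	client := &http.Client{Transport: otelhttp.NewTransport(http.DefaultTransport)}

	mux := http.NewServeMux()
	mux.HandleFunc("/example", func(w http.ResponseWriter, r *http.Request) {
		// r.Context() carries the span extracted from the inbound headers;
		// passing it to outbound requests keeps everything in one trace.
		req, err := http.NewRequestWithContext(r.Context(), http.MethodGet,
			"http://controller.example/doit", nil)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		resp, err := client.Do(req)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		resp.Body.Close()
		w.WriteHeader(http.StatusOK)
	})

	// Inbound server: extracts trace headers and starts a span per request.
	http.ListenAndServe(":8080", otelhttp.NewHandler(mux, "orchestrator"))
}
```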
Logging
Each log message must contain a timestamp and identifying information describing the resource, such as the project, composite application, etc. in the case of orchestration.
The priority is placed on error logs; logging other significant actions is secondary.
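A sketch of the intended shape of an error log entry, using logrus purely as an example (whichever logging package the services already use would carry the same fields):

```go
package main

import log "github.com/sirupsen/logrus"

func main() {
	// The JSON formatter includes a timestamp on every entry by default.
	log.SetFormatter(&log.JSONFormatter{})

	// Identifying fields describing the resource being orchestrated;
	// the values here are placeholders.
	log.WithFields(log.Fields{
		"project":                 "proj1",
		"composite_app":           "compositeapp1",
		"composite_app_version":   "v1",
		"deployment_intent_group": "dig1",
	}).Error("instantiation failed")
}
```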