
Observability


Overview

The purpose of this page at this time is to capture requirements related to observability of the EMCO services (https://gitlab.com/groups/project-emco/-/epics/7).

Front-ending the services with Istio provides a useful set of metrics and traces, and adding the collectors provided by the Prometheus client library to each service expands that with other fundamental metrics. The open question is what additional metrics and traces will be useful to EMCO operators.

Metrics

The following items are based on Prometheus recommendations for instrumentation.

Queries, errors, and latency

Both client-side and server-side metrics are provided by Istio: https://istio.io/latest/docs/reference/config/metrics/

Istio metrics can be customized to include other attributes from Envoy, such as the subject field of the peer certificate: https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/advanced/attributes
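As a sketch of how that customization could look (assuming the Istio Telemetry API with the prometheus provider is enabled; the resource name and the peer_subject label are hypothetical), a tagOverrides entry can attach the Envoy peer-certificate subject attribute to istio_requests_total:

```yaml
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: custom-request-tags   # hypothetical name
  namespace: istio-system
spec:
  metrics:
  - providers:
    - name: prometheus
    overrides:
    - match:
        metric: REQUEST_COUNT   # i.e. istio_requests_total
      tagOverrides:
        peer_subject:           # hypothetical label name
          value: connection.subject_peer_certificate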

Example PromQL

HTTP/gRPC (the request_protocol label can be used to distinguish between HTTP and gRPC):

  • Queries, inbound: sum(irate(istio_requests_total{reporter="destination",destination_workload=~"services-orchestrator"}[5m]))
  • Queries, outbound (by destination): sum(irate(istio_requests_total{reporter="source",source_workload="services-orchestrator"}[5m])) by (destination_workload)
  • Errors, inbound (expressed as the non-5xx success ratio): sum(irate(istio_requests_total{reporter="destination",destination_workload=~"services-orchestrator",response_code!~"5.*"}[5m])) / sum(irate(istio_requests_total{reporter="destination",destination_workload=~"services-orchestrator"}[5m]))
  • Errors, outbound (non-5xx success ratio, by destination): sum(irate(istio_requests_total{reporter="source",source_workload=~"services-orchestrator",response_code!~"5.*"}[5m])) by (destination_workload) / sum(irate(istio_requests_total{reporter="source",source_workload=~"services-orchestrator"}[5m])) by (destination_workload)
  • Latency, P90 in seconds: histogram_quantile(0.90, sum(irate(istio_request_duration_milliseconds_bucket{reporter="destination",destination_workload="services-orchestrator"}[1m])) by (le)) / 1000
Saturation

Queries, errors, and latencies of resources external to process (network, disk, IPC, etc.)

The Prometheus Go client library provides builtin collectors for various process and Go runtime metrics: https://pkg.go.dev/github.com/prometheus/client_golang@v1.12.2/prometheus/collectors. A list of metrics provided by cAdvisor is at https://github.com/google/cadvisor/blob/master/docs/storage/prometheus.md. Additional Kubernetes-specific metrics can be enabled with the https://github.com/kubernetes/kube-state-metrics project.

Example PromQL

Note: some of these require that kube-state-metrics is also deployed.

CPU
  • Utilization: sum(rate(container_cpu_usage_seconds_total{namespace="emco"}[5m])) by (pod)
  • Saturation: sum(rate(container_cpu_cfs_throttled_seconds_total{namespace="emco"}[5m])) by (pod)
  • Errors: —

Memory
  • Utilization: sum(container_memory_working_set_bytes{namespace="emco"}) by (pod)
  • Saturation: sum(container_memory_working_set_bytes{namespace="emco"}) by (pod) / sum(kube_pod_container_resource_limits{namespace="emco",resource="memory",unit="byte"}) by (pod)
  • Errors: —

Disk
  • Utilization (reads): sum(irate(container_fs_reads_bytes_total{namespace="emco"}[5m])) by (pod, device)
  • Utilization (writes): sum(irate(container_fs_writes_bytes_total{namespace="emco"}[5m])) by (pod)
  • Saturation: —
  • Errors: —

Network
  • Utilization (receive): sum(rate(container_network_receive_bytes_total{namespace="emco"}[1m])) by (pod)
  • Utilization (transmit): sum(rate(container_network_transmit_bytes_total{namespace="emco"}[1m])) by (pod)
  • Saturation: —
  • Errors (receive): sum(container_network_receive_errors_total{namespace="emco"}) by (pod)
  • Errors (transmit): sum(container_network_transmit_errors_total{namespace="emco"}) by (pod)

Internal errors and latency

Internal errors should be counted. It is also desirable to count successes so that an error ratio can be calculated.

Totals of info/error/warning logs

Unsure if this is a useful metric.

Any general statistics

This bucket includes EMCO specific information such as number of projects, errors and latency of deployment intent group instantiation, etc. Also consider any cache or threadpool metrics. Looking for feedback here on any general metrics of interest to EMCO operators.

Preliminary guidelines:

  • Distinguish between resources and actions. 
  • Action metrics will record requests, errors, and latency similar to general network requests.
  • Resource metrics will record creation, deletion, and possibly modification.
  • Metrics will be labeled with project, composite-app, deployment intent group, etc.

For rsync specifically, measure health/reachability of target clusters.

Also, keep in mind this cautionary note from the Prometheus project:

CAUTION: Remember that every unique combination of key-value label pairs represents a new time series, which can dramatically increase the amount of data stored. Do not use labels to store dimensions with high cardinality (many different label values), such as user IDs, email addresses, or other unbounded sets of values.

Unbounded sets of values in the EMCO APIs would include values such as project names, intent names, etc.

Preliminary metrics

This section contains some of the considerations of the guidelines above applied to the orchestrator service.

The actions of a service can be identified from the gRPC requests and HTTP lifecycle requests:

Service: orchestrator

  • approve
  • instantiate
  • migrate
  • rollback
  • stop
  • terminate
  • update
  • StatusRegister
  • StatusDeregister

The requests, errors, and latency can be modeled after Istio's istio_requests_total and istio_request_duration_milliseconds, with an additional action name label.

The resources of a service can be identified from the HTTP resources.  The initial labels can be the URL parameters.

Service: orchestrator

  • controller: name
  • project: name
  • compositeApp: version, name, project
  • app: name, composite_app_version, composite_app, project
  • dependency: name, app, composite_app_version, composite_app, project
  • compositeProfile: name, composite_app_version, composite_app, project
  • appProfile: name, composite_profile, composite_app_version, composite_app, project
  • deploymentIntentGroup: name, composite_app_version, composite_app, project
  • genericPlacementIntent: name, deployment_intent_group, composite_app_version, composite_app, project
  • genericAppPlacementIntent: name, generic_placement_intent, deployment_intent_group, composite_app_version, composite_app, project
  • groupIntent: name, deployment_intent_group, composite_app_version, composite_app_name, project

The metrics for these resources should capture the state of the resource, i.e. metrics for creation, deletion, etc. (emco_controller_creation_timestamp, emco_controller_deletion_timestamp, etc.) as described in the guidelines. This approach is suggested as it is unclear how to apply metrics capturing resource utilization to these resources.

The status of a deployment intent group deserves special consideration. The initial idea would be to add metrics describing the contents of the status. This would enable alerting on failed resources for example.

Metric: deployment_intent_group_resource
Labels: name, cluster, cluster_provider, app, deployment_intent_group, composite_profile, composite_app_version, composite_app, project

It's not clear to me yet whether the rsyncStatus value should be part of the metric name (deployment_intent_group_resource_applied) or a label. Following the kube-state-metrics model would make it part of the metric name. Further complicating the question is the readyStatus field of the cluster.
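To illustrate the alerting use case, a sketch of a Prometheus rule assuming the kube-state-metrics convention (state in the metric name) is chosen; deployment_intent_group_resource_failed is a hypothetical metric name:

```yaml
groups:
- name: emco
  rules:
  - alert: DeploymentIntentGroupResourceFailed
    # deployment_intent_group_resource_failed is hypothetical, following
    # the kube-state-metrics one-metric-per-state convention.
    expr: deployment_intent_group_resource_failed > 0
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Resource {{ $labels.name }} in {{ $labels.deployment_intent_group }} failed"
```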

Tracing

Istio provides a starting point for tracing by creating a trace for each request in the sidecars.  But this is insufficient as it does not include the outgoing requests made during an inbound request.  What we'd like to see is a complete trace of, for example, an instantiate request to the orchestrator that includes the requests made to any controllers, etc.

In order to do this it is necessary to pass the tracing headers from the inbound request through to any outbound requests. This will be done with the OpenTelemetry (https://opentelemetry.io/) Go libraries.

Logging

Each log message must contain a timestamp and identifying information describing the resource, such as the project, composite application, etc. in the case of orchestration.

The priority is placed on error logs; logging other significant actions is secondary.
