Clymene-agent Getting Started

The Clymene-agent is a service that collects time series data (it does not use disks).

How to create a scrape target configuration yaml

  1. Config file Option
clymene-agent --config.file=/etc/clymene/clymene.yml
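For example, when running the agent in a container, the configuration file is typically mounted into the container and passed through this flag. The image name and tag below are assumptions for illustration, not taken from this page:

# mount the scrape configuration and point --config.file at it (image name/tag assumed)
docker run -d \
  -v /etc/clymene/clymene.yml:/etc/clymene/clymene.yml \
  bourbonkk/clymene-agent:latest \
  --config.file=/etc/clymene/clymene.yml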
  2. How to write the yaml - see Clymene Configuration for more information
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.

scrape_configs:
- job_name: 'localhost'
  static_configs:
    - targets: [ 'localhost:9100' ]

- job_name: 'kubernetes-kubelet'
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
    - role: node
  relabel_configs:
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
    - target_label: __address__
      replacement: kubernetes.default.svc:443
    - source_labels: [__meta_kubernetes_node_name]
      regex: (.+)
      target_label: __metrics_path__
      replacement: /api/v1/nodes/${1}/proxy/metrics
  3. Config file reload
curl -XPOST http://clymene-agent:15692/api/reload

# check clymene-agent logs
  {"level":"info","ts":1643299385.1000407,"caller":"config/config.go:451","msg":"Loading configuration file","filename":"clymene_agent.yml"}
  {"level":"info","ts":1643299385.1012235,"caller":"config/config.go:468","msg":"Completed loading of configuration file","filename":"clymene_agent.yml"}

How to set up the clymene-agent

--admin.http.host-ports string           The host:ports (e.g. 127.0.0.1:15691 or :15691) for the admin server, including health check, /metrics, etc. (default ":15691")
--config.file string                     configuration file path. (default "/etc/clymene/clymene.yml")
--enable.new-service-discovery-manager   use new service discovery manager (default true)
--http.port int                          http port (default 15692)
--log-level string                       Minimal allowed log Level. For more levels see https://github.com/uber-go/zap (default "info")
--metrics-backend string                 Defines which metrics backend to use for metrics reporting: expvar, prometheus, none (default "prometheus")
--metrics-http-route string              Defines the route of HTTP endpoint for metrics backends that support scraping (default "/metrics")
--metric.split.length                    split length for metric transmission (default 1024)
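As a minimal sketch, these options can be combined on one command line; everything below uses the documented defaults except for the log level:

clymene-agent \
  --config.file=/etc/clymene/clymene.yml \
  --log-level=debug \
  --admin.http.host-ports=:15691 \
  --http.port=15692

# the admin server listed above also serves the agent's own /metrics
curl http://localhost:15691/metrics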

How to set up the Storage Type

1. Setting environment variables

ElasticSearch

STORAGE_TYPE=elasticsearch

Kafka

STORAGE_TYPE=kafka

Prometheus

STORAGE_TYPE=prometheus

Cortex

STORAGE_TYPE=cortex

Gateway

STORAGE_TYPE=gateway

OpenTSDB

STORAGE_TYPE=opentsdb

InfluxDB

STORAGE_TYPE=influxdb

TDengine

STORAGE_TYPE=tdengine

Druid

# env setting
STORAGE_TYPE=kafka
# arg
--kafka.producer.encoding=json
--kafka.producer.flatten-for-druid
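Putting the environment variable and the flags together, a hedged sketch of starting the agent for Druid ingestion (Kafka broker addresses and other producer options are covered in the Kafka storage documentation):

# send metrics to Kafka as flattened JSON so Druid can ingest them
STORAGE_TYPE=kafka clymene-agent \
  --config.file=/etc/clymene/clymene.yml \
  --kafka.producer.encoding=json \
  --kafka.producer.flatten-for-druid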

Several storage types

STORAGE_TYPE=elasticsearch,prometheus  # composite write - write to multiple databases at the same time
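For example, a composite write only changes the environment variable; the agent itself is started the same way as for a single backend:

# write the same samples to Elasticsearch and Prometheus at the same time
export STORAGE_TYPE=elasticsearch,prometheus
clymene-agent --config.file=/etc/clymene/clymene.yml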
2. Option descriptions by storage type

Use only agent architecture (diagram)
