Prometheus Email Notification [on hold]
I have a Prometheus Operator running on Kubernetes. It works, and I can monitor my resources and the cluster with it, but I don't receive any email notifications when alerts fire.

What should I do to get the emails?
NAME READY STATUS RESTARTS AGE
pod/alertmanager-kube-prometheus-0 2/2 Running 0 72m
pod/kube-prometheus-exporter-kube-state-86b466d978-sp24r 2/2 Running 0 161m
pod/kube-prometheus-exporter-node-2zjc6 1/1 Running 0 162m
pod/kube-prometheus-exporter-node-gwxlg 1/1 Running 0 162m
pod/kube-prometheus-exporter-node-ngc5p 1/1 Running 0 162m
pod/kube-prometheus-exporter-node-vcrw4 1/1 Running 0 162m
pod/kube-prometheus-grafana-6c4dffd84d-mfws7 2/2 Running 0 162m
pod/prometheus-kube-prometheus-0 3/3 Running 1 162m
pod/prometheus-operator-545b59ffc9-tpqs5 1/1 Running 0 163m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/alertmanager-operated ClusterIP None <none> 9093/TCP,6783/TCP 162m
service/kube-prometheus NodePort 10.106.17.176 <none> 9090:31984/TCP 162m
service/kube-prometheus-alertmanager NodePort 10.105.17.59 <none> 9093:30365/TCP 162m
service/kube-prometheus-exporter-kube-state ClusterIP 10.105.149.175 <none> 80/TCP 162m
service/kube-prometheus-exporter-node ClusterIP 10.111.234.174 <none> 9100/TCP 162m
service/kube-prometheus-grafana ClusterIP 10.106.183.201 <none> 80/TCP 162m
service/prometheus-operated ClusterIP None <none> 9090/TCP 162m
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/kube-prometheus-exporter-node 4 4 4 4 4 <none> 162m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kube-prometheus-exporter-kube-state 1/1 1 1 162m
deployment.apps/kube-prometheus-grafana 1/1 1 1 162m
deployment.apps/prometheus-operator 1/1 1 1 163m
NAME DESIRED CURRENT READY AGE
replicaset.apps/kube-prometheus-exporter-kube-state-5858d86974 0 0 0 162m
replicaset.apps/kube-prometheus-exporter-kube-state-86b466d978 1 1 1 161m
replicaset.apps/kube-prometheus-grafana-6c4dffd84d 1 1 1 162m
replicaset.apps/prometheus-operator-545b59ffc9 1 1 1 163m
NAME READY AGE
statefulset.apps/alertmanager-kube-prometheus 1/1 162m
statefulset.apps/prometheus-kube-prometheus 1/1 162m
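To see which configuration the running Alertmanager has actually loaded, and whether Prometheus is connected to it, a quick check like the following usually helps (a sketch only; the service names and namespace are taken from the listing above, and the API paths assume a reasonably recent Alertmanager and Prometheus):

kubectl -n monitoring port-forward svc/kube-prometheus-alertmanager 9093:9093
# in another shell: the active configuration is returned in the "config" field
curl -s http://localhost:9093/api/v1/status

kubectl -n monitoring port-forward svc/kube-prometheus 9090:9090
# lists the Alertmanager endpoints Prometheus will send alerts to
curl -s http://localhost:9090/api/v1/alertmanagers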
I put my alertmanager.yaml configuration into the Alertmanager secret:
kubectl edit secret alertmanager-kube-prometheus -n monitoring
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
alertmanager.yaml: Z2xvYmFsOgogIHNtdHBfc21hcnRob3N0OiAnc210cC5nbWFpbC5jb206NTg3JwogIHNtdHBfZnJvbTogJ3pvay5jbzIyMkBnbWFpbC5jb20nCiAgc210cF9hdXRoX3VzZXJuYW1lOiAnem9rLmNvMjIyQGdtYWlsLmNvbScKICBzbXRwX2F1dGhfcGFzc3dvcmQ6ICdIaWZ0MjIyJwoKdGVtcGxhdGVzOiAKLSAnL2V0Yy9hbGVydG1hbmFnZXIvdGVtcGxhdGUvKi50bXBsJwoKCnJvdXRlOgogIAogIGdyb3VwX2J5OiBbJ2FsZXJ0bmFtZScsICdjbHVzdGVyJywgJ3NlcnZpY2UnICwgJ3NldmVyaXR5J10KCiAgZ3JvdXBfd2FpdDogMzBzCgogIGdyb3VwX2ludGVydmFsOiA1bQoKICByZXBlYXRfaW50ZXJ2YWw6IDFoIAoKICByZWNlaXZlcjogZW1haWwtbWUKCiAgcm91dGVzOgogIC0gbWF0Y2hfcmU6CiAgICAgIHNlcnZpY2U6IF4oZm9vMXxmb28yfGJheikkCiAgICByZWNlaXZlcjogZW1haWwtbWUKICAgIHJvdXRlczoKICAgIC0gbWF0Y2g6CiAgICAgICAgc2V2ZXJpdHk6IGNyaXRpY2FsCiAgICAgIHJlY2VpdmVyOiBlbWFpbC1tZQogICAgLSBtYXRjaDoKICAgICAgICBzZXJ2aWNlOiBmaWxlcwogICAgICByZWNlaXZlcjogZW1haWwtbWUKCiAgICAtIG1hdGNoOgogICAgICAgIHNldmVyaXR5OiB3YXJuaW5nCiAgICAgIHJlY2VpdmVyOiBlbWFpbC1tZQoKICAtIG1hdGNoOgogICAgICBzZXJ2aWNlOiBkYXRhYmFzZQogICAgcmVjZWl2ZXI6IGVtYWlsLW1lCgogICAgZ3JvdXBfYnk6IFthbGVydG5hbWUsIGNsdXN0ZXIsIGRhdGFiYXNlXQogICAgcm91dGVzOgogICAgLSBtYXRjaDoKICAgICAgICBvd25lcjogdGVhbS1YCiAgICAgIHJlY2VpdmVyOiBlbWFpbC1tZQogICAgICBjb250aW51ZTogdHJ1ZQoKICAgIC0gbWF0Y2g6CiAgICAgICAgc2V2ZXJpdHk6IHdhcm5pbmcKICAgICAgcmVjZWl2ZXI6IGVtYWlsLW1lIAoKICAgIC0gbWF0Y2g6CiAgICAgICAgc2V2ZXJpdHk6IGZyb250LWNyaXRpY2FsCiAgICAgIHJlY2VpdmVyOiBlbWFpbC1tZSAgIAoKCnJlY2VpdmVyczoKLSBuYW1lOiAnZW1haWwtbWUnCiAgZW1haWxfY29uZmlnczoKICAtIHRvOiAnbWVpc2FtLmIyMjJAZ21haWwuY29tJwogCi0gbmFtZTogJ3RlYW0tWS1tYWlscycKICBlbWFpbF9jb25maWdzOgogIC0gdG86ICdtZWlzYW0uYjIyMkBnbWFpbC5jb20n
kind: Secret
metadata:
  creationTimestamp: "2019-04-14T07:00:50Z"
  labels:
    alertmanager: kube-prometheus
    app: alertmanager
    chart: alertmanager-0.1.7
    heritage: Tiller
    release: kube-prometheus
  name: alertmanager-kube-prometheus
  namespace: monitoring
  resourceVersion: "598489"
  selfLink: /api/v1/namespaces/monitoring/secrets/alertmanager-kube-prometheus
  uid: 099ab6d0-5e83-11e9-9f0d-5254001850dc
type: Opaque
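Since the Alertmanager config is stored base64-encoded in this Secret, it is easy to end up with stored content that differs from the file you edited. As a sanity check (a sketch; the secret and namespace names are the ones shown above), you can decode what is actually stored, or recreate the secret from a local alertmanager.yaml instead of hand-editing the base64:

# decode the stored config and compare it with your local file
kubectl -n monitoring get secret alertmanager-kube-prometheus \
  -o jsonpath='{.data.alertmanager\.yaml}' | base64 -d

# recreate the secret from the local file
kubectl -n monitoring delete secret alertmanager-kube-prometheus
kubectl -n monitoring create secret generic alertmanager-kube-prometheus \
  --from-file=alertmanager.yaml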
And my alertmanager.yaml looks like this:
global:
  smtp_smarthost: 'smtp.gmail.com:587'
  smtp_from: 'zok.co222@gmail.com'
  smtp_auth_username: 'zok.co222@gmail.com'
  smtp_auth_password: 'Hift222'

templates:
- '/etc/alertmanager/template/*.tmpl'

route:
  group_by: ['alertname', 'cluster', 'service', 'severity']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 1h
  receiver: email-me

  routes:
  - match_re:
      service: ^(foo1|foo2|baz)$
    receiver: email-me
    routes:
    - match:
        severity: critical
      receiver: email-me
    - match:
        service: files
      receiver: email-me
    - match:
        severity: warning
      receiver: email-me

  - match:
      service: database
    receiver: email-me
    group_by: [alertname, cluster, database]
    routes:
    - match:
        owner: team-X
      receiver: email-me
      continue: true
    - match:
        severity: warning
      receiver: email-me
    - match:
        severity: front-critical
      receiver: email-me

receivers:
- name: 'email-me'
  email_configs:
  - to: 'meisam.b222@gmail.com'

- name: 'team-Y-mails'
  email_configs:
  - to: 'meisam.b222@gmail.com'
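For comparison, here is a minimal global/receiver block of the kind that usually works with Gmail (a sketch, not a drop-in fix: Gmail generally requires an app password rather than the normal account password, and delivery on port 587 uses STARTTLS, which smtp_require_tls enables by default; the placeholder password is an assumption, not your real credential):

global:
  smtp_smarthost: 'smtp.gmail.com:587'
  smtp_from: 'zok.co222@gmail.com'
  smtp_auth_username: 'zok.co222@gmail.com'
  smtp_auth_identity: 'zok.co222@gmail.com'
  smtp_auth_password: '<gmail-app-password>'  # app password, not the account password
  smtp_require_tls: true

route:
  receiver: email-me

receivers:
- name: 'email-me'
  email_configs:
  - to: 'meisam.b222@gmail.com'
    send_resolved: true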
My alerting list in the Prometheus dashboard:

[screenshot: Prometheus Dashboard]

Tags: email monitoring kubernetes prometheus alertmanager
put on hold as off-topic by Ward♦ yesterday
This question appears to be off-topic. The users who voted to close gave this specific reason:
- "Questions on Server Fault must be about managing information technology systems in a business environment. Home and end-user computing questions may be asked on Super User, and questions about development, testing and development tools may be asked on Stack Overflow." – Ward
asked Apr 14 at 10:45 by meisam bahrami
Please don't cross-post: stackoverflow.com/questions/55675215/… – Ward♦, yesterday