
Prometheus Email Notification [on hold]



I have a Prometheus Operator running on Kubernetes; it works, and I can monitor my resources and the cluster with it. But I don't receive email notifications when alerts fire.
What should I do to get the emails?



NAME READY STATUS RESTARTS AGE
pod/alertmanager-kube-prometheus-0 2/2 Running 0 72m
pod/kube-prometheus-exporter-kube-state-86b466d978-sp24r 2/2 Running 0 161m
pod/kube-prometheus-exporter-node-2zjc6 1/1 Running 0 162m
pod/kube-prometheus-exporter-node-gwxlg 1/1 Running 0 162m
pod/kube-prometheus-exporter-node-ngc5p 1/1 Running 0 162m
pod/kube-prometheus-exporter-node-vcrw4 1/1 Running 0 162m
pod/kube-prometheus-grafana-6c4dffd84d-mfws7 2/2 Running 0 162m
pod/prometheus-kube-prometheus-0 3/3 Running 1 162m
pod/prometheus-operator-545b59ffc9-tpqs5 1/1 Running 0 163m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/alertmanager-operated ClusterIP None <none> 9093/TCP,6783/TCP 162m
service/kube-prometheus NodePort 10.106.17.176 <none> 9090:31984/TCP 162m
service/kube-prometheus-alertmanager NodePort 10.105.17.59 <none> 9093:30365/TCP 162m
service/kube-prometheus-exporter-kube-state ClusterIP 10.105.149.175 <none> 80/TCP 162m
service/kube-prometheus-exporter-node ClusterIP 10.111.234.174 <none> 9100/TCP 162m
service/kube-prometheus-grafana ClusterIP 10.106.183.201 <none> 80/TCP 162m
service/prometheus-operated ClusterIP None <none> 9090/TCP 162m

NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/kube-prometheus-exporter-node 4 4 4 4 4 <none> 162m

NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kube-prometheus-exporter-kube-state 1/1 1 1 162m
deployment.apps/kube-prometheus-grafana 1/1 1 1 162m
deployment.apps/prometheus-operator 1/1 1 1 163m

NAME DESIRED CURRENT READY AGE
replicaset.apps/kube-prometheus-exporter-kube-state-5858d86974 0 0 0 162m
replicaset.apps/kube-prometheus-exporter-kube-state-86b466d978 1 1 1 161m
replicaset.apps/kube-prometheus-grafana-6c4dffd84d 1 1 1 162m
replicaset.apps/prometheus-operator-545b59ffc9 1 1 1 163m

NAME READY AGE
statefulset.apps/alertmanager-kube-prometheus 1/1 162m
statefulset.apps/prometheus-kube-prometheus 1/1 162m


And I put my alertmanager.yaml configuration into the Secret:



kubectl edit secret alertmanager-kube-prometheus -n monitoring



apiVersion: v1
data:
  alertmanager.yaml: Z2xvYmFsOgogIHNtdHBfc21hcnRob3N0OiAnc210cC5nbWFpbC5jb206NTg3JwogIHNtdHBfZnJvbTogJ3pvay5jbzIyMkBnbWFpbC5jb20nCiAgc210cF9hdXRoX3VzZXJuYW1lOiAnem9rLmNvMjIyQGdtYWlsLmNvbScKICBzbXRwX2F1dGhfcGFzc3dvcmQ6ICdIaWZ0MjIyJwoKdGVtcGxhdGVzOiAKLSAnL2V0Yy9hbGVydG1hbmFnZXIvdGVtcGxhdGUvKi50bXBsJwoKCnJvdXRlOgogIAogIGdyb3VwX2J5OiBbJ2FsZXJ0bmFtZScsICdjbHVzdGVyJywgJ3NlcnZpY2UnICwgJ3NldmVyaXR5J10KCiAgZ3JvdXBfd2FpdDogMzBzCgogIGdyb3VwX2ludGVydmFsOiA1bQoKICByZXBlYXRfaW50ZXJ2YWw6IDFoIAoKICByZWNlaXZlcjogZW1haWwtbWUKCiAgcm91dGVzOgogIC0gbWF0Y2hfcmU6CiAgICAgIHNlcnZpY2U6IF4oZm9vMXxmb28yfGJheikkCiAgICByZWNlaXZlcjogZW1haWwtbWUKICAgIHJvdXRlczoKICAgIC0gbWF0Y2g6CiAgICAgICAgc2V2ZXJpdHk6IGNyaXRpY2FsCiAgICAgIHJlY2VpdmVyOiBlbWFpbC1tZQogICAgLSBtYXRjaDoKICAgICAgICBzZXJ2aWNlOiBmaWxlcwogICAgICByZWNlaXZlcjogZW1haWwtbWUKCiAgICAtIG1hdGNoOgogICAgICAgIHNldmVyaXR5OiB3YXJuaW5nCiAgICAgIHJlY2VpdmVyOiBlbWFpbC1tZQoKICAtIG1hdGNoOgogICAgICBzZXJ2aWNlOiBkYXRhYmFzZQogICAgcmVjZWl2ZXI6IGVtYWlsLW1lCgogICAgZ3JvdXBfYnk6IFthbGVydG5hbWUsIGNsdXN0ZXIsIGRhdGFiYXNlXQogICAgcm91dGVzOgogICAgLSBtYXRjaDoKICAgICAgICBvd25lcjogdGVhbS1YCiAgICAgIHJlY2VpdmVyOiBlbWFpbC1tZQogICAgICBjb250aW51ZTogdHJ1ZQoKICAgIC0gbWF0Y2g6CiAgICAgICAgc2V2ZXJpdHk6IHdhcm5pbmcKICAgICAgcmVjZWl2ZXI6IGVtYWlsLW1lIAoKICAgIC0gbWF0Y2g6CiAgICAgICAgc2V2ZXJpdHk6IGZyb250LWNyaXRpY2FsCiAgICAgIHJlY2VpdmVyOiBlbWFpbC1tZSAgIAoKCnJlY2VpdmVyczoKLSBuYW1lOiAnZW1haWwtbWUnCiAgZW1haWxfY29uZmlnczoKICAtIHRvOiAnbWVpc2FtLmIyMjJAZ21haWwuY29tJwogCi0gbmFtZTogJ3RlYW0tWS1tYWlscycKICBlbWFpbF9jb25maWdzOgogIC0gdG86ICdtZWlzYW0uYjIyMkBnbWFpbC5jb20n
kind: Secret
metadata:
  creationTimestamp: "2019-04-14T07:00:50Z"
  labels:
    alertmanager: kube-prometheus
    app: alertmanager
    chart: alertmanager-0.1.7
    heritage: Tiller
    release: kube-prometheus
  name: alertmanager-kube-prometheus
  namespace: monitoring
  resourceVersion: "598489"
  selfLink: /api/v1/namespaces/monitoring/secrets/alertmanager-kube-prometheus
  uid: 099ab6d0-5e83-11e9-9f0d-5254001850dc
type: Opaque
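A quick way to see exactly what Alertmanager will load is to decode the Secret's `alertmanager.yaml` key. The `kubectl` line below is a sketch that assumes a live cluster and the `monitoring` namespace; the round-trip underneath is an offline sanity check of the base64 encoding itself.

```shell
# With a live cluster (assumption: namespace "monitoring"):
#   kubectl get secret alertmanager-kube-prometheus -n monitoring \
#     -o jsonpath='{.data.alertmanager\.yaml}' | base64 -d
#
# Offline: Secret values are stored base64-encoded; encode/decode must
# round-trip to the exact plaintext Alertmanager will read.
encoded=$(printf "global:\n  smtp_smarthost: 'smtp.gmail.com:587'\n" | base64)
printf '%s' "$encoded" | base64 -d
```

If the decoded output differs from what you expect, the Secret was encoded from a stale or mangled file and Alertmanager is running with a different config than the one you think it has.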


And my alertmanager.yaml looks like this:



global:
  smtp_smarthost: 'smtp.gmail.com:587'
  smtp_from: 'zok.co222@gmail.com'
  smtp_auth_username: 'zok.co222@gmail.com'
  smtp_auth_password: 'Hift222'

templates:
- '/etc/alertmanager/template/*.tmpl'

route:
  group_by: ['alertname', 'cluster', 'service', 'severity']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 1h
  receiver: email-me

  routes:
  - match_re:
      service: ^(foo1|foo2|baz)$
    receiver: email-me
    routes:
    - match:
        severity: critical
      receiver: email-me
    - match:
        service: files
      receiver: email-me
    - match:
        severity: warning
      receiver: email-me

  - match:
      service: database
    receiver: email-me
    group_by: [alertname, cluster, database]
    routes:
    - match:
        owner: team-X
      receiver: email-me
      continue: true
    - match:
        severity: warning
      receiver: email-me
    - match:
        severity: front-critical
      receiver: email-me

receivers:
- name: 'email-me'
  email_configs:
  - to: 'meisam.b222@gmail.com'

- name: 'team-Y-mails'
  email_configs:
  - to: 'meisam.b222@gmail.com'
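If the config loads but mail still doesn't arrive, the smarthost itself is worth testing: Gmail on port 587 requires STARTTLS, and with 2-step verification enabled it typically requires an app password rather than the account password. A minimal stdlib sketch (the live check is left commented out because it needs outbound network access on port 587):

```python
import smtplib

SMARTHOST = ("smtp.gmail.com", 587)  # matches smtp_smarthost in the config above

def check_smarthost(host: str, port: int, timeout: float = 10.0) -> bool:
    """Return True if the smarthost accepts STARTTLS and advertises AUTH."""
    with smtplib.SMTP(host, port, timeout=timeout) as server:
        server.ehlo()
        server.starttls()  # Alertmanager needs TLS on 587 unless smtp_require_tls is disabled
        server.ehlo()
        return server.has_extn("auth")

# Uncomment on a machine with outbound access on port 587:
# print(check_smarthost(*SMARTHOST))
print("smarthost under test: %s:%d" % SMARTHOST)
```

If this check passes but Alertmanager still logs SMTP authentication errors, the credentials in the Secret are the next thing to suspect.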


My alerting list in the Prometheus dashboard:

[screenshot: Prometheus Dashboard]











put on hold as off-topic by Ward yesterday


This question appears to be off-topic. The users who voted to close gave this specific reason:


  • "Questions on Server Fault must be about managing information technology systems in a business environment. Home and end-user computing questions may be asked on Super User, and questions about development, testing and development tools may be asked on Stack Overflow." – Ward
If this question can be reworded to fit the rules in the help center, please edit the question.
















  • Please don't cross-post: stackoverflow.com/questions/55675215/…

    – Ward
    yesterday

















email monitoring kubernetes prometheus alertmanager






asked Apr 14 at 10:45









meisam bahrami
