PowerShell to ElasticSearch to find ElastAlert

I recently worked on an interesting project where I needed to use a PowerShell script to query ElasticSearch for a document that had been inserted via ElastAlert.

The purpose of this exercise was to determine whether a service had recently been marked down, which in turn decided whether it was safe to run an operation that might take down the passive node in an active/passive HA configuration.

The following script snippet searches ElasticSearch for any entries from the past week that match the specified rule name and have more than 0 hits and matches.

    $Rule_Name = "Rule name here"

    $Es_Endpoint = "elastic_search_dns_endpoint"
    $Es_Index    = "elastalert_writeback_index"
    $Es_Type     = "elastalert_status"

    $Body = @{
      "query" = @{
        "bool" = @{
          "filter" = @(
            @{ "term"  = @{ "rule_name"  = $Rule_Name } },
            @{ "range" = @{ "hits"       = @{ "gt" = 0 } } },
            @{ "range" = @{ "matches"    = @{ "gt" = 0 } } },
            @{ "range" = @{ "@timestamp" = @{ "gt" = "now-1w" } } }
          )
        }
      }
    }

    $Json_Body = $Body | ConvertTo-Json -Depth 10

    # Un-comment as needed for troubleshooting
    # Write-Output $Json_Body

    $Response = Invoke-RestMethod -Method Post -Uri "https://$Es_Endpoint/$Es_Index/$Es_Type/_search" -Body $Json_Body -ContentType 'application/json'

    # Un-comment these as needed for troubleshooting
    # Write-Output ($Response | Format-List | Out-String)
    # Write-Output ($Response.hits.total | Out-String)

    if ($Response.hits.total -gt 0) {
      $Restore = 0
    }

Once the query returns, the script checks whether the number of hits exceeds 0, meaning at least one entry satisfied the query parameters. Based on this response, action can then be taken on the HA service in question.
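For ad-hoc troubleshooting, the same search can be reproduced from a shell with curl. This is a sketch using the same hypothetical endpoint, index, and rule-name placeholder values as the script above:

```shell
#!/bin/sh
# Hypothetical values -- substitute your own endpoint, index, and rule name.
ES_ENDPOINT="elastic_search_dns_endpoint"
ES_INDEX="elastalert_writeback_index"
RULE_NAME="Rule name here"

# The same bool/filter query the PowerShell script builds with ConvertTo-Json.
QUERY=$(cat <<EOF
{
  "query": {
    "bool": {
      "filter": [
        { "term":  { "rule_name": "${RULE_NAME}" } },
        { "range": { "hits": { "gt": 0 } } },
        { "range": { "matches": { "gt": 0 } } },
        { "range": { "@timestamp": { "gt": "now-1w" } } }
      ]
    }
  }
}
EOF
)
echo "$QUERY"

# To run the search:
# curl -s -X POST "https://${ES_ENDPOINT}/${ES_INDEX}/_search" \
#      -H 'Content-Type: application/json' -d "$QUERY"
```

Seeing the raw JSON this way is also a quick check that the PowerShell `ConvertTo-Json` output has the structure you expect.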

ELK + EA — Silencing ElastAlert Alerts

Many shops are realizing the benefits of the ELK stack / Elastic Stack and the great flexibility it brings to an infrastructure in the form of centralized logging and reporting, which has always been critical when troubleshooting difficult and/or distributed problems. Having many input options to choose from (via Elastic Beats), and lots of flexibility in data manipulation (via Logstash), has only increased the usefulness of this stack. As a consultant, I’m finding this stack deployed more and more often with clients, and it’s enjoyable to work with.

I’ve had the opportunity to implement ElastAlert to provide monitoring and alerting services against an established Elastic Stack deployment. ElastAlert is a Yelp product that is written in Python and is “a simple framework for alerting on anomalies, spikes, or other patterns of interest from data in ElasticSearch”.

With ElastAlert, much of what has traditionally been monitored with Nagios or similar tools can now be done against ElasticSearch. ElastAlert also provides many notification options, templates, and formats. There is also a fairly straightforward enhancement process whereby local modifications can be made to the framework, without diverging from the main code base, when additional processing or manipulation is desired.

Coming from a very strong background in Nagios and related tools, the one failure I see in ElastAlert (with an existing enhancement request) is the lack of any ability to silence, suppress, or acknowledge alerts. They are either alerting or not, relative to the realert setting. This is a huge inconvenience when alerts are not immediately resolved, as ElastAlert will continue to notify the alerting endpoint about alerts that are already being acted upon and require no new action. This is (IMO) a bad way to do business, as it may result in missed alerts and poor service to customers.

ElastAlert will send out an alert on every run unless the rule in question has an entry in the elastalert_status index (or equivalent metadata index) under the silence type, i.e. “_type:silence”, with an “until” field that has not yet expired. This is how ElastAlert maintains the proper realert time period for alerts where notifications have already been sent and the run time is more frequent than the realert time. We can add an appropriate entry to this index to silence an alert and provide the same functionality as acknowledging in Nagios or New Relic, or similar behavior in other alerting systems.
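As a sketch of that mechanism, a silence document just needs a rule_name and a future “until” timestamp. This example computes the timestamps dynamically rather than hardcoding them; the rule name is a placeholder, and the date arithmetic assumes GNU date:

```shell
#!/bin/sh
# Build a silence document valid for the next 4 hours (hypothetical rule name).
# The "exponent" and "until" fields mirror what ElastAlert itself writes.
RULE_NAME="my rule name"
NOW=$(date -u +%Y-%m-%dT%H:%M:%S.000000Z)
UNTIL=$(date -u -d '+4 hours' +%Y-%m-%dT%H:%M:%S.000000Z)   # GNU date syntax

SILENCE=$(cat <<EOF
{
  "rule_name": "${RULE_NAME}",
  "@timestamp": "${NOW}",
  "exponent": 0,
  "until": "${UNTIL}"
}
EOF
)
echo "$SILENCE"
# POST this document to the metadata index's silence type to suppress the rule
# until the "until" timestamp passes.
```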

To provide an example, start with the rule that has the following configuration:

es_port: 80
aws_region: us-west-2
name: "nginx-web-app error in logs"
index: sales-promos-*
type: any
filter:
  - query_string:
      query: "type:log AND source:'/var/log/nginx/sales-promo.log' AND message: *error*"
buffer_time:
  minutes: 1
query_key: host
realert:
  minutes: 15
alert:
  - "sns"
sns_topic_arn: arn:aws:sns:us-west-2:************:duty-pager-alerts

The rule name that we would use is “nginx-web-app error in logs”. The realert time is 15 minutes. This means any time we get errors in the logs, we’ll see an error alert every 15 minutes, as long as the error condition continues. In order to suppress this alert for 4 hours, we’d issue the following curl command (or similar):

$ export ES_ENDPOINT=
$ export ES_INDEX=elastalert_status

$ curl -X POST https://${ES_ENDPOINT}/${ES_INDEX}/silence/ \
  -H 'Content-Type: application/json' \
  -d '{
  "rule_name": "nginx-web-app error in logs.ip-172-16-0-10",
  "@timestamp": "2017-08-07T16:43:24.000000Z",
  "exponent": 0,
  "until": "2017-08-07T20:43:24.000000Z"
}'
Note also that when using a query_key, the node identified by the query_key can be silenced without silencing the alert in general. This is incredibly helpful, as one problem should not disable the entire monitoring system. The example above shows silencing the alert only for the host with a hostname of ip-172-16-0-10. Note that if a query_key is specified in a silence entry when the rule does not have a query_key defined, ElastAlert will fail to run.
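The difference between a rule-wide silence and a per-query_key silence is only the rule_name value written into the silence document. A small sketch, using the example rule and host above, of the two forms as I understand them:

```shell
#!/bin/sh
RULE="nginx-web-app error in logs"
HOST="ip-172-16-0-10"

# Rule-wide: the bare rule name silences the alert in general.
RULE_WIDE="${RULE}"
# Per-host: "<rule name>.<query_key value>" silences only that host
# (valid only when the rule actually defines a query_key).
PER_HOST="${RULE}.${HOST}"

echo "rule-wide silence rule_name: ${RULE_WIDE}"
echo "per-host silence rule_name:  ${PER_HOST}"
```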

To delete an entry, in the event an error has occurred, issue a curl DELETE after locating the document ID of the entry to delete, e.g.:

$ curl -X DELETE https://${ES_ENDPOINT}/${ES_INDEX}/silence/${index_id_of_bad_entry}

It may take a few minutes for ElasticSearch to re-index the data after a delete so the error may not go away immediately.