Kibana/Elasticsearch: how to count the total number of documents by date

nvbavucw · published 2022-12-09 in Kibana
Follow (0) | Answers (1) | Views (239)

As my title says, I want to count, for each date, the documents from that day and all days before it. Here is sample data to illustrate:

{"index":{"_index":"login-2015.12.23","_type":"logs"}}
{"uid":"1","register_time":"2015-12-23T12:00:00Z","login_time":"2015-12-23T12:00:00Z"}
{"index":{"_index":"login-2015.12.23","_type":"logs"}}
{"uid":"2","register_time":"2015-12-23T12:00:00Z","login_time":"2015-12-23T12:00:00Z"}
{"index":{"_index":"login-2015.12.24","_type":"logs"}}
{"uid":"1","register_time":"2015-12-23T12:00:00Z","login_time":"2015-12-24T12:00:00Z"}
{"index":{"_index":"login-2015.12.25","_type":"logs"}}
{"uid":"1","register_time":"2015-12-23T12:00:00Z","login_time":"2015-12-25T12:00:00Z"}

As you can see, index login-2015.12.23 has two documents, index login-2015.12.24 has one document, and index login-2015.12.25 has one document.
Now I want a result like this:

{
  "hits" : {
    "total" : 6282,
    "max_score" : 1.0,
    "hits" : []
  },
  "aggregations" : {
    "group_by_date" : {
      "buckets" : [
        {
          "key_as_string" : "2015-12-23T12:00:00Z",
          "key" : 1450872000000,
          "doc_count" : 2
        },
        {
          "key_as_string" : "2015-12-24T12:00:00Z",
          "key" : 1450958400000,
          "doc_count" : 3
        },
        {
          "key_as_string" : "2015-12-25T12:00:00Z",
          "key" : 1451044800000,
          "doc_count" : 4
        }
      ]
    }
  }
}

That is, to compute the count for 2015-12-24T12:00:00Z I have to count the documents of both 2015-12-23T12:00:00Z and 2015-12-24T12:00:00Z. My project has many indices like this, and I have looked for many ways to achieve this goal, without success. Here is my attempt:
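In other words, the value wanted for each day is the running total of the per-day counts. A minimal Python sketch of that logic, using the per-day counts 2, 1, 1 from the sample indices above:

```python
from itertools import accumulate

# Per-day document counts from the sample indices:
# login-2015.12.23 -> 2, login-2015.12.24 -> 1, login-2015.12.25 -> 1.
daily_counts = [2, 1, 1]

# The desired value for each day is its count plus everything before it.
cumulative_counts = list(accumulate(daily_counts))
print(cumulative_counts)  # [2, 3, 4]
```

This is exactly a cumulative (prefix) sum over the date-ordered buckets.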

{
  "query": {"match_all": {}},
  "size": 0,
  "aggs": {
    "group_by_date": {
      "date_histogram": {
        "field": "login_time", 
        "interval": "day"
      },
      "aggs": {
        "intersect": {
          "scripted_metric": {
            "init_script": "state.inner=[]",
            "map_script": "state.inner.add(params.param1 == 3 ? params.param2 * params.param1 : params.param1 * params.param2)",
            "combine_script": "return state.inner",
            "reduce_script": "return states",
            "params": {
              "param1": 3,
              "param2": 5
            }
          }
        }
      }
    }
  }
}

I wanted to group by date and then use scripted_metric to iterate over the list of dates, but the second iteration can only see the documents inside its own bucket, not all documents. Does anyone have a better idea for solving this?

dgjrabp2 1#

You can simply use the cumulative sum pipeline aggregation:

{
  "query": {"match_all": {}},
  "size": 0,
  "aggs": {
    "group_by_date": {
      "date_histogram": {
        "field": "login_time", 
        "interval": "day"
      },
      "aggs": {
        "cumulative_docs": {
          "cumulative_sum": {
            "buckets_path": "_count" 
          }
        }
      }
    }
  }
}

The result looks like this:

"aggregations" : {
    "group_by_date" : {
      "buckets" : [
        {
          "key_as_string" : "2015-12-23T00:00:00.000Z",
          "key" : 1450828800000,
          "doc_count" : 2,
          "cumulative_docs" : {
            "value" : 2.0
          }
        },
        {
          "key_as_string" : "2015-12-24T00:00:00.000Z",
          "key" : 1450915200000,
          "doc_count" : 1,
          "cumulative_docs" : {
            "value" : 3.0
          }
        },
        {
          "key_as_string" : "2015-12-25T00:00:00.000Z",
          "key" : 1451001600000,
          "doc_count" : 1,
          "cumulative_docs" : {
            "value" : 4.0
          }
        }
      ]
    }
  }
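Reading the running totals back out of such a response is straightforward. A minimal sketch, assuming the JSON body has already been parsed into a Python dict named `response` (the sample values below are copied from the result above):

```python
# Aggregation response as returned by the cumulative_sum query above.
response = {
    "aggregations": {
        "group_by_date": {
            "buckets": [
                {"key_as_string": "2015-12-23T00:00:00.000Z", "doc_count": 2,
                 "cumulative_docs": {"value": 2.0}},
                {"key_as_string": "2015-12-24T00:00:00.000Z", "doc_count": 1,
                 "cumulative_docs": {"value": 3.0}},
                {"key_as_string": "2015-12-25T00:00:00.000Z", "doc_count": 1,
                 "cumulative_docs": {"value": 4.0}},
            ]
        }
    }
}

# Extract (date, cumulative count) pairs from the histogram buckets.
totals = [
    (bucket["key_as_string"], bucket["cumulative_docs"]["value"])
    for bucket in response["aggregations"]["group_by_date"]["buckets"]
]
print(totals)
```

Each pair gives the date and the number of documents logged on that day or earlier, which is the result the question asks for.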
