Elasticsearch Word Segmentation

> For an inverted index, one of the most important steps is tokenizing the text. Tokenization yields data such as sentiment, part of speech, and term frequency, among other things.


#### How Analysis Works in Elasticsearch

In Elasticsearch, text passes through the three components of an analyzer: character filters first replace or remove characters in the raw text, the tokenizer then splits it into individual terms, and finally token filters drop unwanted tokens such as stop words and particles.
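The three stages can also be wired together explicitly in a custom analyzer. A minimal sketch, using only built-in components (the index name `my_index` and analyzer name `my_analyzer` are arbitrary): the `html_strip` character filter cleans the input, the `standard` tokenizer splits it, and the `lowercase` and `stop` token filters normalize the result.

```
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "char_filter": ["html_strip"],
          "tokenizer": "standard",
          "filter": ["lowercase", "stop"]
        }
      }
    }
  }
}
```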



#### English Analysis

Elasticsearch supports five different built-in analysis modes, each suited to different scenarios.


##### standard (strips punctuation)

GET /_analyze
{
  "analyzer": "standard",
  "text": "The programmer's holiday is 1024!"
}


##### simple (strips digits and punctuation)

GET /_analyze
{
  "analyzer": "simple",
  "text": "The programmer's holiday is 1024!"
}


##### whitespace (no filtering; splits on whitespace)

GET /_analyze
{
  "analyzer": "whitespace",
  "text": "The programmer's holiday is 1024!"
}


##### stop (strips stop words such as is and are, plus punctuation)

GET /_analyze
{
  "analyzer": "stop",
  "text": "The programmer's holiday is 1024!"
}


##### keyword (treats the whole input as a single token, unprocessed)

GET /_analyze
{
  "analyzer": "keyword",
  "text": "The programmer's holiday is 1024!"
}
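Based on the documented behavior of these built-in analyzers, the five requests above on the same sentence should produce roughly the following token streams:

```
standard   → [the, programmer's, holiday, is, 1024]
simple     → [the, programmer, s, holiday, is]
whitespace → [The, programmer's, holiday, is, 1024!]
stop       → [programmer, s, holiday]
keyword    → [The programmer's holiday is 1024!]
```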



#### Chinese Analysis

Because Elasticsearch's default analyzer can only split Chinese text into individual characters and cannot analyze its meaning, we use the analysis-icu plugin in place of the default analyzer.
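To see the problem, run the default standard analyzer on a Chinese sentence; it emits one token per character rather than meaningful words:

```
GET /_analyze
{
  "analyzer": "standard",
  "text": "南京市长江大桥"
}
```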



Install the plugin with the command ``./bin/elasticsearch-plugin install analysis-icu``, then restart Elasticsearch.
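After the restart you can confirm the plugin was loaded with the standard `_cat` API:

```
GET _cat/plugins
```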



GET /_analyze
{
  "analyzer": "icu_analyzer",
  "text": "南京市长江大桥"
}



##### Other Chinese Analyzers

elasticsearch-thulac-plugin: supports Chinese word segmentation and part-of-speech tagging

https://github.com/microbun/elasticsearch-thulac-plugin


elasticsearch-analysis-ik: supports hot updates of the segmentation dictionary and custom word lists

https://github.com/medcl/elasticsearch-analysis-ik


# Download the IK release matching your Elasticsearch version (7.9.2 here)
wget https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.9.2/elasticsearch-analysis-ik-7.9.2.zip

# Unzip into its own directory under the Elasticsearch plugins directory, then restart
mkdir analysis-ik

unzip -d analysis-ik elasticsearch-analysis-ik-7.9.2.zip


##### A Segmentation Experiment

# Delete any existing data first so it does not affect the experiment
DELETE icu
DELETE ik
# Create the ICU index
PUT icu
{
  "settings" : {
    "number_of_shards" : 1,
    "number_of_replicas": 1
  },
  "mappings" : {
    "properties" : {
      "description" : { 
        "type" : "text",
        "analyzer": "icu_analyzer",
        "search_analyzer": "icu_analyzer"
      }
    }
  }
}

# Create the IK index
PUT ik
{
  "settings" : {
    "number_of_shards" : 1,
    "number_of_replicas": 1
  },
  "mappings" : {
    "properties" : {
      "description" : { 
        "type" : "text",
        "analyzer": "ik_max_word",
        "search_analyzer": "ik_smart"
      }
    }
  }
}
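The IK index uses `ik_max_word` at index time and `ik_smart` at search time, the split commonly recommended for this plugin: `ik_max_word` produces the most fine-grained, overlapping tokens (maximizing recall when indexing), while `ik_smart` produces a coarser, non-overlapping segmentation (closer to user intent when querying). Comparing the two on the same sentence shows the difference:

```
GET /_analyze
{
  "analyzer": "ik_max_word",
  "text": "南京市长江大桥"
}

GET /_analyze
{
  "analyzer": "ik_smart",
  "text": "南京市长江大桥"
}
```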


# Submit the test data through the bulk API
POST _bulk
{ "index": {"_index":"ik" }}
{"description":"CHINKIANG VINEGAR·GOLD PLUM··12PC·金梅鎮江醋"}
{ "index": {"_index":"ik" }}
{"description":"5YEAR MATUREVINEGAR·SHUITA··24PC·山西五年老陳醋"}
{ "index": {"_index":"ik" }}
{"description":"RED VINEGAR·KOON CHUN··12PC·冠珍大紅浙醋"}
{ "index": {"_index":"ik" }}
{"description":"VINEGAR SEASONING·FUJI··5.28GAL·富士白醋"}
{ "index": {"_index":"ik" }}
{"description":"WHITE VINEGAR·FOUR IN ONE··4PC·四合醋"}
{ "index": {"_index":"ik" }}
{"description":"WHITE VINEGAR·HEAVENLY CHEF··4PC·天廚白醋"}
{ "index": {"_index":"ik" }}
{"description":"CHINKIANG VINEGAR··'24414·24PC·恒順鎮江醋"}
{ "index": {"_index":"ik" }}
{"description":"3YEAR MATUREVINEGAR·SHUITA··24PC·山西三年老陳醋"}
{ "index": {"_index":"ik" }}
{"description":"RICE VINEGAR·KONG YEN·'23709·4PC·工研白醋"}
{ "index": {"_index":"ik" }}
{"description":"CHINKIANG VINEGAR··'24421·24PC·金山鎮江醋"}
{ "index": {"_index":"ik" }}
{"description":"WHITE VINEGAR·CHAMPION··4PC·醋"}
{ "index": {"_index":"ik" }}
{"description":"BLACK VINEGAR·KONG YEN·'23707·4PC·工研烏醋"}
{ "index": {"_index":"ik" }}
{"description":"WHITE VINEGAR·GOLDEN STATE·<50GR>·4PC·醋"}
{ "index": {"_index":"ik" }}
{"description":"WHITE VINEGAR·ACCLAIM··4PC·醋"}
{ "index": {"_index":"ik" }}
{"description":"WHITE VINEGAR·GOLDEN STATE·<100GR>·4PC·醋"}
{ "index": {"_index":"ik" }}
{"description":"RICE VINEGAR·KONG YEN··24PCX10OZ·工研米醋"}
{ "index": {"_index":"ik" }}
{"description":"WHITE VINEGAR·ACCLAIM··12PCX32OZ·醋"}



POST icu/_search
{
  "query" : { 
    "match" : { 
      "description" : "老陳醋" 
    }
  }
}


POST ik/_search
{
  "query" : { 
    "match" : { 
      "description" : "老陳醋" 
    }
  }
}





GET /_analyze
{
  "analyzer": "icu_analyzer",
  "text": "老陳醋"
}



GET /_analyze
{
  "analyzer": "ik_smart",
  "text": "老陳醋"
}
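For completeness, running the same query text through `ik_max_word` shows the finer-grained tokens that the IK index produced at index time:

```
GET /_analyze
{
  "analyzer": "ik_max_word",
  "text": "老陳醋"
}
```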


