Log Service Data Transformation Best Practices: Processing Text in Specific Formats

The practice cases in this section come from real ticket requests encountered in day-to-day work. Each case walks through the requirement and the transformation orchestration (the solution), showing how LOG DSL can be used to solve the task.

Scenario: Expanding a Non-Standard JSON Object into JSON

The collected dict-style data needs its nested structure expanded. The solution is to first convert the dict data into valid JSON, then expand it with the e_json function.

Raw Log

The log collected in the console is in dict format, as shown below:

content: {
    'referer': '-',
    'request': 'GET /phpMyAdmin',
    'status': 404,
    'data-1': {
        'aaa': 'Mozilla',
        'bbb': 'asde'
    },
    'data-2': {
        'up_adde': '-',
        'up_host': '-'
    }
}

LOG DSL Orchestration

1. First, convert the content data above into JSON format:

e_set("content_json",str_replace(ct_str(v("content")),"'",'"'))

The log after this step is:

content: {
    'referer': '-',
    'request': 'GET /phpMyAdmin',
    'status': 404,
    'data-1': {
        'aaa': 'Mozilla',
        'bbb': 'asde'
    },
    'data-2': {
        'up_adde': '-',
        'up_host': '-'
    }
}
content_json:  {
    "referer": "-",
    "request": "GET /phpMyAdmin",
    "status": 404,
    "data-1": {
        "aaa": "Mozilla",
        "bbb": "asde"
    },
    "data-2": {
        "up_adde": "-",
        "up_host": "-"
    }
}
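
Note that swapping single quotes for double quotes only yields valid JSON when the keys and values themselves contain no embedded quote characters, which is the case for the sample above. A minimal local check of the idea in plain Python (hypothetical, only to confirm that the rewritten string parses):

import json

content = "{'referer': '-', 'request': 'GET /phpMyAdmin', 'status': 404}"

# Mirror str_replace(ct_str(v("content")), "'", '"')
content_json = content.replace("'", '"')

# The rewritten string is valid JSON, so this succeeds
print(json.loads(content_json))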

2. Expand the normalized content_json data. For example, to expand only the first level, set the depth parameter of e_json to 1:

e_json("content_json",depth=1,fmt='full')

The expanded log is then:

content_json.data-1:  {"aaa": "Mozilla", "bbb": "asde"}
content_json.data-2:  {"up_adde": "-", "up_host": "-"}
content_json.referer:  -
content_json.request:  GET /phpMyAdmin
content_json.status:  404

If depth is set to 2, the expanded log is:

content_json.data-1.aaa:  Mozilla
content_json.data-1.bbb:  asde
content_json.data-2.up_adde:  -
content_json.data-2.up_host:  -
content_json.referer:  -
content_json.request:  GET /phpMyAdmin
content_json.status:  404

3. Putting it together, the LOG DSL rule can be written as:

e_set("content_json",str_replace(ct_str(v("content")),"'",'"'))
e_json("content_json",depth=2,fmt='full')

Transformed Data

The data below was transformed with depth set to 2:

content:  {
    'referer': '-',
    'request': 'GET /phpMyAdmin',
    'status': 404,
    'data-1': {
        'aaa': 'Mozilla',
        'bbb': 'asde'
    },
    'data-2': {
        'up_adde': '-',
        'up_host': '-'
    }
}
content_json:  {
    "referer": "-",
    "request": "GET /phpMyAdmin",
    "status": 404,
    "data-1": {
        "aaa": "Mozilla",
        "bbb": "asde"
    },
    "data-2": {
        "up_adde": "-",
        "up_host": "-"
    }
}
content_json.data-1.aaa:  Mozilla
content_json.data-1.bbb:  asde
content_json.data-2.up_adde:  -
content_json.data-2.up_host:  -
content_json.referer:  -
content_json.request:  GET /phpMyAdmin
content_json.status:  404
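
Before running the rule, the expected key layout can also be previewed locally. The following Python sketch roughly imitates what e_json with fmt='full' does for a given depth (a simplified approximation, not the actual DSL implementation):

import json

def flatten(prefix, value, depth, out):
    # Expand nested dicts up to `depth` levels, joining keys with "."
    if isinstance(value, dict) and depth > 0:
        for key, child in value.items():
            flatten(prefix + "." + key, child, depth - 1, out)
    else:
        out[prefix] = value if isinstance(value, str) else json.dumps(value)

content_json = '{"referer": "-", "status": 404, "data-1": {"aaa": "Mozilla", "bbb": "asde"}}'
fields = {}
flatten("content_json", json.loads(content_json), 2, fields)
print(fields)
# {'content_json.referer': '-', 'content_json.status': '404',
#  'content_json.data-1.aaa': 'Mozilla', 'content_json.data-1.bbb': 'asde'}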

Scenario: Converting Text in Other Formats to JSON and Expanding It

For data in non-standard JSON formats, expansion can be handled by combining rules.

Raw Log

The raw log is collected in the following format:

content : {
    "pod" => {
        "name" => "crm-learning-follow-7bc48f8b6b-m6kgb"
    }, "node" => {
        "name" => "tw5"
    }, "labels" => {
        "pod-template-hash" => "7bc48f8b6b", "app" => "crm-learning-follow"
    }, "container" => {
        "name" => "crm-learning-follow"
    }, "namespace" => "testing1"
}

LOG DSL Orchestration

1. First, convert the log into JSON form using the str_logtash_config_normalize function:

e_set("normalize_data",str_logtash_config_normalize(v("content")))

2. Then expand it with the e_json function:

e_json("normalize_data",depth=1,fmt='full')

3. Putting it together, the LOG DSL rule can be written as:

e_set("normalize_data",str_logtash_config_normalize(v("content")))
e_json("normalize_data",depth=1,fmt='full')

Transformed Data

content : {
    "pod" => {
        "name" => "crm-learning-follow-7bc48f8b6b-m6kgb"
    }, "node" => {
        "name" => "tw5"
    }, "labels" => {
        "pod-template-hash" => "7bc48f8b6b", "app" => "crm-learning-follow"
    }, "container" => {
        "name" => "crm-learning-follow"
    }, "namespace" => "testing1"
}
normalize_data:  {
    "pod": {
        "name": "crm-learning-follow-7bc48f8b6b-m6kgb"
    },
    "node": {
        "name": "tw5"
    },
    "labels": {
        "pod-template-hash": "7bc48f8b6b",
        "app": "crm-learning-follow"
    },
    "container": {
        "name": "crm-learning-follow"
    },
    "namespace": "testing1"
}
normalize_data.container.container:  {"name": "crm-learning-follow"}
normalize_data.labels.labels:  {"pod-template-hash": "7bc48f8b6b", "app": "crm-learning-follow"}
normalize_data.namespace:  testing1
normalize_data.node.node:  {"name": "tw5"}
normalize_data.pod.pod:  {"name": "crm-learning-follow-7bc48f8b6b-m6kgb"}
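
For this particular sample, the "key => value" notation differs from JSON only in its separator, so the effect of str_logtash_config_normalize can be roughly approximated locally before writing the rule (a simplified sketch that only covers this input shape, not the full function):

import json

content = '{ "namespace" => "testing1", "node" => { "name" => "tw5" } }'

# Replace the Logstash-style "=>" separator with ":" to obtain plain JSON
normalized = content.replace("=>", ":")

print(json.loads(normalized))  # {'namespace': 'testing1', 'node': {'name': 'tw5'}}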

Scenario: Converting Special Encodings in Text

In real-world environments you will sometimes run into hex-escaped characters that must be decoded before they are readable. The str_hex_escape_encode function can be used to convert such hex escape sequences.

Raw Log

content : "\xe4\xbd\xa0\xe5\xa5\xbd"

LOG DSL Orchestration

e_set("hex_encode",str_hex_escape_encode(v("content")))

Transformed Data

content : "\xe4\xbd\xa0\xe5\xa5\xbd"
hex_encode : "你好"
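
For reference, the same decoding can be reproduced in plain Python: the field holds literal \xNN escape sequences representing UTF-8 bytes, so they need to be turned back into bytes and then decoded (a local sketch, not how the DSL function is implemented):

import codecs

content = r"\xe4\xbd\xa0\xe5\xa5\xbd"  # literal backslash escapes, not raw bytes

# Interpret the \xNN escapes, recover the raw bytes, then decode them as UTF-8
text = codecs.decode(content, "unicode_escape").encode("latin-1").decode("utf-8")
print(text)  # 你好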

Scenario: Expanding an XML Field

Various data types come up in practice, XML among them. XML data can be expanded with the xml_to_json function.

Test Log

str : <?xml version="1.0"?>
<data>
    <country name="Liechtenstein">
        <rank>1</rank>
        <year>2008</year>
        <gdppc>141100</gdppc>
        <neighbor name="Austria" direction="E"/>
        <neighbor name="Switzerland" direction="W"/>
    </country>
    <country name="Singapore">
        <rank>4</rank>
        <year>2011</year>
        <gdppc>59900</gdppc>
        <neighbor name="Malaysia" direction="N"/>
    </country>
    <country name="Panama">
        <rank>68</rank>
        <year>2011</year>
        <gdppc>13600</gdppc>
        <neighbor name="Costa Rica" direction="W"/>
        <neighbor name="Colombia" direction="E"/>
    </country>
</data>

LOG DSL Orchestration

e_set("str_json",xml_to_json(v("str")))

Transformed Log

str : <?xml version="1.0"?>
<data>
    <country name="Liechtenstein">
        <rank>1</rank>
        <year>2008</year>
        <gdppc>141100</gdppc>
        <neighbor name="Austria" direction="E"/>
        <neighbor name="Switzerland" direction="W"/>
    </country>
    <country name="Singapore">
        <rank>4</rank>
        <year>2011</year>
        <gdppc>59900</gdppc>
        <neighbor name="Malaysia" direction="N"/>
    </country>
    <country name="Panama">
        <rank>68</rank>
        <year>2011</year>
        <gdppc>13600</gdppc>
        <neighbor name="Costa Rica" direction="W"/>
        <neighbor name="Colombia" direction="E"/>
    </country>
</data>
str_json: {
    "data": {
        "country": [{
            "@name": "Liechtenstein",
            "rank": "1",
            "year": "2008",
            "gdppc": "141100",
            "neighbor": [{
                "@name": "Austria",
                "@direction": "E"
            }, {
                "@name": "Switzerland",
                "@direction": "W"
            }]
        }, {
            "@name": "Singapore",
            "rank": "4",
            "year": "2011",
            "gdppc": "59900",
            "neighbor": {
                "@name": "Malaysia",
                "@direction": "N"
            }
        }, {
            "@name": "Panama",
            "rank": "68",
            "year": "2011",
            "gdppc": "13600",
            "neighbor": [{
                "@name": "Costa Rica",
                "@direction": "W"
            }, {
                "@name": "Colombia",
                "@direction": "E"
            }]
        }]
    }
}
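
To preview the resulting structure offline before writing the rule, a rough local equivalent is the third-party xmltodict package, which also maps XML attributes to "@"-prefixed keys as in the str_json output above (an assumption for local testing only; it is not part of LOG DSL):

import json
import xmltodict  # third-party: pip install xmltodict

xml = '''<?xml version="1.0"?>
<data>
    <country name="Singapore">
        <rank>4</rank>
        <neighbor name="Malaysia" direction="N"/>
    </country>
</data>'''

# Attributes become "@name" / "@direction" keys in the parsed dict
print(json.dumps(xmltodict.parse(xml), indent=2))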

