ELK Logstash Introduction

Logstash is an open source data collection engine with real-time pipelining capabilities. Logstash can dynamically unify data from disparate sources and normalize the data into destinations of your choice. Cleanse and democratize all your data for diverse advanced downstream analytics and visualization use cases.

While Logstash originally drove innovation in log collection, its capabilities extend well beyond that use case. Any type of event can be enriched and transformed with a broad array of input, filter, and output plugins, with many native codecs further simplifying the ingestion process. Logstash accelerates your insights by harnessing a greater volume and variety of data.


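As a concrete illustration of such a pipeline, the following minimal sketch wires one input, one filter, and one output together (the `environment` field name is arbitrary and not part of the original text):

```conf
# Minimal sketch of a Logstash pipeline: read lines from stdin,
# tag each event with an extra field, and print events to stdout.
input {
  stdin { }
}

filter {
  mutate {
    # "environment" is an arbitrary example field, not required by Logstash
    add_field => { "environment" => "demo" }
  }
}

output {
  stdout {
    codec => rubydebug   # pretty-prints each event for inspection
  }
}
```

Saved as `pipeline.conf`, this could be run with `bin/logstash -f pipeline.conf`; each line typed on stdin becomes a structured event on stdout.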
 

The Power of Logstash


The ingestion workhorse for Elasticsearch and more

Horizontally scalable data processing pipeline with strong Elasticsearch and Kibana synergy

Pluggable pipeline architecture

Mix, match, and orchestrate different inputs, filters, and outputs to play in pipeline harmony

Community-extensible and developer-friendly plugin ecosystem

Over 200 plugins available, plus the flexibility of creating and contributing your own


Logstash Loves Data


Collect more, so you can know more. Logstash welcomes data of all shapes and sizes.

Logs and Metrics

Where it all started.

  • Handle all types of logging data

    • Easily ingest a multitude of web logs like Apache, and application logs like log4j for Java
    • Capture many other log formats like syslog, networking and firewall logs, and more
  • Enjoy complementary secure log forwarding capabilities with Filebeat
  • Collect metrics from Ganglia, collectd, NetFlow, JMX, and many other infrastructure and application platforms over TCP and UDP
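As a hedged sketch of the logging workflow above, a file input can tail an Apache access log and grok it into structured fields before indexing into Elasticsearch; the log path and Elasticsearch host are assumptions to adjust for your system:

```conf
input {
  file {
    path => "/var/log/apache2/access.log"   # assumed location; varies by distro
    start_position => "beginning"           # read existing content on first run
  }
}

filter {
  grok {
    # COMBINEDAPACHELOG is one of the patterns bundled with the grok filter
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]   # assumed local Elasticsearch instance
  }
}
```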

The Web

Unlock the World Wide Web.

  • Transform HTTP requests into events

    • Consume from web service firehoses like Twitter for social sentiment analysis
    • Webhook support for GitHub, HipChat, JIRA, and countless other applications
    • Enables many Watcher alerting use cases
  • Create events by polling HTTP endpoints on demand

    • Universally capture health, performance, metrics, and other types of data from web application interfaces
    • Perfect for scenarios where the control of polling is preferred over receiving
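The on-demand polling described above can be sketched with the http_poller input; the endpoint URL, name, and interval here are placeholders, not values from the original text:

```conf
input {
  http_poller {
    urls => {
      # "app_health" is an arbitrary name; the endpoint is hypothetical
      app_health => "http://localhost:8080/health"
    }
    schedule => { every => "30s" }   # poll every 30 seconds
    codec => "json"                  # parse JSON responses into event fields
  }
}

output {
  stdout { codec => rubydebug }
}
```

Each successful poll becomes one event, so health and performance data can flow through the same filters and outputs as any other source.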

Data Stores and Streams

Discover more value from the data you already own.

  • Better understand your data from any relational database or NoSQL store with a JDBC interface
  • Unify diverse data streams from messaging queues like Apache Kafka, RabbitMQ, and Amazon SQS
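A rough sketch of both ideas, assuming a MySQL database and a local Kafka broker (the driver path, connection string, user, and topic name are all placeholders):

```conf
input {
  # Poll a relational database over JDBC every five minutes
  jdbc {
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"    # placeholder path
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"  # placeholder DB
    jdbc_user => "logstash"                                       # placeholder user
    schedule => "*/5 * * * *"
    # :sql_last_value tracks the previous run so only new rows are fetched
    statement => "SELECT * FROM events WHERE updated_at > :sql_last_value"
  }

  # Consume a stream from Kafka in parallel with the JDBC polling
  kafka {
    bootstrap_servers => "localhost:9092"   # placeholder broker
    topics => ["app-events"]                # placeholder topic
  }
}

output {
  stdout { codec => rubydebug }
}
```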

Sensors and IoT

Explore an expansive breadth of other data.

  • In this age of technological advancement, the massive IoT world unleashes endless use cases through capturing and harnessing data from connected sensors.
  • Logstash is the common event collection backbone for ingestion of data shipped from mobile devices to intelligent homes, connected vehicles, healthcare sensors, and many other industry-specific applications.

 

Easily Enrich Everything


The better the data, the better the knowledge. Clean and transform your data during ingestion to gain near real-time insights immediately at index or output time. Logstash comes out-of-box with many aggregations and mutations along with pattern matching, geo mapping, and dynamic lookup capabilities.

  • Grok is the bread and butter of Logstash filters and is used ubiquitously to derive structure out of unstructured data. Enjoy a wealth of integrated patterns aimed at helping you quickly resolve web, systems, networking, and other types of event formats.
  • Expand your horizons by deciphering geo coordinates from IP addresses, normalizing date complexity, simplifying key-value pairs and CSV data, fingerprinting (anonymizing) sensitive information, and further enriching your data with local lookups or Elasticsearch queries.
  • Codecs are often used to ease the processing of common event structures like JSON and multiline events.
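The enrichment steps above can be sketched as one filter chain over a web access event; the field names mirror those produced by the bundled Apache grok pattern and are illustrative:

```conf
filter {
  # Derive structure from the raw line (COMBINEDAPACHELOG is a bundled pattern)
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }

  # Decipher geo coordinates from the client IP address
  geoip {
    source => "clientip"
  }

  # Normalize the log's timestamp into the event's @timestamp
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }

  # Fingerprint (anonymize) the sensitive client IP
  fingerprint {
    source => "clientip"
    method => "SHA256"
    key => "some-secret"   # placeholder; used as an HMAC key when provided
  }
}
```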

See Transforming Data for an overview of some of the popular data processing plugins.

 
