Big Data Notes: Configuring a DataX Job to Read from Oracle and Write to HDFS (HA)
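This job uses DataX's oraclereader to pull name and card_id from Oracle via a custom querySql, and hdfswriter to land the rows on HDFS as a plain-text file. Because the target cluster runs NameNode HA, defaultFS points at the nameservice (hdfs://mytest) rather than a single NameNode address, and the HA details are supplied through hadoopConfig. The full job file: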

{ "job": { "content": [ { "reader": { "name": "oraclereader", "parameter": { "connection": [ { "jdbcUrl": [ "jdbc:oracle:thin:@127.0.0.1:1521:test" ], "querySql": [ "select name,card_id from student" ] } ], "password": "123456", "username": "testapp" } }, "writer": { "name": "hdfswriter", "parameter": { "column": [ { "name": "name", "type": "string" }, { "name": "card_id", "type": "string" } ], // TODO core-site.xml里查看 "defaultFS": "hdfs://mytest", "fieldDelimiter": " ", "fileName": "文件名.txt", "fileType": "text", "hadoopConfig": { "dfs.client.failover.proxy.provider.mytest": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider", // TODO 查看hdfs-site.xml "dfs.ha.namenodes.mytest": "nn1,nn2", "dfs.namenode.rpc-address.mytest.nn1": "192.168.1.100:9000", "dfs.namenode.rpc-address.mytest.nn2": "192.168.1.101:9000", "dfs.nameservices": "mytest" }, "path": "/", "writeMode": "append" } } } ], "setting": { "errorLimit": { "percentage": 0.02, "record": 0 }, "speed": { "channel": 1 } } } }