name | value | description
|
hadoop.hdfs.configuration.version |
1 |
Version of this configuration file.
|
dfs.namenode.logging.level |
info |
The logging level for dfs namenode. Other values are "dir" (trace
namespace mutations), "block" (trace block changes under replication and
block creation/deletion operations), or "all".
|
dfs.namenode.rpc-address |
|
RPC (remote procedure call) address that handles all client requests. In an HA/Federation setup where multiple namenodes exist, the name service id is appended to the property name, e.g. dfs.namenode.rpc-address.ns1.
The value of this property takes the form of nn-host1:rpc-port.
|
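The per-nameservice key suffixing and the nn-host1:rpc-port value form described above can be sketched as follows (an illustrative Python snippet, not Hadoop code; the helper names and the port number are hypothetical):

```python
def namenode_rpc_key(nameservice_id=None):
    """Append the name service id to the base key, as in HA/Federation."""
    base = "dfs.namenode.rpc-address"
    return base if nameservice_id is None else f"{base}.{nameservice_id}"

def split_rpc_address(value):
    """Split an nn-host1:rpc-port value into (host, port)."""
    host, port = value.rsplit(":", 1)
    return host, int(port)

print(namenode_rpc_key("ns1"))              # dfs.namenode.rpc-address.ns1
print(split_rpc_address("nn-host1:8020"))   # 8020 is just an example port
```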
dfs.namenode.rpc-bind-host |
|
The actual address the server binds to. If this optional value is set, the RPC server binds to this address and the port specified in dfs.namenode.rpc-address. In HA/Federation scenarios an address can be
specified for each namenode. Setting this to 0.0.0.0 makes the namenode listen on all network interfaces.
|
dfs.namenode.servicerpc-address |
|
RPC address for HDFS service communication. If this value is set, the backup node, datanodes and all other services connect to this address. In HA/Federation scenarios, the name service id is appended to the property name, e.g. dfs.namenode.servicerpc-address.ns1.
The value takes the form of nn-host1:rpc-port. If this value is unset, it defaults to dfs.namenode.rpc-address.
|
dfs.namenode.servicerpc-bind-host |
|
The actual address the server binds to. If this optional value is set, the RPC server providing HDFS services binds to this address and the port specified in dfs.namenode.servicerpc-address.
Otherwise it behaves the same as dfs.namenode.rpc-bind-host.
|
dfs.namenode.secondary.http-address |
0.0.0.0:50090 |
The secondary namenode http server address and port.
|
dfs.datanode.address |
0.0.0.0:50010 |
The datanode server address and port for data transfer.
|
dfs.datanode.http.address |
0.0.0.0:50075 |
The datanode http server address and port.
|
dfs.datanode.ipc.address |
0.0.0.0:50020 |
The datanode ipc server address and port.
|
dfs.datanode.handler.count |
10 |
The number of server threads for the datanode.
|
dfs.namenode.http-address |
0.0.0.0:50070 |
The address and port on which the dfs namenode web ui listens.
|
dfs.https.enable |
false |
Whether HDFS supports HTTPS (SSL).
|
dfs.client.https.need-auth |
false |
Whether an SSL client certificate is required.
|
dfs.https.server.keystore.resource |
ssl-server.xml |
Resource file from which SSL server keystore information is extracted.
|
dfs.client.https.keystore.resource |
ssl-client.xml |
Resource file from which SSL client keystore information is extracted.
|
dfs.datanode.https.address |
0.0.0.0:50475 |
The datanode https server address and port.
|
dfs.namenode.https-address |
0.0.0.0:50470 |
The namenode https server address and port.
|
dfs.datanode.dns.interface |
default |
The name of the network interface from which a datanode should report its IP address.
|
dfs.datanode.dns.nameserver |
default |
The host name or IP address of the DNS name server a datanode should use to determine the host name used by the namenode for communication and display purposes.
|
dfs.namenode.backup.address |
0.0.0.0:50100 |
The backup node server address and port. If the port is 0 then the server starts on a free port.
|
dfs.namenode.backup.http-address |
0.0.0.0:50105 |
The backup node http server address and port. If the port is 0 then the server starts on a free port.
|
dfs.namenode.replication.considerLoad |
true |
Whether to consider a target node's load when choosing replica targets.
|
dfs.default.chunk.view.size |
32768 |
The number of bytes to display when viewing a file in the browser.
|
dfs.datanode.du.reserved |
0 |
Reserved space in bytes per volume. Always leave this much space free for non dfs use.
|
dfs.namenode.name.dir |
file://${hadoop.tmp.dir}/dfs/name |
Determines where on the local filesystem the DFS name node should store the name table (fsimage). If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.
|
dfs.namenode.name.dir.restore |
false |
Set to true to enable the namenode to attempt recovering a previously failed dfs.namenode.name.dir.
When enabled, a recovery of any failed directory is attempted during checkpoint.
|
dfs.namenode.fs-limits.max-component-length |
0 |
Defines the maximum number of characters in each component of a path.
A value of 0 will disable the check. |
dfs.namenode.fs-limits.max-directory-items |
0 |
Defines the maximum number of items that a directory may contain. A
value of 0 will disable the check. |
dfs.namenode.fs-limits.min-block-size |
1048576 |
Minimum block size in bytes, enforced by the Namenode at create time.
This prevents the accidental creation of files with tiny block sizes (and
thus many blocks), which can degrade performance. |
dfs.namenode.fs-limits.max-blocks-per-file |
1048576 |
Maximum number of blocks per file, enforced by the Namenode on write.
This prevents the creation of extremely large files which can degrade
performance. |
dfs.namenode.edits.dir |
${dfs.namenode.name.dir} |
Determines where on the local filesystem the DFS name node should
store the transaction (edits) file. If this is a comma-delimited list of
directories then the transaction file is replicated in all of the
directories, for redundancy. Default value is same as
dfs.namenode.name.dir |
dfs.namenode.shared.edits.dir |
|
A directory on shared storage between the multiple namenodes in an HA
cluster. This directory will be written by the active and read by the
standby in order to keep the namespaces synchronized. This directory does
not need to be listed in dfs.namenode.edits.dir above. It should be left
empty in a non-HA cluster. |
dfs.namenode.edits.journal-plugin.qjournal |
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager |
|
dfs.permissions.enabled |
true |
If "true", enable permission checking in HDFS. If "false", permission
checking is turned off, but all other behavior is unchanged. Switching
from one parameter value to the other does not change the mode, owner or
group of files or directories. |
dfs.permissions.superusergroup |
supergroup |
The name of the group of super-users. |
dfs.block.access.token.enable |
false |
If "true", access tokens are used as capabilities for accessing
datanodes. If "false", no access tokens are checked on accessing
datanodes. |
dfs.block.access.key.update.interval |
600 |
Interval in minutes at which namenode updates its access keys. |
dfs.block.access.token.lifetime |
600 |
The lifetime of access tokens in minutes. |
dfs.datanode.data.dir |
file://${hadoop.tmp.dir}/dfs/data |
Determines where on the local filesystem a DFS data node should store
its blocks. If this is a comma-delimited list of directories, then data
will be stored in all named directories, typically on different devices.
Directories that do not exist are ignored. |
dfs.datanode.data.dir.perm |
700 |
Permissions for the directories on the local filesystem where the
DFS data node stores its blocks. The permissions can either be octal or
symbolic. |
dfs.replication |
3 |
Default block replication. The actual number of replications can be
specified when the file is created. The default is used if replication is
not specified at create time. |
dfs.replication.max |
512 |
Maximal block replication. |
dfs.namenode.replication.min |
1 |
Minimal block replication. |
dfs.blocksize |
134217728 |
The default block size for new files, in bytes. You can use the
following suffix (case insensitive): k(kilo), m(mega), g(giga), t(tera),
p(peta), e(exa) to specify the size (such as 128k, 512m, 1g, etc.), or
provide complete size in bytes (such as 134217728 for 128 MB). |
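The suffix rule above can be illustrated with a small sketch (a hypothetical helper, not a Hadoop API):

```python
# Powers of 1024 for the documented dfs.blocksize suffixes (case-insensitive).
_SUFFIXES = {"k": 1 << 10, "m": 1 << 20, "g": 1 << 30,
             "t": 1 << 40, "p": 1 << 50, "e": 1 << 60}

def parse_blocksize(value: str) -> int:
    """Parse a size string like '128m' or '134217728' into bytes."""
    value = value.strip().lower()
    if value and value[-1] in _SUFFIXES:
        return int(value[:-1]) * _SUFFIXES[value[-1]]
    return int(value)

print(parse_blocksize("128m"))  # 134217728, the default block size
```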
dfs.client.block.write.retries |
3 |
The number of retries for writing blocks to the data nodes, before we
signal failure to the application. |
dfs.client.block.write.replace-datanode-on-failure.enable |
true |
If there is a datanode/network failure in the write pipeline,
DFSClient will try to remove the failed datanode from the pipeline and
then continue writing with the remaining datanodes. As a result, the
number of datanodes in the pipeline is decreased. The feature is to add
new datanodes to the pipeline. This is a site-wide property to
enable/disable the feature. When the cluster size is extremely small, e.g.
3 nodes or less, cluster administrators may want to set the policy to
NEVER in the default configuration file or disable this feature.
Otherwise, users may experience an unusually high rate of pipeline
failures since it is impossible to find new datanodes for replacement. See
also dfs.client.block.write.replace-datanode-on-failure.policy |
dfs.client.block.write.replace-datanode-on-failure.policy |
DEFAULT |
This property is used only if the value of
dfs.client.block.write.replace-datanode-on-failure.enable is true. ALWAYS:
always add a new datanode when an existing datanode is removed. NEVER:
never add a new datanode. DEFAULT: Let r be the replication number. Let n
be the number of existing datanodes. Add a new datanode only if r is
greater than or equal to 3 and either (1) floor(r/2) is greater than or
equal to n; or (2) r is greater than n and the block is
hflushed/appended. |
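The DEFAULT condition above can be written out as a short sketch (an illustration of the documented rule, not the DFSClient implementation; the function name is hypothetical):

```python
def should_add_datanode(r: int, n: int, hflushed_or_appended: bool) -> bool:
    """DEFAULT policy: r = replication, n = datanodes left in the pipeline."""
    if r < 3:
        return False
    # Add a new datanode if floor(r/2) >= n, or if r > n on an
    # hflushed/appended block.
    return (r // 2 >= n) or (r > n and hflushed_or_appended)

print(should_add_datanode(3, 2, False))  # False: floor(3/2) = 1 < 2
print(should_add_datanode(3, 1, False))  # True: floor(3/2) = 1 >= 1
```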
dfs.blockreport.intervalMsec |
21600000 |
Determines block reporting interval in milliseconds. |
dfs.blockreport.initialDelay |
0 |
Delay for first block report in seconds. |
dfs.datanode.directoryscan.interval |
21600 |
Interval in seconds for Datanode to scan data directories and
reconcile the difference between blocks in memory and on the disk. |
dfs.datanode.directoryscan.threads |
1 |
The number of threads in the thread pool used to compile reports for
volumes in parallel. |
dfs.heartbeat.interval |
3 |
Determines datanode heartbeat interval in seconds. |
dfs.namenode.handler.count |
10 |
The number of server threads for the namenode. |
dfs.namenode.safemode.threshold-pct |
0.999f |
Specifies the percentage of blocks that should satisfy the minimal
replication requirement defined by dfs.namenode.replication.min. Values
less than or equal to 0 mean not to wait for any particular percentage of
blocks before exiting safemode. Values greater than 1 will make safe mode
permanent. |
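The threshold semantics above can be modeled in a few lines (an illustrative check, not NameNode code):

```python
def may_exit_safemode(satisfied: int, total: int, threshold_pct: float) -> bool:
    """satisfied = blocks meeting minimal replication; total = all blocks."""
    if threshold_pct <= 0:
        return True            # <= 0: do not wait for any percentage
    reached = (satisfied / total) if total else 1.0
    return reached >= threshold_pct   # > 1 can never be reached: permanent

print(may_exit_safemode(999, 1000, 0.999))  # True
print(may_exit_safemode(998, 1000, 0.999))  # False
print(may_exit_safemode(1000, 1000, 1.5))   # False: safe mode is permanent
```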
dfs.namenode.safemode.min.datanodes |
0 |
Specifies the number of datanodes that must be considered alive before
the name node exits safemode. Values less than or equal to 0 mean not to
take the number of live datanodes into account when deciding whether to
remain in safe mode during startup. Values greater than the number of
datanodes in the cluster will make safe mode permanent. |
dfs.namenode.safemode.extension |
30000 |
Determines extension of safe mode in milliseconds after the threshold
level is reached. |
dfs.datanode.balance.bandwidthPerSec |
1048576 |
Specifies the maximum amount of bandwidth that each datanode can
utilize for the balancing purpose in terms of the number of bytes per
second. |
dfs.hosts |
|
Names a file that contains a list of hosts that are permitted to
connect to the namenode. The full pathname of the file must be specified.
If the value is empty, all hosts are permitted. |
dfs.hosts.exclude |
|
Names a file that contains a list of hosts that are not permitted to
connect to the namenode. The full pathname of the file must be specified.
If the value is empty, no hosts are excluded. |
dfs.namenode.max.objects |
0 |
The maximum number of files, directories and blocks dfs supports. A
value of zero indicates no limit to the number of objects that dfs
supports. |
dfs.namenode.decommission.interval |
30 |
Namenode periodicity in seconds to check if decommission is
complete. |
dfs.namenode.decommission.nodes.per.interval |
5 |
The number of nodes the namenode checks per
dfs.namenode.decommission.interval to determine whether decommission is
complete. |
dfs.namenode.replication.interval |
3 |
The periodicity in seconds with which the namenode computes
replication work for datanodes. |
dfs.namenode.accesstime.precision |
3600000 |
The access time for an HDFS file is precise up to this value. The default
value is 1 hour. Setting a value of 0 disables access times for
HDFS. |
dfs.datanode.plugins |
|
Comma-separated list of datanode plug-ins to be activated. |
dfs.namenode.plugins |
|
Comma-separated list of namenode plug-ins to be activated. |
dfs.stream-buffer-size |
4096 |
The size of buffer to stream files. The size of this buffer should
probably be a multiple of hardware page size (4096 on Intel x86), and it
determines how much data is buffered during read and write
operations. |
dfs.bytes-per-checksum |
512 |
The number of bytes per checksum. Must not be larger than
dfs.stream-buffer-size |
dfs.client-write-packet-size |
65536 |
Packet size for clients to write |
dfs.client.write.exclude.nodes.cache.expiry.interval.millis |
600000 |
The maximum period to keep a DN in the excluded nodes list at a
client. After this period, in milliseconds, the previously excluded
node(s) will be removed automatically from the cache and will be
considered good for block allocations again. Useful to lower or raise in
situations where you keep a file open for very long periods (such as a
Write-Ahead-Log (WAL) file) to make the writer tolerant to cluster
maintenance restarts. Defaults to 10 minutes. |
dfs.namenode.checkpoint.dir |
file://${hadoop.tmp.dir}/dfs/namesecondary |
Determines where on the local filesystem the DFS secondary name node
should store the temporary images to merge. If this is a comma-delimited
list of directories then the image is replicated in all of the directories
for redundancy. |
dfs.namenode.checkpoint.edits.dir |
${dfs.namenode.checkpoint.dir} |
Determines where on the local filesystem the DFS secondary name node
should store the temporary edits to merge. If this is a comma-delimited
list of directories then the edits are replicated in all of the directories
for redundancy. Default value is same as dfs.namenode.checkpoint.dir |
dfs.namenode.checkpoint.period |
3600 |
The number of seconds between two periodic checkpoints. |
dfs.namenode.checkpoint.txns |
1000000 |
The Secondary NameNode or CheckpointNode will create a checkpoint of
the namespace every 'dfs.namenode.checkpoint.txns' transactions,
regardless of whether 'dfs.namenode.checkpoint.period' has expired. |
dfs.namenode.checkpoint.check.period |
60 |
The SecondaryNameNode and CheckpointNode will poll the NameNode every
'dfs.namenode.checkpoint.check.period' seconds to query the number of
uncheckpointed transactions. |
dfs.namenode.checkpoint.max-retries |
3 |
The SecondaryNameNode retries failed checkpointing. If the failure
occurs while loading fsimage or replaying edits, the number of retries is
limited by this variable. |
dfs.namenode.num.checkpoints.retained |
2 |
The number of image checkpoint files that will be retained by the
NameNode and Secondary NameNode in their storage directories. All edit
logs necessary to recover an up-to-date namespace from the oldest retained
checkpoint will also be retained. |
dfs.namenode.num.extra.edits.retained |
1000000 |
The number of extra transactions which should be retained beyond what
is minimally necessary for a NN restart. This can be useful for audit
purposes or for an HA setup where a remote Standby Node may have been
offline for some time and need to have a longer backlog of retained edits
in order to start again. Typically each edit is on the order of a few
hundred bytes, so the default of 1 million edits should be on the order of
hundreds of MBs or low GBs. NOTE: Fewer extra edits may be retained than
value specified for this setting if doing so would mean that more segments
would be retained than the number configured by
dfs.namenode.max.extra.edits.segments.retained. |
dfs.namenode.max.extra.edits.segments.retained |
10000 |
The maximum number of extra edit log segments which should be retained
beyond what is minimally necessary for a NN restart. When used in
conjunction with dfs.namenode.num.extra.edits.retained, this configuration
property serves to cap the number of extra edits files to a reasonable
value. |
dfs.namenode.delegation.key.update-interval |
86400000 |
The update interval for master key for delegation tokens in the
namenode in milliseconds. |
dfs.namenode.delegation.token.max-lifetime |
604800000 |
The maximum lifetime in milliseconds for which a delegation token is
valid. |
dfs.namenode.delegation.token.renew-interval |
86400000 |
The renewal interval for delegation token in milliseconds. |
dfs.datanode.failed.volumes.tolerated |
0 |
The number of volumes that are allowed to fail before a datanode stops
offering service. By default any volume failure will cause a datanode to
shutdown. |
dfs.image.compress |
false |
Should the dfs image be compressed? |
dfs.image.compression.codec |
org.apache.hadoop.io.compress.DefaultCodec |
If the dfs image is compressed, how should they be compressed? This
has to be a codec defined in io.compression.codecs. |
dfs.image.transfer.timeout |
600000 |
Timeout for image transfer in milliseconds. This timeout and the
related dfs.image.transfer.bandwidthPerSec parameter should be configured
such that normal image transfer can complete within the timeout. This
timeout prevents client hangs when the sender fails during image transfer,
which is particularly important during checkpointing. Note that this
timeout applies to the entirety of image transfer, and is not a socket
timeout. |
dfs.image.transfer.bandwidthPerSec |
0 |
Maximum bandwidth used for image transfer in bytes per second. This
can help keep normal namenode operations responsive during checkpointing.
The maximum bandwidth and timeout in dfs.image.transfer.timeout should be
set such that normal image transfers can complete successfully. A default
value of 0 indicates that throttling is disabled. |
dfs.namenode.support.allow.format |
true |
Does HDFS namenode allow itself to be formatted? You may consider
setting this to false for any production cluster, to avoid any possibility
of formatting a running DFS. |
dfs.datanode.max.transfer.threads |
4096 |
Specifies the maximum number of threads to use for transferring data
in and out of the DN. |
dfs.datanode.readahead.bytes |
4193404 |
While reading block files, if the Hadoop native libraries are
available, the datanode can use the posix_fadvise system call to
explicitly page data into the operating system buffer cache ahead of the
current reader's position. This can improve performance especially when
disks are highly contended. This configuration specifies the number of
bytes ahead of the current read position which the datanode will attempt
to read ahead. This feature may be disabled by configuring this property
to 0. If the native libraries are not available, this configuration has no
effect. |
dfs.datanode.drop.cache.behind.reads |
false |
In some workloads, the data read from HDFS is known to be
significantly large enough that it is unlikely to be useful to cache it in
the operating system buffer cache. In this case, the DataNode may be
configured to automatically purge all data from the buffer cache after it
is delivered to the client. This behavior is automatically disabled for
workloads which read only short sections of a block (e.g HBase random-IO
workloads). This may improve performance for some workloads by freeing
buffer cache space usage for more cacheable data. If the Hadoop native
libraries are not available, this configuration has no effect. |
dfs.datanode.drop.cache.behind.writes |
false |
In some workloads, the data written to HDFS is known to be
significantly large enough that it is unlikely to be useful to cache it in
the operating system buffer cache. In this case, the DataNode may be
configured to automatically purge all data from the buffer cache after it
is written to disk. This may improve performance for some workloads by
freeing buffer cache space usage for more cacheable data. If the Hadoop
native libraries are not available, this configuration has no
effect. |
dfs.datanode.sync.behind.writes |
false |
If this configuration is enabled, the datanode will instruct the
operating system to enqueue all written data to the disk immediately after
it is written. This differs from the usual OS policy which may wait for up
to 30 seconds before triggering writeback. This may improve performance
for some workloads by smoothing the IO profile for data written to disk.
If the Hadoop native libraries are not available, this configuration has
no effect. |
dfs.client.failover.max.attempts |
15 |
Expert only. The number of client failover attempts that should be
made before the failover is considered failed. |
dfs.client.failover.sleep.base.millis |
500 |
Expert only. The time to wait, in milliseconds, between failover
attempts increases exponentially as a function of the number of attempts
made so far, with a random factor of +/- 50%. This option specifies the
base value used in the failover calculation. The first failover will retry
immediately. The 2nd failover attempt will delay at least
dfs.client.failover.sleep.base.millis milliseconds. And so on. |
dfs.client.failover.sleep.max.millis |
15000 |
Expert only. The time to wait, in milliseconds, between failover
attempts increases exponentially as a function of the number of attempts
made so far, with a random factor of +/- 50%. This option specifies the
maximum value to wait between failovers. Specifically, the time between
two failover attempts will not exceed +/- 50% of
dfs.client.failover.sleep.max.millis milliseconds. |
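The base/max backoff described in the two entries above can be approximated as follows (a sketch of the documented behavior, not the actual DFSClient code; note the jitter here is symmetric, so a jittered delay can fall below the base):

```python
import random

def failover_sleep_millis(attempt: int, base=500, cap=15000) -> float:
    """Attempt 1 retries immediately; later attempts back off exponentially,
    capped at `cap` and jittered by a random factor of +/- 50%."""
    if attempt <= 1:
        return 0.0
    raw = min(base * (2 ** (attempt - 2)), cap)
    return raw * random.uniform(0.5, 1.5)

for attempt in (1, 2, 3, 10):
    print(attempt, failover_sleep_millis(attempt))
```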
dfs.client.failover.connection.retries |
0 |
Expert only. Indicates the number of retries a failover IPC client
will make to establish a server connection. |
dfs.client.failover.connection.retries.on.timeouts |
0 |
Expert only. The number of retry attempts a failover IPC client will
make on socket timeout when establishing a server connection. |
dfs.nameservices |
|
Comma-separated list of nameservices. |
dfs.nameservice.id |
|
The ID of this nameservice. If the nameservice ID is not configured or
more than one nameservice is configured for dfs.nameservices it is
determined automatically by matching the local node's address with the
configured address. |
dfs.ha.namenodes.EXAMPLENAMESERVICE |
|
The prefix for a given nameservice, contains a comma-separated list of
namenodes for a given nameservice (eg EXAMPLENAMESERVICE). |
dfs.ha.namenode.id |
|
The ID of this namenode. If the namenode ID is not configured it is
determined automatically by matching the local node's address with the
configured address. |
dfs.ha.log-roll.period |
120 |
How often, in seconds, the StandbyNode should ask the active to roll
edit logs. Since the StandbyNode only reads from finalized log segments,
the StandbyNode will only be as up-to-date as how often the logs are
rolled. Note that failover triggers a log roll so the StandbyNode will be
up to date before it becomes active. |
dfs.ha.tail-edits.period |
60 |
How often, in seconds, the StandbyNode should check for new finalized
log segments in the shared edits log. |
dfs.ha.automatic-failover.enabled |
false |
Whether automatic failover is enabled. See the HDFS High Availability
documentation for details on automatic HA configuration. |
dfs.support.append |
true |
Does HDFS allow appends to files? |
dfs.client.use.datanode.hostname |
false |
Whether clients should use datanode hostnames when connecting to
datanodes. |
dfs.datanode.use.datanode.hostname |
false |
Whether datanodes should use datanode hostnames when connecting to
other datanodes for data transfer. |
dfs.client.local.interfaces |
|
A comma separated list of network interface names to use for data
transfer between the client and datanodes. When creating a connection to
read from or write to a datanode, the client chooses one of the specified
interfaces at random and binds its socket to the IP of that interface.
Individual names may be specified as either an interface name (eg "eth0"),
a subinterface name (eg "eth0:0"), or an IP address (which may be
specified using CIDR notation to match a range of IPs). |
dfs.namenode.kerberos.internal.spnego.principal |
${dfs.web.authentication.kerberos.principal} |
|
dfs.secondary.namenode.kerberos.internal.spnego.principal |
${dfs.web.authentication.kerberos.principal} |
|
dfs.namenode.avoid.read.stale.datanode |
false |
Indicate whether or not to avoid reading from "stale" datanodes whose
heartbeat messages have not been received by the namenode for more than a
specified time interval. Stale datanodes will be moved to the end of the
node list returned for reading. See
dfs.namenode.avoid.write.stale.datanode for a similar setting for
writes. |
dfs.namenode.avoid.write.stale.datanode |
false |
Indicate whether or not to avoid writing to "stale" datanodes whose
heartbeat messages have not been received by the namenode for more than a
specified time interval. Writes will avoid using stale datanodes unless
more than a configured ratio (dfs.namenode.write.stale.datanode.ratio) of
datanodes are marked as stale. See dfs.namenode.avoid.read.stale.datanode
for a similar setting for reads. |
dfs.namenode.stale.datanode.interval |
30000 |
Default time interval for marking a datanode as "stale", i.e., if the
namenode has not received heartbeat msg from a datanode for more than this
time interval, the datanode will be marked and treated as "stale" by
default. The stale interval cannot be too small since otherwise this may
cause too frequent change of stale states. We thus set a minimum stale
interval value (the default value is 3 times of heartbeat interval) and
guarantee that the stale interval cannot be less than the minimum value. A
stale data node is avoided during lease/block recovery. It can be
conditionally avoided for reads (see
dfs.namenode.avoid.read.stale.datanode) and for writes (see
dfs.namenode.avoid.write.stale.datanode). |
dfs.namenode.write.stale.datanode.ratio |
0.5f |
When the ratio of stale datanodes to all datanodes is greater than
this ratio, stop avoiding writing to stale nodes so as to prevent
causing hotspots. |
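The write-avoidance rule from the entries above reduces to a single comparison (an illustration, not NameNode code):

```python
def avoid_stale_for_writes(stale: int, total: int, ratio=0.5) -> bool:
    """Keep avoiding stale datanodes for writes only while the stale
    fraction stays at or below dfs.namenode.write.stale.datanode.ratio."""
    return total > 0 and (stale / total) <= ratio

print(avoid_stale_for_writes(2, 10))  # True: keep avoiding stale nodes
print(avoid_stale_for_writes(6, 10))  # False: too many stale, stop avoiding
```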
dfs.namenode.invalidate.work.pct.per.iteration |
0.32f |
*Note*: Advanced property. Change with caution. This determines the
percentage amount of block invalidations (deletes) to do over a single DN
heartbeat deletion command. The final deletion count is determined by
applying this percentage to the number of live nodes in the system. The
resultant number is the number of blocks from the deletion list chosen for
proper invalidation over a single heartbeat of a single DN. Value should
be a positive, non-zero percentage in float notation (X.Yf), with 1.0f
meaning 100%. |
dfs.namenode.replication.work.multiplier.per.iteration |
2 |
*Note*: Advanced property. Change with caution. This determines the
total amount of block transfers to begin in parallel at a DN, for
replication, when such a command list is being sent over a DN heartbeat by
the NN. The actual number is obtained by multiplying this multiplier with
the total number of live nodes in the cluster. The result number is the
number of blocks to begin transfers immediately for, per DN heartbeat.
This number can be any positive, non-zero integer. |
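Both heartbeat work-sizing knobs above scale with the number of live nodes; the arithmetic can be sketched as (illustrative only, not NameNode internals):

```python
def invalidation_limit(live_nodes: int, pct=0.32) -> int:
    """Blocks to invalidate per single-DN heartbeat deletion command."""
    return int(live_nodes * pct)

def replication_work(live_nodes: int, multiplier=2) -> int:
    """Block transfers to begin immediately per DN heartbeat."""
    return live_nodes * multiplier

print(invalidation_limit(100))  # 32
print(replication_work(100))    # 200
```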
dfs.webhdfs.enabled |
false |
Enable WebHDFS (REST API) in Namenodes and Datanodes. |
hadoop.fuse.connection.timeout |
300 |
The minimum number of seconds that we'll cache libhdfs connection
objects in fuse_dfs. Lower values will result in lower memory consumption;
higher values may speed up access by avoiding the overhead of creating new
connection objects. |
hadoop.fuse.timer.period |
5 |
The number of seconds between cache expiry checks in fuse_dfs. Lower
values will result in fuse_dfs noticing changes to Kerberos ticket caches
more quickly. |
dfs.metrics.percentiles.intervals |
|
Comma-delimited set of integers denoting the desired rollover
intervals (in seconds) for percentile latency metrics on the Namenode and
Datanode. By default, percentile latency metrics are disabled. |
dfs.encrypt.data.transfer |
false |
Whether or not actual block data that is read/written from/to HDFS
should be encrypted on the wire. This only needs to be set on the NN and
DNs, clients will deduce this automatically. |
dfs.encrypt.data.transfer.algorithm |
|
This value may be set to either "3des" or "rc4". If nothing is set,
then the configured JCE default on the system is used (usually 3DES). It
is widely believed that 3DES is more cryptographically secure, but RC4 is
substantially faster. |
dfs.datanode.hdfs-blocks-metadata.enabled |
false |
Boolean which enables backend datanode-side support for the
experimental DistributedFileSystem#getFileBlockStorageLocations
API. |
dfs.client.file-block-storage-locations.num-threads |
10 |
Number of threads used for making parallel RPCs in
DistributedFileSystem#getFileBlockStorageLocations(). |
dfs.client.file-block-storage-locations.timeout |
60 |
Timeout (in seconds) for the parallel RPCs made in
DistributedFileSystem#getFileBlockStorageLocations(). |
dfs.journalnode.rpc-address |
0.0.0.0:8485 |
The JournalNode RPC server address and port. |
dfs.journalnode.http-address |
0.0.0.0:8480 |
The address and port the JournalNode web UI listens on. If the port is
0 then the server will start on a free port. |
dfs.namenode.audit.loggers |
default |
List of classes implementing audit loggers that will receive audit
events. These should be implementations of
org.apache.hadoop.hdfs.server.namenode.AuditLogger. The special value
"default" can be used to reference the default audit logger, which uses
the configured log system. Installing custom audit loggers may affect the
performance and stability of the NameNode. Refer to the custom logger's
documentation for more details. |
dfs.domain.socket.path |
|
Optional. This is a path to a UNIX domain socket that will be used for
communication between the DataNode and local HDFS clients. If the string
"_PORT" is present in this path, it will be replaced by the TCP port of
the DataNode. |
dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold |
10737418240 |
Only used when the dfs.datanode.fsdataset.volume.choosing.policy is
set to
org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy.
This setting controls how much DN volumes are allowed to differ in terms
of bytes of free disk space before they are considered imbalanced. If the
free space of all the volumes are within this range of each other, the
volumes will be considered balanced and block assignments will be done on
a pure round robin basis. |
dfs.datanode.available-space-volume-choosing-policy.balanced-space-preference-fraction |
0.75f |
Only used when the dfs.datanode.fsdataset.volume.choosing.policy is
set to
org.apache.hadoop.hdfs.server.datanode.fsdataset.AvailableSpaceVolumeChoosingPolicy.
This setting controls what percentage of new block allocations will be
sent to volumes with more available disk space than others. This setting
should be in the range 0.0 - 1.0, though in practice 0.5 - 1.0, since
there should be no reason to prefer that volumes with less available disk
space receive more block allocations. |
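The balanced-space threshold from the two entries above can be modeled as a toy check (an illustration of the documented behavior, not the Hadoop implementation):

```python
def volumes_balanced(free_bytes, threshold=10 * 1024**3):
    """Volumes count as balanced when all free-space figures lie within
    `threshold` bytes of each other; allocation is then pure round-robin."""
    return max(free_bytes) - min(free_bytes) <= threshold

GiB = 1024**3
print(volumes_balanced([100 * GiB, 95 * GiB]))  # True: within 10 GiB
print(volumes_balanced([100 * GiB, 50 * GiB]))  # False: imbalanced
```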
dfs.namenode.edits.noeditlogchannelflush |
false |
Specifies whether to flush edit log file channel. When set, expensive
FileChannel#force calls are skipped and synchronous disk writes are
enabled instead by opening the edit log file with RandomAccessFile("rws")
flags. This can significantly improve the performance of edit log writes
on the Windows platform. Note that the behavior of the "rws" flags is
platform and hardware specific and might not provide the same level of
guarantees as FileChannel#force. For example, the write will skip the
disk-cache on SAS and SCSI devices while it might not on SATA devices.
This is an expert level setting, change with caution. |
dfs.client.cache.drop.behind.writes |
|
Just like dfs.datanode.drop.cache.behind.writes, this setting causes
the page cache to be dropped behind HDFS writes, potentially freeing up
more memory for other uses. Unlike dfs.datanode.drop.cache.behind.writes,
this is a client-side setting rather than a setting for the entire
datanode. If present, this setting will override the DataNode default. If
the native libraries are not available to the DataNode, this configuration
has no effect. |
dfs.client.cache.drop.behind.reads |
|
Just like dfs.datanode.drop.cache.behind.reads, this setting causes
the page cache to be dropped behind HDFS reads, potentially freeing up
more memory for other uses. Unlike dfs.datanode.drop.cache.behind.reads,
this is a client-side setting rather than a setting for the entire
datanode. If present, this setting will override the DataNode default. If
the native libraries are not available to the DataNode, this configuration
has no effect. |
dfs.client.cache.readahead |
|
Just like dfs.datanode.readahead.bytes, this setting causes the
datanode to read ahead in the block file using posix_fadvise, potentially
decreasing I/O wait times. Unlike dfs.datanode.readahead.bytes, this is a
client-side setting rather than a setting for the entire datanode. If
present, this setting will override the DataNode default. If the native
libraries are not available to the DataNode, this configuration has no
effect. |
dfs.namenode.enable.retrycache |
true |
This enables the retry cache on the namenode. Namenode tracks for
non-idempotent requests the corresponding response. If a client retries
the request, the response from the retry cache is sent. Such operations
are tagged with annotation @AtMostOnce in namenode protocols. It is
recommended that this flag be set to true. Setting it to false, will
result in clients getting failure responses to retried request. This flag
must be enabled in HA setup for transparent fail-overs. The entries in the
cache have expiration time configurable using
dfs.namenode.retrycache.expirytime.millis. |
dfs.namenode.retrycache.expirytime.millis |
600000 |
The time for which retry cache entries are retained. |
dfs.namenode.retrycache.heap.percent |
0.03f |
This parameter configures the heap size allocated for retry cache
(excluding the response cached). This corresponds to approximately 4096
entries for every 64MB of namenode process java heap size. Assuming retry
cache entry expiration time (configured using
dfs.namenode.retrycache.expirytime.millis) of 10 minutes, this enables
retry cache to support 7 operations per second sustained for 10 minutes.
As the heap size is increased, the operation rate linearly increases.
|