real 0m5.284s
user 0m0.003s
sys 0m0.031s
Here is the sar record we captured at the time.
07:00:01 AM CPU %user %nice %system %iowait %steal %idle
09:20:01 AM all 10.48 0.11 1.76 2.89 0.00 84.76
09:30:01 AM all 10.59 0.10 1.81 2.45 0.00 85.04
09:40:01 AM all 7.91 0.18 1.61 3.20 0.00 87.10
09:50:01 AM all 7.26 0.07 1.66 3.23 0.00 87.78
10:00:01 AM all 7.54 0.13 1.53 3.67 0.00 87.13
10:10:01 AM all 7.78 0.09 1.76 3.92 0.00 86.45
10:20:01 AM all 8.24 0.09 2.27 3.98 0.00 85.43
10:30:01 AM all 7.38 0.08 1.79 5.18 0.00 85.57
10:40:01 AM all 8.14 0.16 2.01 6.36 0.00 83.33
10:50:02 AM all 7.05 0.10 1.74 4.83 0.00 86.29
11:00:01 AM all 7.61 0.09 2.04 5.43 0.00 84.83
11:10:01 AM all 7.22 0.09 1.70 6.22 0.00 84.76
11:20:01 AM all 6.71 0.12 2.10 7.35 0.00 83.72
11:30:01 AM all 9.36 0.10 2.87 5.03 0.00 82.63
11:40:01 AM all 7.26 0.25 1.76 6.08 0.00 84.65
11:50:01 AM all 7.17 0.12 2.40 5.24 0.00 85.07
12:00:01 PM all 6.30 0.10 2.64 5.27 0.00 85.69
Average: all 10.36 0.26 1.14 3.40 0.00 84.83
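The %iowait column above is what sar derives from the kernel's /proc/stat counters. As a minimal sketch (not the author's collection method), the same ratio can be computed from a single snapshot; note this is cumulative since boot, not a 10-minute interval like sar's rows:

```shell
# Compute %iowait from /proc/stat: field 6 of the aggregate "cpu " line is
# the iowait tick counter; divide by the sum of all tick counters.
awk '/^cpu /{total=0; for(i=2;i<=NF;i++) total+=$i;
     printf "%%iowait since boot: %.2f\n", 100*$6/total}' /proc/stat
```

With sysstat's usual cron collection, the historical rows above would come from `sar -u -f` against the daily data file.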
Data from a month earlier:
Production statistics 20-June-14:
204+0 records in
204+0 records out
213909504 bytes (214 MB) copied, 1.44182 seconds, 148 MB/s
real 0m1.445s
user 0m0.001s
sys 0m0.039s
Test environment
TEST machine statistics:
204+0 records in
204+0 records out
213909504 bytes (214 MB) copied, 0.550607 seconds, 388 MB/s
real 0m0.595s
user 0m0.001s
sys 0m0.072s
Another data migration server
TEST2 machine statistics:
213909504 bytes (214 MB) copied, 0.320128 seconds, 668 MB/s
real 0m0.43s
user 0m0.01s
sys 0m0.42s
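The numbers above are consistent with a timed sequential `dd` read. The following is a hedged sketch of such a test; the temp-file path is an assumption, while `bs`/`count` match the "204+0 records" and 213909504-byte (204 x 1 MiB) figures in the output:

```shell
# Create a ~204 MiB test file, then time reading it back sequentially.
TESTFILE=/tmp/ddtest.img
dd if=/dev/zero of="$TESTFILE" bs=1048576 count=204 2>/dev/null
sync
# On the real servers the cache would be cold (or iflag=direct used to
# bypass the page cache), so a warm-cache local run will look faster.
time dd if="$TESTFILE" of=/dev/null bs=1048576
rm -f "$TESTFILE"
```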
The first two factors are the major ones:
VXFS version
We had I/O performance issues with the very same version of VxFS (6.0.100.000) installed at TRUE.
Eventually we found we were hitting the following bug, which is fixed in version 6.0.3: https://sort.symantec.com/patch/detail/8260
This happened at that site even though it was a fresh install and NOT an upgrade, as indicated below.
We did see the very same performance degradation when removing the direct mount option.
Hence we recommend installing this patch.
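Whether the fix is present comes down to a version comparison against 6.0.3. A minimal sketch of that check, assuming the installed version string shown above (on a real host it would be read from the VRTSvxfs package):

```shell
# Compare the installed VxFS version against the release carrying the fix
# for incident 3254132, using GNU sort's version ordering (-V).
installed="6.0.100.000"   # assumed example value, as reported at the site
fixed="6.0.300.000"       # 6.0.3, where the unplug bug is fixed
oldest=$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n1)
if [ "$oldest" = "$installed" ] && [ "$installed" != "$fixed" ]; then
    echo "VxFS $installed predates $fixed: patch recommended"
fi
```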
SYMPTOM:
Performance degradation is seen after upgrading from SF 5.1SP1RP3 to SF 6.0.1 on Linux.
DESCRIPTION:
The degradation in performance is seen because the I/Os are not unplugged before being delivered to the lower layers in the I/O path. These I/Os are instead unplugged by the OS after a default time of 3 milliseconds, which results in additional overhead in completing the I/Os.
RESOLUTION:
Code changes were made to explicitly unplug the I/Os before sending them to the lower layer.
* 3254132 (Tracking ID: 3186971)
Power management
Servers should have power-management savings disabled and be set to high performance.
Make sure C-states are disabled (set to C0).
This is configured at the BIOS level and requires a reboot.
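Although the setting itself is in the BIOS, the kernel's view of the available C-states can be checked from sysfs on Linux; if deeper states than C0/C1 (e.g. C3, C6) still appear here after the change, the BIOS setting did not take effect. This is a sketch using the standard cpuidle layout, which may not be exposed on every kernel or VM:

```shell
# List the idle states the kernel sees for CPU 0.
CPUIDLE=/sys/devices/system/cpu/cpu0/cpuidle
if [ -d "$CPUIDLE" ]; then
    grep . "$CPUIDLE"/state*/name
else
    echo "cpuidle not exposed (disabled in BIOS/kernel, or not bare-metal Linux)"
fi
```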