Recently the company handed me an optimization task: a time-consuming operation that, at a transaction volume in the tens of billions, was processing extremely slowly and needed to be optimized in preparation for the daily interest payout. Here I'd like to walk through my optimization approach and discuss it with you.
Code logic:
Look up the area id for each user by user id, processing users in batches of 1,000 with 20 worker threads.
Step 1: add caching
Looking up a user's area id is done in two steps (the two calls in the code below): first get the city id from the user id, then get the area id from the city id. Using the approach from the earlier post, 项目修炼之路(4)aop+注解的自动缓存 (AOP + annotation-based automatic caching), add a Redis cache to both methods.
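That earlier post isn't reproduced here, so the following is only a minimal sketch of what such an annotation-driven cache can look like. The annotation name, the aspect, and the RedisUtils.get/set helpers are all assumptions for illustration, not the original implementation:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

// Hypothetical annotation: marks a method whose result should be cached in Redis.
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@interface RedisCache {
    String keyPrefix();               // e.g. "user:city:"
    int expireSeconds() default 3600; // cache TTL
}

// Hypothetical aspect: tries Redis first and falls through to the method on a miss.
@Aspect
public class RedisCacheAspect {
    @Around("@annotation(cache)")
    public Object around(ProceedingJoinPoint pjp, RedisCache cache) throws Throwable {
        // Build the key from the configured prefix plus the first argument (uid or city id).
        String key = cache.keyPrefix() + pjp.getArgs()[0];
        Object cached = RedisUtils.get(key);   // assumed helper: deserializes the cached value
        if (cached != null) {
            return cached;                     // hit: skip the expensive lookup entirely
        }
        Object value = pjp.proceed();          // miss: run the real method
        if (value != null) {
            RedisUtils.set(key, value, cache.expireSeconds()); // assumed helper
        }
        return value;
    }
}

With both lookups annotated this way, the per-uid method itself stays unchanged: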
@Override
public PublicResult<HashMap<Integer, Integer>> getUserAreaFranchiseeIDS(List<Integer> uids) {
    PublicResult<HashMap<Integer, Integer>> result = new PublicResult<HashMap<Integer, Integer>>();
    HashMap<Integer, Integer> resultMap = new HashMap<Integer, Integer>();
    long time;
    for (Integer uid : uids) {
        Integer areaId = Integer.valueOf(0);
        try {
            time = System.currentTimeMillis();
            // Lookup 1: user id -> city id (now served from Redis via the AOP annotation).
            UserAreaFranchisee area = getUserAreaFranchisee(uid).getResult();
            LOGGER.info("=getUserAreaFranchiseeIDS=>--.uid:[" + uid + "].[get -- wmpsDayInterChange]getUserAreaFranchisee() -------------spent time:" + (System.currentTimeMillis() - time));
            time = System.currentTimeMillis();
            int id = 0;
            if (area != null && area.getCityid() != null && area.getCityid().intValue() > 0) {
                id = area.getCityid().intValue();
                // Lookup 2: city id -> area id (also cached). `tpr` was undeclared in the
                // original listing; the PublicResult<TongchengArea> type is inferred here.
                PublicResult<TongchengArea> tpr = logicTongchengAreaService.getTongchengArea(Integer.valueOf(id));
                if (tpr != null && tpr.isSuccess() && tpr.getResult() != null && tpr.getResult().getId() != null && tpr.getResult().getId() > 0) {
                    areaId = tpr.getResult().getId();
                }
            }
            LOGGER.info("=getUserAreaFranchiseeIDS=>--..uid:[" + uid + "].[get -- wmpsDayInterChange]getLogicTongchengAreaService() -------------spent time:" + (System.currentTimeMillis() - time));
        } catch (Exception e) {
            LOGGER.error("=getUserAreaFranchiseeIDS=>", e);
        }
        resultMap.put(uid, areaId);
    }
    result.setSuccess(true);
    result.setResult(resultMap);
    return result;
}
Step 2: merge the results
Problem: after adding the caches, we found that under frequent access the two cached entries were suboptimal: 1) each value is a full object, so every read pays for deserialization when only the id is actually needed; 2) two cache operations for what is ultimately a single result wastes resources.
After optimization: the two caches are merged into one, with a plain String key and an Integer value.
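The `getUserAreaIdByUid` call below is not shown in the original post. Here is a minimal sketch of what it might look like under the new scheme, reusing the step-1 helpers and the `RedisKeyUtils` constants that appear later in this post (the `TongchengArea` type name and the typed `RedisUtils.get`/`set` signatures are assumptions):

// Sketch only: one String -> Integer cache entry per uid, replacing the two
// object-valued caches from step 1.
public Integer getUserAreaIdByUid(Integer uid) {
    String key = RedisKeyUtils.USER_AREA_ID + uid;
    Integer areaId = RedisUtils.get(key, Integer.class); // assumed typed get
    if (areaId != null) {
        return areaId; // hit: a bare Integer, no object deserialization
    }
    // Miss: run the two lookups once, then cache only the final id.
    areaId = Integer.valueOf(0);
    UserAreaFranchisee area = getUserAreaFranchisee(uid).getResult();
    if (area != null && area.getCityid() != null && area.getCityid().intValue() > 0) {
        PublicResult<TongchengArea> tpr =
                logicTongchengAreaService.getTongchengArea(area.getCityid());
        if (tpr != null && tpr.isSuccess() && tpr.getResult() != null
                && tpr.getResult().getId() != null && tpr.getResult().getId() > 0) {
            areaId = tpr.getResult().getId();
        }
    }
    RedisUtils.set(key, areaId, RedisKeyUtils.USER_AREA_ID_TIME); // assumed setter with TTL
    return areaId;
}

The service method then shrinks to one call per uid: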
@Override
public PublicResult<String> getUserAreaFranchiseeIDS(ArrayList<Integer> uids) {
    PublicResult<String> result = new PublicResult<String>();
    HashMap<Integer, Integer> resultMap = new HashMap<Integer, Integer>();
    long time;
    for (Integer uid : uids) {
        Integer areaId = Integer.valueOf(0);
        try {
            time = System.currentTimeMillis();
            // A single lookup behind one String -> Integer cache entry per uid.
            areaId = userAreaFranchiseeService.getUserAreaIdByUid(uid);
            LOGGER.info("=getUserAreaFranchiseeIDS=>--.uid:[" + uid + "].[get -- wmpsDayInterChange]getUserAreaIdByUid() -------------spent time:" + (System.currentTimeMillis() - time));
        } catch (Exception e) {
            LOGGER.error("=getUserAreaFranchiseeIDS=>", e);
        }
        resultMap.put(uid, areaId);
    }
    result.setSuccess(true);
    // The map is now returned as a JSON string rather than a HashMap.
    result.setResult(JSON.toJSONString(resultMap));
    return result;
}
Step 3: batch reads
Problem: Redis is single-threaded, so when data is accessed in bulk the individual reads queue up and each one takes longer; on top of that, more time is spent on network round trips than on reading the data itself.
After optimization: fetch everything from Redis in one batch, turning many I/Os into a single one; only the keys that miss are read from the database and then written back to Redis.
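`RedisUtils.mget` is what collapses the round trips; its implementation is not shown in the original. On top of Jedis it could be a thin wrapper around the MGET command, roughly like this (a sketch: the real helper takes an `Object[]` and a target class, while this version simplifies to String keys, Integer values, and an unpooled connection):

import java.util.ArrayList;
import java.util.List;
import redis.clients.jedis.Jedis;

public class RedisUtils {
    // Fetch many keys in one MGET round trip instead of N separate GETs.
    // The returned list is aligned with the input keys; null marks a miss.
    public static List<Integer> mget(String[] keys) {
        try (Jedis jedis = new Jedis("localhost", 6379)) { // pooling omitted for brevity
            List<String> raw = jedis.mget(keys);           // one network round trip
            List<Integer> values = new ArrayList<Integer>(raw.size());
            for (String s : raw) {
                values.add(s == null ? null : Integer.valueOf(s));
            }
            return values;
        }
    }
}

The batch-reading version of the service method: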
@Override
public PublicResult<String> getUserAreaFranchiseeIDS(ArrayList<Integer> uids) {
    PublicResult<String> result = new PublicResult<String>();
    HashMap<Integer, Integer> resultMap = new HashMap<Integer, Integer>();
    long time;
    // Build the Redis keys for all uids, then fetch them in a single MGET.
    ArrayList<String> uidKeys = new ArrayList<String>();
    for (int i = 0; i < uids.size(); i++) {
        uidKeys.add(i, RedisKeyUtils.USER_AREA_ID + uids.get(i));
    }
    List<Integer> listAreas = RedisUtils.mget(uidKeys.toArray(), Integer.class);
    for (int i = 0; i < uids.size(); i++) {
        Integer uid = uids.get(i);
        Integer areaId = Integer.valueOf(0);
        if (listAreas.get(i) == null) {
            // Cache miss: fall back to the database (which also refills the cache).
            try {
                time = System.currentTimeMillis();
                areaId = userAreaFranchiseeService.getUserAreaIdByUid(uid);
                LOGGER.info("=getUserAreaFranchiseeIDS=>--.uid:[" + uid + "].[get -- wmpsDayInterChange]getUserAreaIdByUid() -------------spent time:" + (System.currentTimeMillis() - time));
            } catch (Exception e) {
                LOGGER.error("=getUserAreaFranchiseeIDS=>error uid:[" + uid + "]", e);
            }
            listAreas.set(i, areaId);
        }
        areaId = listAreas.get(i);
        resultMap.put(uid, areaId);
    }
    result.setSuccess(true);
    result.setResult(JSON.toJSONString(resultMap));
    return result;
}
Step 4: batch writes
Problem: with a cache expiry set, every time the cache lapses almost all reads hit the database, and re-adding the entries to Redis one by one makes reads periodically slow.
After optimization: lengthen the expiry, check whether at least half of the data can be served from Redis, and if not, bulk-load the data into Redis in a single I/O before running the normal logic.
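`RedisUtils.mset` is likewise not shown. Since the native MSET command accepts no TTL, a plausible implementation pipelines SETEX calls so the whole bulk load still costs a single network round trip (a sketch under the same assumptions as above):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;

public class RedisUtils {
    // Write many key/value pairs, all with the same TTL, in one pipelined round trip.
    // MSET itself cannot set an expiry, so SETEX commands are pipelined instead.
    public static void mset(Object[] keys, Object[] values, int expireSeconds) {
        try (Jedis jedis = new Jedis("localhost", 6379)) { // pooling omitted for brevity
            Pipeline pipeline = jedis.pipelined();
            for (int i = 0; i < keys.length; i++) {
                pipeline.setex(String.valueOf(keys[i]), expireSeconds, String.valueOf(values[i]));
            }
            pipeline.sync(); // flush every queued command at once
        }
    }
}

The final version of the service method, with the half-miss check and the bulk reload: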
@Override
public PublicResult<String> getUserAreaFranchiseeIDS(ArrayList<Integer> uids) {
    PublicResult<String> result = new PublicResult<String>();
    HashMap<Integer, Integer> resultMap = new HashMap<Integer, Integer>();
    long time;
    ArrayList<String> uidKeys = new ArrayList<String>();
    for (int i = 0; i < uids.size(); i++) {
        uidKeys.add(i, RedisKeyUtils.USER_AREA_ID + uids.get(i));
    }
    List<Integer> listAreas = RedisUtils.mget(uidKeys.toArray(), Integer.class);
    try {
        // If more than half of the keys missed, bulk-load the whole uid range into
        // Redis in one I/O, then re-read the batch.
        if (ListUtil.countNullNumber(listAreas) > listAreas.size() / 2) {
            initRedisByUids(uids);
            listAreas = RedisUtils.mget(uidKeys.toArray(), Integer.class);
        }
    } catch (Exception e) {
        LOGGER.error("=getUserAreaFranchiseeIDS=>initRedisByUids error", e);
    }
    for (int i = 0; i < uids.size(); i++) {
        Integer uid = uids.get(i);
        Integer areaId = Integer.valueOf(0);
        if (listAreas.get(i) == null) {
            try {
                time = System.currentTimeMillis();
                areaId = userAreaFranchiseeService.getUserAreaIdByUid(uid);
                LOGGER.info("=getUserAreaFranchiseeIDS=>--.uid:[" + uid + "].[get -- wmpsDayInterChange]getUserAreaIdByUid() -------------spent time:" + (System.currentTimeMillis() - time));
            } catch (Exception e) {
                LOGGER.error("=getUserAreaFranchiseeIDS=>error uid:[" + uid + "]", e);
            }
            listAreas.set(i, areaId);
        }
        areaId = listAreas.get(i);
        resultMap.put(uid, areaId);
    }
    result.setSuccess(true);
    result.setResult(JSON.toJSONString(resultMap));
    return result;
}

private boolean initRedisByUids(ArrayList<Integer> uids) {
    boolean isSuccess = false;
    HashMap<String, Integer> resultMap = null;
    try {
        resultMap = ListUtil.getMaxAndMinInterger(uids);
        if (resultMap != null && !resultMap.isEmpty()) {
            // Load every uid in [min, max] from the database in one query...
            List<UserAreaUidVo> listResult = userAreaFranchiseeService.getUserAreaIdPageByUid(resultMap.get(ListUtil.minNumKey), resultMap.get(ListUtil.maxNumKey));
            if (listResult != null && !listResult.isEmpty()) {
                // ...and write all of the entries back to Redis in a single batched call.
                HashMap<String, List> hashMapForUid = uidToRedisKeyAndVlues(listResult);
                RedisUtils.mset(hashMapForUid.get(RedisKeys).toArray(), hashMapForUid.get(RedisValues).toArray(), RedisKeyUtils.USER_AREA_ID_TIME);
                isSuccess = true;
            }
        }
    } catch (Exception e) {
        LOGGER.error("=initRedisByUids=>", e);
    }
    return isSuccess;
}

// RedisKeys and RedisValues are String constants of this class (their definitions
// are not part of the original listing); they key the parallel key/value lists.
private HashMap<String, List> uidToRedisKeyAndVlues(List<UserAreaUidVo> listUserArea) {
    HashMap<String, List> hashMapForUid = new HashMap<String, List>();
    List<String> keys = new ArrayList<String>(listUserArea.size());
    List<Integer> values = new ArrayList<Integer>(listUserArea.size());
    for (int i = 0; i < listUserArea.size(); i++) {
        keys.add(RedisKeyUtils.USER_AREA_ID + listUserArea.get(i).getUid());
        values.add(listUserArea.get(i).getAreaid() == null ? 0 : listUserArea.get(i).getAreaid());
    }
    hashMapForUid.put(RedisKeys, keys);
    hashMapForUid.put(RedisValues, values);
    return hashMapForUid;
}
Summary:
At work we run into all kinds of hard problems. Beyond sharpening our problem-solving skills, these problems help us build something wonderful: call it an approach, or a framework. The next time a similar problem appears, we map it back and ask: haven't I solved something like this before? And that isn't limited to code; it carries over to life and to everyday problems as well.
So what code accumulates is not just work experience, but life experience!
Appendix: utility class:
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

public class ListUtil {
    public static String maxNumKey = "max";
    public static String minNumKey = "min";

    /**
     * Split a list into sublists of at most the given size.
     * @param targe the list to split
     * @param size  maximum size of each sublist
     * @return the list of sublists
     */
    public static List<List> splitList(List targe, int size) {
        List<List> listArr = new ArrayList<List>();
        // Number of sublists after splitting.
        int arrSize = targe.size() % size == 0 ? targe.size() / size : targe.size() / size + 1;
        for (int i = 0; i < arrSize; i++) {
            List sub = new ArrayList();
            // Copy the elements belonging to this sublist's index range.
            for (int j = i * size; j <= size * (i + 1) - 1; j++) {
                if (j <= targe.size() - 1) {
                    sub.add(targe.get(j));
                }
            }
            listArr.add(sub);
        }
        return listArr;
    }

    /**
     * Count the null elements in a list.
     * @param listTest the list to scan
     * @return the number of null elements
     */
    public static long countNullNumber(List listTest) {
        long count = 0;
        for (int i = 0; i < listTest.size(); i++) {
            if (listTest.get(i) == null) {
                count++;
            }
        }
        return count;
    }

    /**
     * Find the maximum and minimum Integer in a list, skipping null elements.
     * @param listTest the list to scan
     * @return a map with the max under maxNumKey and the min under minNumKey
     */
    public static HashMap getMaxAndMinInterger(List<Integer> listTest) throws Exception {
        if (listTest == null || listTest.isEmpty()) {
            throw new Exception("=ListUtil.getMaxAndMinInterger=> listTest is null");
        }
        HashMap<String, Integer> result = new HashMap<String, Integer>();
        Integer maxNum = null;
        Integer minNum = null;
        for (int i = 0; i < listTest.size(); i++) {
            if (listTest.get(i) != null) {
                if (maxNum == null || maxNum < listTest.get(i)) {
                    maxNum = listTest.get(i);
                }
                if (minNum == null || minNum > listTest.get(i)) {
                    minNum = listTest.get(i);
                }
            }
        }
        if (maxNum == null || minNum == null) {
            throw new Exception("=ListUtil.getMaxAndMinInterger=> all elements are null");
        }
        result.put(maxNumKey, maxNum);
        result.put(minNumKey, minNum);
        return result;
    }
}
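To tie this back to the setup at the top (batches of 1,000 users, 20 threads), here is a hedged sketch of the kind of driver that could feed getUserAreaFranchiseeIDS using splitList and a fixed thread pool; the UserAreaService interface is a placeholder for the real service bean, and PublicResult is the result type used throughout this post:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class UserAreaBatchDriver {
    // Placeholder for the real service; only the method used here is declared.
    interface UserAreaService {
        PublicResult<String> getUserAreaFranchiseeIDS(ArrayList<Integer> uids);
    }

    public static void runBatches(final UserAreaService service, List<Integer> allUids)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(20); // 20 worker threads
        // One task per batch of 1,000 uids, matching the setup described at the top.
        for (final List batch : ListUtil.splitList(allUids, 1000)) {
            pool.execute(new Runnable() {
                @Override
                public void run() {
                    service.getUserAreaFranchiseeIDS(new ArrayList<Integer>(batch));
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}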
This article is reposted from yunlielai's 51CTO blog. Original link: http://blog.51cto.com/4925054/1920485. Please contact the original author for reprint permission.