Displaying an Image as a Word Cloud in Python


First, install a few third-party packages:

pip install scipy
Collecting scipy
Downloading https://files.pythonhosted.org/packages/f1/b8/800d98339427199305f8b4a7f02827ec9bfea438d677aecbe0bd297092d5/scipy-1.2.0-cp37-cp37m-win_amd64.whl (31.7MB)
100% |████████████████████████████████| 31.7MB 167kB/s
Requirement already satisfied: numpy>=1.8.2 in d:\systemtools\python37\lib\site-packages (from scipy) (1.15.4)
Installing collected packages: scipy
Successfully installed scipy-1.2.0

pip install jieba
Collecting jieba
Downloading https://files.pythonhosted.org/packages/71/46/c6f9179f73b818d5827202ad1c4a94e371a29473b7f043b736b4dab6b8cd/jieba-0.39.zip (7.3MB)
100% |████████████████████████████████| 7.3MB 513kB/s
Building wheels for collected packages: jieba
Running setup.py bdist_wheel for jieba ... done
Stored in directory: C:\Users\sunzhongan\AppData\Local\pip\Cache\wheels\c9\c7\63\a9ec0322ccc7c365fd51e475942a82395807186e94f0522243
Successfully built jieba
Installing collected packages: jieba
Successfully installed jieba-0.39

pip install matplotlib
Collecting matplotlib
Downloading https://files.pythonhosted.org/packages/5c/ee/efaf04efc763709f6840cd8d08865d194f7453f43e98d042c92755cdddec/matplotlib-3.0.2-cp37-cp37m-win_amd64.whl (8.9MB)
100% |████████████████████████████████| 8.9MB 93kB/s
Collecting kiwisolver>=1.0.1 (from matplotlib)
Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)")': /simple/kiwisolver/
Downloading https://files.pythonhosted.org/packages/7c/be/7ae355b45699460e369ebf88d86058fca26827933974cc3f6b6b7800a324/kiwisolver-1.0.1-cp37-none-win_amd64.whl (57kB)
100% |████████████████████████████████| 61kB 123kB/s
Collecting python-dateutil>=2.1 (from matplotlib)
Downloading https://files.pythonhosted.org/packages/74/68/d87d9b36af36f44254a8d512cbfc48369103a3b9e474be9bdfe536abfc45/python_dateutil-2.7.5-py2.py3-none-any.whl (225kB)
100% |████████████████████████████████| 235kB 153kB/s
Requirement already satisfied: numpy>=1.10.0 in d:\systemtools\python37\lib\site-packages (from matplotlib) (1.15.4)
Collecting cycler>=0.10 (from matplotlib)
Downloading https://files.pythonhosted.org/packages/f7/d2/e07d3ebb2bd7af696440ce7e754c59dd546ffe1bbe732c8ab68b9c834e61/cycler-0.10.0-py2.py3-none-any.whl
Collecting pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 (from matplotlib)
Downloading https://files.pythonhosted.org/packages/de/0a/001be530836743d8be6c2d85069f46fecf84ac6c18c7f5fb8125ee11d854/pyparsing-2.3.1-py2.py3-none-any.whl (61kB)
100% |████████████████████████████████| 71kB 139kB/s
Requirement already satisfied: setuptools in d:\systemtools\python37\lib\site-packages (from kiwisolver>=1.0.1->matplotlib) (39.0.1)
Requirement already satisfied: six>=1.5 in d:\systemtools\python37\lib\site-packages (from python-dateutil>=2.1->matplotlib) (1.11.0)
Installing collected packages: kiwisolver, python-dateutil, cycler, pyparsing, matplotlib
Successfully installed cycler-0.10.0 kiwisolver-1.0.1 matplotlib-3.0.2 pyparsing-2.3.1 python-dateutil-2.7.5

pip install wordcloud
Collecting wordcloud
Downloading https://files.pythonhosted.org/packages/23/4e/1254d26ce5d36facdcbb5820e7e434328aed68e99938c75c9d4e2fee5efb/wordcloud-1.5.0-cp37-cp37m-win_amd64.whl (153kB)
100% |████████████████████████████████| 163kB 543kB/s
Requirement already satisfied: numpy>=1.6.1 in d:\systemtools\python37\lib\site-packages (from wordcloud) (1.15.4)
Collecting pillow (from wordcloud)
Downloading https://files.pythonhosted.org/packages/20/59/edb6fe64a608afc9fd1faf3470774b4131b4be9d40c496b0c144033e249a/Pillow-5.4.1-cp37-cp37m-win_amd64.whl (2.0MB)
100% |████████████████████████████████| 2.0MB 1.6MB/s
Installing collected packages: pillow, wordcloud
The script wordcloud_cli.exe is installed in 'd:\systemtools\python37\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed pillow-5.4.1 wordcloud-1.5.0
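With the four packages in place, a quick import check confirms the environment works before writing any real code (a minimal sketch; the version numbers in the comments simply mirror the install log above):

# Sanity check: import every package used below and print its version.
import scipy
import jieba
import matplotlib
import wordcloud

print(scipy.__version__)       # 1.2.0 in the log above
print(jieba.__version__)       # 0.39
print(matplotlib.__version__)  # 3.0.2
print(wordcloud.__version__)   # 1.5.0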

Next, the program:

# coding: utf-8
import jieba
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image  # scipy.misc.imread is deprecated/removed in newer SciPy, so read the image with Pillow
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator

back_color = np.array(Image.open('daxiang.jpg'))  # load the mask image as an array

wc = WordCloud(background_color='white',  # background color
               max_words=1000,  # maximum number of words
               mask=back_color,  # draw the cloud in this shape; width and height are ignored when mask is set
               max_font_size=100,  # largest font size
               stopwords=STOPWORDS | {'哇卡拉'},  # built-in stopwords plus '哇卡拉' (set.add returns None, so pass the union instead)
               font_path="C:/Windows/Fonts/STFANGSO.ttf",  # a CJK font, otherwise Chinese shows as boxes; pick another font from C:/Windows/Fonts/ if you like
               # the .ttf file must exist under that path, otherwise: OSError: cannot open resource
               random_state=42,  # seed for the random number generator, makes layout and colors reproducible
               # width=1000,  # canvas width
               # height=860   # canvas height
               )

# Add a custom word to jieba's dictionary. After adding '你知道难道别人不知道',
# any occurrence in the text is kept as a single token instead of being split
# into pieces like '知道' or '不知道'.
jieba.add_word('你知道难道别人不知道')

# Read the source text
text = open('cnword.txt', encoding='utf-8').read()

# Strip stopwords from the segmented text. With this function there is no need
# for the stopwords argument of WordCloud; just put every word you want to hide
# into stopwords.txt, one word per line.
def stop_words(texts):
    words_list = []
    word_generator = jieba.cut(texts, cut_all=False)  # returns a generator
    with open('stopwords.txt', encoding='utf-8') as f:
        stop_text = f.read()  # already str in Python 3, no unicode() conversion needed
    for word in word_generator:
        if word.strip() not in stop_text:
            words_list.append(word)
    return ' '.join(words_list)  # note: joined with spaces

text = stop_words(text)
wc.generate(text)

# Build a color generator from the original image
image_colors = ImageColorGenerator(back_color)

# Show the word cloud
plt.imshow(wc)
plt.axis('off')  # hide the axes

# Show the word cloud recolored from the image
plt.figure()
plt.imshow(wc.recolor(color_func=image_colors))
plt.axis('off')
plt.show()

# Save the result
wc.to_file('xixixi.png')
Word source file: cnword.txt, which contains the city names from the previous post.
Stopword file: stopwords.txt, with a few cities I don't want displayed.
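If you want to try the script without those exact files, a couple of throwaway inputs in the same format will do; the city names below are purely illustrative and are not the real contents of my cnword.txt or stopwords.txt:

# Write small sample input files (contents are made up for illustration).
# cnword.txt: the raw text to segment and draw.
# stopwords.txt: one word per line, each word is filtered out.
with open('cnword.txt', 'w', encoding='utf-8') as f:
    f.write('北京 上海 广州 深圳 杭州 成都 北京 上海 北京')

with open('stopwords.txt', 'w', encoding='utf-8') as f:
    f.write('成都\n杭州\n')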

A quick run of the conversion turns it into the image below:

[Image: the generated word cloud]

Finally, better to teach someone to fish than to hand them a fish!

The WordCloud parameters and their meanings are listed below:

font_path : string  # path to the font file to use, e.g. font_path = '黑体.ttf'

width : int (default=400)  # width of the output canvas, 400 px by default

height : int (default=200)  # height of the output canvas, 200 px by default

prefer_horizontal : float (default=0.90)  # how often words are laid out horizontally rather than vertically; 0.9 by default, so roughly 10% of words are drawn vertically

mask : nd-array or None (default=None)  # if None, the cloud is drawn on a plain width x height canvas. If a mask is given, width and height are ignored and the mask defines the shape: pure white (#FFFFFF) areas are left blank and every other area is filled with words, e.g. bg_pic = np.array(Image.open('some_image.png')). The shape you want must sit on a pure white (#FFFFFF) background; if needed, paste it onto a white canvas in an image editor first and save that.

scale : float (default=1)  # scaling factor for the canvas; 1.5 makes both width and height 1.5 times larger

min_font_size : int (default=4)  # smallest font size to display

font_step : int (default=1)  # step size for the font; values greater than 1 speed up computation but may give a noticeably worse fit

max_words : number (default=200)  # maximum number of words to display

stopwords : set of strings or None  # words to filter out; if None, the built-in STOPWORDS set is used

background_color : color value (default="black")  # background color, e.g. background_color='white' for a white background

max_font_size : int or None (default=None)  # largest font size to display

mode : string (default="RGB")  # with mode="RGBA" and background_color=None, the background is transparent

relative_scaling : float (default=.5)  # how strongly font size depends on word frequency rather than on rank

color_func : callable, default=None  # function that returns a color for each word; overrides colormap if given

regexp : string or None (optional)  # regular expression used to split the input text into tokens

collocations : bool, default=True  # whether to include two-word collocations (bigrams)

colormap : string or matplotlib colormap, default="viridis"  # matplotlib colormap used to pick a random color for each word; ignored if color_func is specified

random_state : int or None  # seed for the random number generator, so layout and colors are reproducible

fit_words(frequencies)  # generate the word cloud from a word-frequency mapping
generate(text)  # generate the word cloud from text
generate_from_frequencies(frequencies[, ...])  # generate the word cloud from a word-frequency mapping
generate_from_text(text)  # generate the word cloud from text
process_text(text)  # split long text into tokens and drop stopwords (English only; for Chinese, segment with another library first and use fit_words(frequencies) above, as sketched right after this list)
recolor([random_state, color_func, colormap])  # recolor an existing layout; much faster than regenerating the whole cloud
to_array()  # convert to a numpy array
to_file(filename)  # save to an image file
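Since process_text only handles English, the usual route for Chinese is to segment and count the words yourself, then call fit_words / generate_from_frequencies. The minimal sketch below reuses the daxiang.jpg mask, cnword.txt source and font path assumed earlier; the output name freq_cloud.png is just an example:

# Count Chinese words with jieba, then build the cloud from the frequencies.
from collections import Counter

import jieba
import numpy as np
from PIL import Image
from wordcloud import WordCloud, ImageColorGenerator

mask = np.array(Image.open('daxiang.jpg'))  # same mask image as in the main program
text = open('cnword.txt', encoding='utf-8').read()
freqs = Counter(w for w in jieba.cut(text) if w.strip())  # word -> count

wc = WordCloud(font_path='C:/Windows/Fonts/STFANGSO.ttf',  # CJK font, as above
               background_color='white',
               mask=mask,
               max_words=200)
wc.generate_from_frequencies(freqs)               # equivalent to fit_words(freqs)
wc.recolor(color_func=ImageColorGenerator(mask))  # recolor from the mask image
wc.to_file('freq_cloud.png')                      # save the result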

 

Reference: https://www.cnblogs.com/delav/p/7845539.html
