[Repost] Python Programming: Merging Multiple PDF Files and Automatically Downloading PDF Files from a Web Page

1. Merging Multiple PDF Files
1.1 Requirement Description
Sometimes we have downloaded several PDF files and would like to combine them into a single PDF. For example, after downloading a number of PDF documents or electronic invoices, you can use a Python program to merge them into one file, which makes both reading and printing more convenient.

1.2 Technical Analysis
First, we read all PDF files from a directory (for simplicity, we assume the Python script and the PDF files live in the same directory), then merge them with the PdfFileMerger class from the PyPDF2 library, and finally print the name of the generated output file.
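
As a minimal sketch of the core API (assuming PyPDF2 1.x/2.x, where PdfFileMerger is available; the file names a.pdf and b.pdf are hypothetical):

from PyPDF2 import PdfFileMerger

merger = PdfFileMerger()
merger.append('a.pdf')          # append() accepts a file path or an open file object
merger.append('b.pdf')
merger.write('combined.pdf')    # write out the merged document
merger.close()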

1.3 Code Implementation
remove_pdf_file(file): deletes a PDF file; used mainly to remove a merged PDF left over from a previous run
get_all_pdf_files(path): reads all PDF files under path and returns them as a list
merge_pdf_file(pdfs): merges the PDF files in the list pdfs into a single file, merged_pdf_file.pdf
Python code

# merge_pdf_files.py

import os
import sys

from PyPDF2 import PdfFileMerger


def remove_pdf_file(file):
    os.remove(file)


def get_all_pdf_files(path):
    # Collect the names of all PDF files in the given directory.
    pdfs = [file for file in os.listdir(path) if file.endswith('.pdf')]
    return pdfs


def merge_pdf_file(pdfs):
    pdf_file_merger = PdfFileMerger()
    merged_pdf = 'merged_pdf_file.pdf'

    for pdf in pdfs:
        # If a merged file from a previous run is in the list, delete it
        # and skip it instead of merging it into itself.
        if merged_pdf == pdf:
            remove_pdf_file(pdf)
            continue
        try:
            pdf_file_merger.append(open(pdf, 'rb'))
        except Exception:
            print("merging pdf file %s failed." % pdf)

    with open(merged_pdf, 'wb') as fout:
        pdf_file_merger.write(fout)

    return merged_pdf


def main():
    pdfs = get_all_pdf_files(sys.path[0])  # the directory containing this script

    print('The file', merge_pdf_file(pdfs), 'is generated.')


if __name__ == "__main__":
    main()
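
Note that PdfFileMerger comes from the older PyPDF2 releases; in PyPDF2 3.x the class was renamed to PdfMerger (and the newer pypdf package offers PdfWriter.append() for the same job). A hedged variant for PyPDF2 3.x, keeping the same append()/write() calls:

from PyPDF2 import PdfMerger    # PyPDF2 3.x name for the merger class

merger = PdfMerger()
for pdf in ['a.pdf', 'b.pdf']:  # hypothetical file names
    merger.append(pdf)
merger.write('merged_pdf_file.pdf')
merger.close()
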
2. Automatically Downloading Multiple PDF Files from a Web Page and Merging Them
2.1 Requirement Description
Suppose a new requirement comes up: automatically download several PDF files from a web page and then merge them into a single PDF file. How can this be done?

2.2 Technical Analysis
We use the requests library to fetch the page, use a regular expression to find all PDF links in the page source, download each PDF file with requests, and finally merge the downloaded files into a single file, merged_pdf_file.pdf.
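
The regular expression used below is tailored to this particular page's markup (<li><a href=... class="title">). As a hedged alternative sketch, the standard library's html.parser can collect every href that ends in .pdf regardless of the surrounding tags; the URL is the same course page used later in this article:

from html.parser import HTMLParser

import requests


class PdfLinkParser(HTMLParser):
    """Collect the href of every <a> tag that points to a .pdf file."""
    def __init__(self):
        super().__init__()
        self.pdf_links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href' and value and value.lower().endswith('.pdf'):
                    self.pdf_links.append(value)


parser = PdfLinkParser()
parser.feed(requests.get("http://www.cs.brandeis.edu/~cs134/").text)
print(parser.pdf_links)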

2.3 Code Implementation
get_page_source(url): fetches the page content
get_all_pdfs_url(html): parses the page content and returns the list of PDF links
download_all_pdf_files(url, pdfs): downloads all the PDF files into the current directory
remove_pdf_file(file): deletes a PDF file; used mainly to remove a merged PDF left over from a previous run
remove_pdf_files(pdfs): deletes the PDF files in the list; used mainly to remove the temporarily downloaded PDFs
merge_pdf_files(pdfs): merges the PDF files in the list pdfs into a single file, merged_pdf_file.pdf
Python code

# download_merge_pdf_files.py

import os
import re

import requests
from PyPDF2 import PdfFileMerger


def get_page_source(url):
    r = requests.get(url)
    return r.text


def get_all_pdfs_url(html):
    # The pattern is tailored to this page's markup: <li><a href="..." class="title">
    all_files = re.findall('<li><a href=(.*?)class="title">', html, re.S)

    # Strip whitespace and the surrounding quotes, and keep only PDF links.
    return [item.strip()[1:-1] for item in all_files if "pdf" in item]


def download_all_pdf_files(url, pdfs):
    print("The following pdf files are downloaded from", url)
    for index, pdf in enumerate(pdfs, 1):
        print("%d. %s" % (index, pdf))

        response = requests.get(url + pdf, stream=True)

        try:
            # Prefix the file name with its index to keep the original order.
            new_pdf_file = str(index) + '. ' + pdf
            with open(new_pdf_file, 'wb') as handle:
                for block in response.iter_content(1024):
                    handle.write(block)
        except Exception:
            print("writing pdf file %s failed." % new_pdf_file)


def merge_pdf_files(pdfs):
    pdf_file_merger = PdfFileMerger()
    merged_pdf = 'merged_pdf_file.pdf'

    for index, pdf in enumerate(pdfs, 1):
        # Skip a merged file left over from a previous run instead of merging it again.
        if merged_pdf == pdf:
            remove_pdf_file(pdf)
            continue
        try:
            new_pdf_file = str(index) + '. ' + pdf
            pdf_file_merger.append(open(new_pdf_file, 'rb'))
        except Exception:
            print("merging pdf file %s failed." % new_pdf_file)

    with open(merged_pdf, 'wb') as fout:
        pdf_file_merger.write(fout)

    return merged_pdf


def remove_pdf_file(file):
    os.remove(file)


def remove_pdf_files(pdfs):
    for file in pdfs:
        remove_pdf_file(file)


def main():
    URL = "http://www.cs.brandeis.edu/~cs134/"
    html = get_page_source(URL)
    pdfs = get_all_pdfs_url(html)

    download_all_pdf_files(URL, pdfs)
    print('The file', merge_pdf_files(pdfs), 'is generated.')
    # remove_pdf_files(pdfs)  # uncomment to remove the original downloaded PDF files


if __name__ == "__main__":
    main()

#The following pdf files are downloaded from http://www.cs.brandeis.edu/~cs134/
#1. Lecture1-Overview.pdf
#2. Lecture2-ProbabilityFundementals.pdf
#3. Lecture3-TextClassification-NB-Part1.pdf
#4. TextClassification-NB-MaxEnt.pdf
#5. K_F_Ch3.pdf
#6. Handout1.pdf
#7. Quiz1Topics.pdf
#8. HW1.pdf
#PdfReadWarning: Xref table not zero-indexed. ID numbers for objects will be corrected. [pdf.py:1736]
#The file merged_pdf_file.pdf is generated.
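
A small hardening note on the download step (a hedged sketch, not part of the original script): the code above concatenates url + pdf, which only works when every href is relative to the page, and it writes whatever the server returns; urllib.parse.urljoin handles both relative and absolute links, and response.raise_for_status() prevents an HTML error page from being saved as a ".pdf". The helper download_pdf below is hypothetical and just illustrates both points.

from urllib.parse import urljoin

import requests


def download_pdf(base_url, href, filename):
    # urljoin resolves relative hrefs against the page URL and leaves
    # absolute URLs untouched; raise_for_status() aborts on HTTP errors.
    response = requests.get(urljoin(base_url, href), stream=True)
    response.raise_for_status()
    with open(filename, 'wb') as handle:
        for block in response.iter_content(1024):
            handle.write(block)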

Source: https://blog.csdn.net/weixin_43379056/article/details/88020504
