Problems running Scrapy under Python 3

C:\Users\Administrator>scrapy startproject sss
Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 562, in module_for_loader_wrapper
  File "<frozen importlib._bootstrap>", line 870, in _load_module
  File "<frozen importlib._bootstrap>", line 313, in _call_with_frames_removed
  File "D:\Python33\lib\site-packages\twisted\internet\reactor.py", line 38, in
<module>
    from twisted.internet import default
  File "D:\Python33\lib\site-packages\twisted\internet\default.py", line 56, in
<module>
    install = _getInstallFunction(platform)
  File "D:\Python33\lib\site-packages\twisted\internet\default.py", line 50, in
_getInstallFunction
    from twisted.internet.selectreactor import install
  File "D:\Python33\lib\site-packages\twisted\internet\selectreactor.py", line 1
8, in <module>
    from twisted.internet import posixbase
  File "D:\Python33\lib\site-packages\twisted\internet\posixbase.py", line 24, i
n <module>
    from twisted.internet import error, udp, tcp
  File "D:\Python33\lib\site-packages\twisted\internet\udp.py", line 51, in <mod
ule>
    from twisted.internet import base, defer, address
  File "D:\Python33\lib\site-packages\twisted\internet\base.py", line 23, in <mo
dule>
    from twisted.internet import fdesc, main, error, abstract, defer, threads
  File "D:\Python33\lib\site-packages\twisted\internet\defer.py", line 29, in <m
odule>
    from twisted.python import lockfile, log, failure
  File "D:\Python33\lib\site-packages\twisted\python\lockfile.py", line 52, in <
module>
    _open = file
NameError: name 'file' is not defined
This error occurs because this version of Twisted does not support Python 3: the file() built-in was removed in Python 3, and open() is used instead. Even the Python 2.x documentation for file(filename[, mode[, bufsize]]) already said: "When opening a file, it's preferable to use open() instead of invoking this constructor directly. file is more suited to type testing (for example, writing isinstance(f, file))."

Fix: in twisted/python/lockfile.py, change _open = file to:

    try:
        _open = file
    except:
        _open = open
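
As a standalone sketch, the same idea reads a bit safer with an explicit NameError check, so unrelated errors are not silenced (the name _open mirrors the one in lockfile.py):

    # Python 2/3 shim for the removed file() built-in.
    # On Python 2 the name 'file' still exists; on Python 3 the lookup raises NameError.
    try:
        _open = file          # Python 2
    except NameError:
        _open = open          # Python 3: open() replaces file()

    # Either way, _open is a callable that opens files as usual:
    with _open("example.txt", "w") as f:
        f.write("hello\n")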

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\Python33\lib\runpy.py", line 160, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "D:\Python33\lib\runpy.py", line 73, in _run_code
    exec(code, run_globals)
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\cmdline.py"
, line 9, in <module>
    from scrapy.crawler import CrawlerProcess
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\crawler.py"
, line 3, in <module>
    from twisted.internet import reactor, defer
  File "<frozen importlib._bootstrap>", line 1567, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1534, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 586, in _check_name_wrapper
  File "<frozen importlib._bootstrap>", line 1024, in load_module
  File "<frozen importlib._bootstrap>", line 1005, in load_module
  File "<frozen importlib._bootstrap>", line 565, in module_for_loader_wrapper
KeyError: 'twisted.internet.reactor'
 
(The KeyError traceback above is just the tail end of the original failure, chained from the NameError.) After making the change above, the next problem appeared:

C:\Users\Administrator>scrapy startproject sss
Traceback (most recent call last):
  File "D:\Python33\lib\runpy.py", line 160, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "D:\Python33\lib\runpy.py", line 73, in _run_code
    exec(code, run_globals)
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\cmdline.py"
, line 9, in <module>
    from scrapy.crawler import CrawlerProcess
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\crawler.py"
, line 5, in <module>
    from scrapy.core.engine import ExecutionEngine
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\core\engine
.py", line 13, in <module>
    from scrapy import log, signals
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\log.py", li
ne 13, in <module>
    from scrapy.utils.python import unicode_to_str
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\utils\pytho
n.py", line 14, in <module>
    from sgmllib import SGMLParser
ImportError: No module named 'sgmllib'
sgmllib was deprecated in Python 2.6 and removed in Python 3.0, so the module simply does not exist under Python 3.x. If you need to keep running existing code that depends on sgmllib, install a suitable Python 2.x release such as 2.7. If you are migrating to Python 3, the code has to be ported, for example to html.parser.HTMLParser.

Fix: comment out the sgmllib import and alias html.parser.HTMLParser instead:

    # from sgmllib import SGMLParser
    from html.parser import HTMLParser as SGMLParser

or, if the file should also keep working on Python 2:

    try:
        from sgmllib import SGMLParser
    except ImportError:
        from html.parser import HTMLParser as SGMLParser
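
Note that html.parser.HTMLParser is not a drop-in API match for SGMLParser (the handler methods have different names), so the alias only gets past the import error; code that actually parses markup still needs adapting. A minimal, self-contained Python 3 example of pulling links out with HTMLParser (the class name here is illustrative, not Scrapy's):

    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        """Collect href attributes from <a> tags."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href":
                        self.links.append(value)

    parser = LinkExtractor()
    parser.feed('<a href="http://example.com/">example</a>')
    print(parser.links)   # ['http://example.com/']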


Yet another version problem.

C:\Users\Administrator>scrapy startproject sss
Traceback (most recent call last):
  File "D:\Python33\lib\runpy.py", line 160, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "D:\Python33\lib\runpy.py", line 73, in _run_code
    exec(code, run_globals)
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\cmdline.py"
, line 9, in <module>
    from scrapy.crawler import CrawlerProcess
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\crawler.py"
, line 5, in <module>
    from scrapy.core.engine import ExecutionEngine
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\core\engine
.py", line 14, in <module>
    from scrapy.core.downloader import Downloader
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\core\downlo
ader\__init__.py", line 9, in <module>
    from scrapy.utils.httpobj import urlparse_cached
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\utils\httpo
bj.py", line 5, in <module>
    from urlparse import urlparse
ImportError: No module named 'urlparse'
 
Fix: in Python 3, urlparse lives in urllib.parse, so change from urlparse import urlparse in scrapy/utils/httpobj.py to from urllib.parse import urlparse.
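
If the module should stay importable under both Python 2 and Python 3, a guarded import works as well (a sketch, not what Scrapy 0.22 actually ships):

    try:
        from urlparse import urlparse        # Python 2
    except ImportError:
        from urllib.parse import urlparse    # Python 3

    print(urlparse("http://example.com/path?q=1").netloc)   # example.com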


C:\Users\Administrator>scrapy startproject sss
Traceback (most recent call last):
  File "D:\Python33\lib\runpy.py", line 160, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "D:\Python33\lib\runpy.py", line 73, in _run_code
    exec(code, run_globals)
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\cmdline.py"
, line 9, in <module>
    from scrapy.crawler import CrawlerProcess
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\crawler.py"
, line 5, in <module>
    from scrapy.core.engine import ExecutionEngine
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\core\engine
.py", line 14, in <module>
    from scrapy.core.downloader import Downloader
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\core\downlo
ader\__init__.py", line 13, in <module>
    from .middleware import DownloaderMiddlewareManager
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\core\downlo
ader\middleware.py", line 7, in <module>
    from scrapy.http import Request, Response
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\http\__init
__.py", line 10, in <module>
    from scrapy.http.request import Request
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\http\reques
t\__init__.py", line 15, in <module>
    from scrapy.utils.url import escape_ajax
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\utils\url.p
y", line 9, in <module>
    import urlparse
ImportError: No module named 'urlparse'
Fix: comment out the old import and alias the new module:

    # import urlparse
    from urllib import parse as urlparse
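
Aliasing the whole module (rather than importing individual names) keeps existing call sites such as urlparse.urljoin(...) working unchanged, for example:

    from urllib import parse as urlparse

    print(urlparse.urljoin("http://example.com/a/", "b"))    # http://example.com/a/b
    print(urlparse.urlparse("http://example.com/a/b").path)  # /a/b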


C:\Users\Administrator>scrapy startproject sss
Traceback (most recent call last):
  File "D:\Python33\lib\runpy.py", line 160, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "D:\Python33\lib\runpy.py", line 73, in _run_code
    exec(code, run_globals)
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\cmdline.py"
, line 9, in <module>
    from scrapy.crawler import CrawlerProcess
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\crawler.py"
, line 5, in <module>
    from scrapy.core.engine import ExecutionEngine
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\core\engine
.py", line 14, in <module>
    from scrapy.core.downloader import Downloader
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\core\downlo
ader\__init__.py", line 13, in <module>
    from .middleware import DownloaderMiddlewareManager
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\core\downlo
ader\middleware.py", line 7, in <module>
    from scrapy.http import Request, Response
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\http\__init
__.py", line 11, in <module>
    from scrapy.http.request.form import FormRequest
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\http\reques
t\form.py", line 8, in <module>
    import urllib, urlparse
ImportError: No module named 'urlparse'

Same fix as above.

Ugh, how many more of these are there?
C:\Users\Administrator>scrapy startproject sss
Traceback (most recent call last):
  File "D:\Python33\lib\runpy.py", line 160, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "D:\Python33\lib\runpy.py", line 73, in _run_code
    exec(code, run_globals)
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\cmdline.py"
, line 9, in <module>
    from scrapy.crawler import CrawlerProcess
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\crawler.py"
, line 5, in <module>
    from scrapy.core.engine import ExecutionEngine
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\core\engine
.py", line 14, in <module>
    from scrapy.core.downloader import Downloader
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\core\downlo
ader\__init__.py", line 13, in <module>
    from .middleware import DownloaderMiddlewareManager
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\core\downlo
ader\middleware.py", line 7, in <module>
    from scrapy.http import Request, Response
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\http\__init
__.py", line 12, in <module>
    from scrapy.http.request.rpc import XmlRpcRequest
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\http\reques
t\rpc.py", line 7, in <module>
    import xmlrpclib
ImportError: No module named 'xmlrpclib'

The xmlrpclib module has been renamed to xmlrpc.client in Python 3.0. The 2to3 tool will automatically adapt imports when converting your sources to 3.0.

Fix:

    # import xmlrpclib
    from xmlrpc import client as xmlrpclib
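
The rename can also be handled with a 2/3-compatible import; the module-level API is otherwise the same. An illustrative sketch (the echo method name is made up):

    try:
        import xmlrpclib                          # Python 2
    except ImportError:
        from xmlrpc import client as xmlrpclib    # Python 3

    # Build an XML-RPC request body, roughly what XmlRpcRequest does with its params:
    body = xmlrpclib.dumps(("hello",), methodname="echo")
    print(body.splitlines()[0])   # <?xml version='1.0'?>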


C:\Users\Administrator>scrapy startproject sss
Traceback (most recent call last):
  File "D:\Python33\lib\runpy.py", line 160, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "D:\Python33\lib\runpy.py", line 73, in _run_code
    exec(code, run_globals)
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\cmdline.py"
, line 9, in <module>
    from scrapy.crawler import CrawlerProcess
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\crawler.py"
, line 5, in <module>
    from scrapy.core.engine import ExecutionEngine
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\core\engine
.py", line 14, in <module>
    from scrapy.core.downloader import Downloader
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\core\downlo
ader\__init__.py", line 13, in <module>
    from .middleware import DownloaderMiddlewareManager
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\core\downlo
ader\middleware.py", line 10, in <module>
    from scrapy.utils.conf import build_component_list
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\utils\conf.
py", line 3, in <module>
    from ConfigParser import SafeConfigParser
ImportError: No module named 'ConfigParser'


Fix:

    # from ConfigParser import SafeConfigParser
    from configparser import ConfigParser as SafeConfigParser

(On Python 3.3, from configparser import SafeConfigParser would also still work, since SafeConfigParser is kept as a deprecated alias, but ConfigParser is the preferred name.) Damn it, there are too many of these; easier to just convert the whole thing with 2to3.
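
For reference, 2to3 ships with CPython (on a Windows install it typically lives in Tools\Scripts) and can rewrite a package in place with the -w flag. An invocation along these lines (the path is taken from the tracebacks above) converts most of these imports automatically:

C:\Users\Administrator>D:\Python33\python.exe D:\Python33\Tools\Scripts\2to3.py -w D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy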


C:\Users\Administrator>scrapy startproject sss
Traceback (most recent call last):
  File "D:\Python33\lib\runpy.py", line 160, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "D:\Python33\lib\runpy.py", line 73, in _run_code
    exec(code, run_globals)
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\cmdline.py"
, line 14, in <module>
    from scrapy.utils.project import inside_project, get_project_settings
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\utils\proje
ct.py", line 2, in <module>
    import cPickle as pickle
ImportError: No module named 'cPickle'
# cPickle/pickle: serialization of Python objects

Fix:

    # import cPickle as pickle
    import pickle
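
In Python 3 the plain pickle module transparently uses the C accelerator (_pickle) when it is available, so dropping cPickle costs nothing. A quick round-trip check:

    import pickle

    data = {"spider": "sss", "pages": 3}
    blob = pickle.dumps(data)
    print(pickle.loads(blob) == data)   # True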


C:\Users\Administrator>scrapy startproject sss
Traceback (most recent call last):
  File "D:\Python33\lib\runpy.py", line 160, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "D:\Python33\lib\runpy.py", line 73, in _run_code
    exec(code, run_globals)
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\cmdline.py"
, line 168, in <module>
    execute()
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\cmdline.py"
, line 122, in execute
    cmds = _get_commands_dict(settings, inproject)
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\cmdline.py"
, line 46, in _get_commands_dict
    cmds = _get_commands_from_module('scrapy.commands', inproject)
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\cmdline.py"
, line 29, in _get_commands_from_module
    for cmd in _iter_command_classes(module):
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\cmdline.py"
, line 20, in _iter_command_classes
    for module in walk_modules(module_name):
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\utils\misc.
py", line 68, in walk_modules
    submod = import_module(fullpath)
  File "D:\Python33\lib\importlib\__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1586, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1567, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1534, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 586, in _check_name_wrapper
  File "<frozen importlib._bootstrap>", line 1024, in load_module
  File "<frozen importlib._bootstrap>", line 1005, in load_module
  File "<frozen importlib._bootstrap>", line 562, in module_for_loader_wrapper
  File "<frozen importlib._bootstrap>", line 870, in _load_module
  File "<frozen importlib._bootstrap>", line 313, in _call_with_frames_removed
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\commands\be
nch.py", line 2, in <module>
    from scrapy.tests.spiders import FollowAllSpider
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\tests\spide
rs.py", line 6, in <module>
    from urllib import urlencode
ImportError: cannot import name urlencode

Fix:

    # from urllib import urlencode
    from urllib.parse import urlencode
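
urlencode itself behaves the same after the move, for example:

    from urllib.parse import urlencode

    print(urlencode({"q": "scrapy", "page": 2}))   # e.g. q=scrapy&page=2 (key order may vary)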


C:\Users\Administrator>scrapy startproject sss
Traceback (most recent call last):
  File "D:\Python33\lib\runpy.py", line 160, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "D:\Python33\lib\runpy.py", line 73, in _run_code
    exec(code, run_globals)
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\cmdline.py"
, line 168, in <module>
    execute()
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\cmdline.py"
, line 122, in execute
    cmds = _get_commands_dict(settings, inproject)
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\cmdline.py"
, line 46, in _get_commands_dict
    cmds = _get_commands_from_module('scrapy.commands', inproject)
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\cmdline.py"
, line 29, in _get_commands_from_module
    for cmd in _iter_command_classes(module):
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\cmdline.py"
, line 20, in _iter_command_classes
    for module in walk_modules(module_name):
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\utils\misc.
py", line 68, in walk_modules
    submod = import_module(fullpath)
  File "D:\Python33\lib\importlib\__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1586, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1567, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1534, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 586, in _check_name_wrapper
  File "<frozen importlib._bootstrap>", line 1024, in load_module
  File "<frozen importlib._bootstrap>", line 1005, in load_module
  File "<frozen importlib._bootstrap>", line 562, in module_for_loader_wrapper
  File "<frozen importlib._bootstrap>", line 870, in _load_module
  File "<frozen importlib._bootstrap>", line 313, in _call_with_frames_removed
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\commands\be
nch.py", line 2, in <module>
    from scrapy.tests.spiders import FollowAllSpider
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\tests\spide
rs.py", line 12, in <module>
    from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\contrib\lin
kextractors\sgml.py", line 5, in <module>
    from urlparse import urlparse, urljoin
ImportError: No module named 'urlparse'

Fix:

    # from urlparse import urlparse, urljoin
    from urllib.parse import urlparse, urljoin


C:\Users\Administrator>scrapy startproject sss
Traceback (most recent call last):
  File "D:\Python33\lib\runpy.py", line 160, in _run_module_as_main
    "__main__", fname, loader, pkg_name)
  File "D:\Python33\lib\runpy.py", line 73, in _run_code
    exec(code, run_globals)
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\cmdline.py"
, line 168, in <module>
    execute()
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\cmdline.py"
, line 122, in execute
    cmds = _get_commands_dict(settings, inproject)
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\cmdline.py"
, line 46, in _get_commands_dict
    cmds = _get_commands_from_module('scrapy.commands', inproject)
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\cmdline.py"
, line 29, in _get_commands_from_module
    for cmd in _iter_command_classes(module):
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\cmdline.py"
, line 20, in _iter_command_classes
    for module in walk_modules(module_name):
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\utils\misc.
py", line 68, in walk_modules
    submod = import_module(fullpath)
  File "D:\Python33\lib\importlib\__init__.py", line 90, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1586, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1567, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1534, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 586, in _check_name_wrapper
  File "<frozen importlib._bootstrap>", line 1024, in load_module
  File "<frozen importlib._bootstrap>", line 1005, in load_module
  File "<frozen importlib._bootstrap>", line 562, in module_for_loader_wrapper
  File "<frozen importlib._bootstrap>", line 870, in _load_module
  File "<frozen importlib._bootstrap>", line 313, in _call_with_frames_removed
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\commands\be
nch.py", line 3, in <module>
    from scrapy.tests.mockserver import MockServer
  File "D:\Python33\lib\site-packages\scrapy-0.22.2-py3.3.egg\scrapy\tests\mocks
erver.py", line 6, in <module>
    from twisted.internet import reactor, defer, ssl
  File "D:\Python33\lib\site-packages\twisted\internet\ssl.py", line 25, in <mod
ule>
    from OpenSSL import SSL
  File "D:\Python33\lib\site-packages\pyopenssl-0.14-py3.3.egg\OpenSSL\__init__.
py", line 8, in <module>
    from OpenSSL import rand, crypto, SSL
  File "D:\Python33\lib\site-packages\pyopenssl-0.14-py3.3.egg\OpenSSL\rand.py",
 line 11, in <module>
    from OpenSSL._util import (
  File "D:\Python33\lib\site-packages\pyopenssl-0.14-py3.3.egg\OpenSSL\_util.py"
, line 3, in <module>
    from cryptography.hazmat.bindings.openssl.binding import Binding
ImportError: No module named 'cryptography'

Fix: still unresolved. Most likely the cryptography package (a dependency of pyOpenSSL 0.14) is simply not installed; installing it should clear this particular ImportError, though whether Scrapy 0.22 then actually runs on Python 3.3 is another question.
