- python 3.6
- scrapy 1.1.1
- twisted 17.1.0
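Before digging further, it can help to confirm which versions are actually active inside the environment. A minimal check (assuming the conda env named "scrapy" from the log below is activated):

# print the Scrapy and Twisted versions seen by the active interpreter
import scrapy
import twisted

print("scrapy :", scrapy.__version__)        # e.g. 1.1.1
print("twisted:", twisted.version.short())   # e.g. 17.1.0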
Running scrapy shell raises a TypeError:
(scrapy) D:\>scrapy shell "www.python.com"
2017-03-08 12:12:05 [scrapy] INFO: Scrapy 1.1.1 started (bot: scrapybot)
2017-03-08 12:12:05 [scrapy] INFO: Overridden settings: {'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter', 'LOGSTATS_INTERVAL': 0}
2017-03-08 12:12:05 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole']
2017-03-08 12:12:05 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-03-08 12:12:05 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-03-08 12:12:05 [scrapy] INFO: Enabled item pipelines:
[]
2017-03-08 12:12:05 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-03-08 12:12:05 [scrapy] INFO: Spider opened
Traceback (most recent call last):
  File "d:\Anaconda3\envs\scrapy\Scripts\scrapy-script.py", line 5, in <module>
    sys.exit(scrapy.cmdline.execute())
  File "d:\Anaconda3\envs\scrapy\lib\site-packages\scrapy\cmdline.py", line 142, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "d:\Anaconda3\envs\scrapy\lib\site-packages\scrapy\cmdline.py", line 88, in _run_print_help
    func(*a, **kw)
  File "d:\Anaconda3\envs\scrapy\lib\site-packages\scrapy\cmdline.py", line 149, in _run_command
    cmd.run(args, opts)
  File "d:\Anaconda3\envs\scrapy\lib\site-packages\scrapy\commands\shell.py", line 71, in run
    shell.start(url=url)
  File "d:\Anaconda3\envs\scrapy\lib\site-packages\scrapy\shell.py", line 47, in start
    self.fetch(url, spider)
  File "d:\Anaconda3\envs\scrapy\lib\site-packages\scrapy\shell.py", line 112, in fetch
    reactor, self._schedule, request, spider)
  File "d:\Anaconda3\envs\scrapy\lib\site-packages\twisted\internet\threads.py", line 122, in blockingCallFromThread
    result.raiseException()
  File "d:\Anaconda3\envs\scrapy\lib\site-packages\twisted\python\failure.py", line 372, in raiseException
    raise self.value.with_traceback(self.tb)
TypeError: 'float' object is not iterable
Solution: the virtual environment on my desktop also runs Python 3.6 with Scrapy 1.1.1, and scrapy shell works fine there. The traceback shows the error is raised inside Twisted, so I compared versions: the desktop has Twisted 16.6.0, while the laptop had recently been upgraded and automatically got 17.1.0. So I ran:
conda install twisted==16.6.0
Conda then prompts to DOWNGRADE the package; confirm with yes. After that, running scrapy shell "www.python.org" again successfully drops into IPython.
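To keep an accidental re-upgrade from silently reintroducing the problem, a small guard can be dropped into the project, e.g. at the top of settings.py. This is only a sketch, assuming Twisted 17.x is what breaks this particular Scrapy 1.1.1 setup:

# hedged sketch: abort early if the installed Twisted is a version that
# breaks scrapy shell in this environment (assumption: anything >= 17.0)
from twisted import version as twisted_version

if (twisted_version.major, twisted_version.minor) >= (17, 0):
    raise RuntimeError(
        "Twisted %s detected; this Scrapy 1.1.1 environment needs 16.6.0 "
        "(conda install twisted==16.6.0)" % twisted_version.short()
    )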