scrapy-2: Sniffing a Site and Parsing HTML
Before using the scrapy shell, it is best to install IPython first, since it gives us Tab completion inside the Python shell:

pip install ipython

Now let's start crawling a site. First change into our project directory:

root@uliweb:~/spider/boge# pwd
/root/spider/boge
root@uliweb:~/spider/boge# scrapy shell http://blu-raydisc.tv/
2014-06-04 08:22:37+0800 INFO: Scrapy 0.22.2 started (bot: boge)
2014-06-04 08:22:37+0800 INFO: Optional features available: ssl, http11
2014-06-04 08:22:37+0800 INFO: Overridden settings: {'NEWSPIDER_MODULE': 'boge.spiders', 'SPIDER_MODULES': ['boge.spiders'], 'LOGSTATS_INTERVAL': 0, 'BOT_NAME': 'boge'}
2014-06-04 08:22:37+0800 INFO: Enabled extensions: TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-06-04 08:22:37+0800 INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-06-04 08:22:37+0800 INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-06-04 08:22:37+0800 INFO: Enabled item pipelines: ImagesPipeline
2014-06-04 08:22:37+0800 DEBUG: Telnet console listening on 0.0.0.0:6023
2014-06-04 08:22:37+0800 DEBUG: Web service listening on 0.0.0.0:6080
2014-06-04 08:22:37+0800 INFO: Spider opened
2014-06-04 08:22:40+0800 DEBUG: Crawled (200) <GET http://blu-raydisc.tv/> (referer: None)
Available Scrapy objects:
crawler <scrapy.crawler.Crawler object at 0x2faaad0>
item {}
request <GET http://blu-raydisc.tv/>
response <200 http://blu-raydisc.tv/>
sel <Selector xpath=None data=u'<html xmlns="\r\n\txml:lang=" zh-cn lang='>
settings <CrawlerSettings module=<module 'boge.settings' from '/root/spider/boge/boge/settings.pyc'>>
spider <Spider 'default' at 0x34f0f90>
Useful shortcuts:
shelp() Shell help (print this help)
fetch(req_or_url) Fetch request (or URL) and update local objects
view(response) View response in a browser
/usr/local/lib/python2.7/dist-packages/IPython/frontend.py:30: UserWarning: The top-level `frontend` package has been deprecated. All its subpackages have been moved to the top `IPython` level.
 warn("The top-level `frontend` package has been deprecated. "

The page has been fetched successfully. Take note of the objects listed above: we will be using response and sel below. The other objects are not needed yet and will be covered later. Type response. and press Tab to see its attributes:

In : print response.
response.body response.copy response.flags response.meta response.request response.url
response.body_as_unicode response.encoding response.headers response.replace response.status
In : print response.bo
response.body response.body_as_unicode
In : print response.body

The fetched home page is fairly large, so its output is not printed here. Now let's try grabbing some images and see which recent movies look good. Here is a snippet of the page's HTML:

<div id="slideshow-1-539b18af5ed5f" class="wk-slideshow">
<div class="slides-container">
<ul class="slides">
<li>
<article class="wk-content clearfix"><a href="/film/the-amazing-spider-man-2/" title="机械战警/铁甲威龙/机器战警"><img src="data:image/gif;base64,R0lGODlhAQABAJEAAAAAAP///////wAAACH5BAEHAAIALAAAAAABAAEAAAICVAEAOw==" data-src="http://i.blu-raydisc.tv/images/photos/A_4.jpg" border="0" alt="超凡蜘蛛侠2/蜘蛛人:惊奇再起2/蜘蛛侠2:决战电魔" title="超凡蜘蛛侠2/蜘蛛人:惊奇再起2/蜘蛛侠2:决战电魔" width="920" height="450" /></a></article>
</li>
<li>
<article class="wk-content clearfix"><a href="/film/300-rise-of-an-empire/" title="3D蓝光电影下载"><img src="data:image/gif;base64,R0lGODlhAQABAJEAAAAAAP///////wAAACH5BAEHAAIALAAAAAABAAEAAAICVAEAOw==" data-src="http://i.blu-raydisc.tv/images/photos/A_3.jpg" border="0" alt="《300勇士:帝国崛起/300勇士前传/300勇士:阿提米西亚之战》3D电影蓝光原盘下载" title="《300勇士:帝国崛起/300勇士前传/300勇士:阿提米西亚之战》3D电影蓝光原盘下载" /></a></article>
</li>
<li>
<article class="wk-content clearfix"><a href="/film/captain-america-the-winter-soldier/"><img src="data:image/gif;base64,R0lGODlhAQABAJEAAAAAAP///////wAAACH5BAEHAAIALAAAAAABAAEAAAICVAEAOw==" data-src="http://i.blu-raydisc.tv/images/photos/A_2.jpg" border="0" alt="《美国队长2/美国队长2:酷寒战士/美国队长2:冬日战士/美国队长:冬兵 》3D电影IMAX3D蓝光原盘下载" title="《美国队长2/美国队长2:酷寒战士/美国队长2:冬日战士/美国队长:冬兵 》3D电影IMAX3D蓝光原盘下载" width="920" height="450" /></a></article>
</li>
<li>
<article class="wk-content clearfix"><a href="/film/x-men-days-of-future-past/" title="末日之战/僵尸世界大战/地球末日战"><img src="data:image/gif;base64,R0lGODlhAQABAJEAAAAAAP///////wAAACH5BAEHAAIALAAAAAABAAEAAAICVAEAOw==" data-src="http://i.blu-raydisc.tv/images/photos/A_1.jpg" border="0" alt="X战警:逆转未来/X战警:未来昔日/变种特攻:未来同盟战" title="X战警:逆转未来/X战警:未来昔日/变种特攻:未来同盟战" width="920" height="450" /></a></article>
</li>
<li>
<article class="wk-content clearfix"><a href="/film/untitled-transformers-sequel/" title="变形金刚4:绝迹重生/变形金刚4:灭绝时代/"><img src="data:image/gif;base64,R0lGODlhAQABAJEAAAAAAP///////wAAACH5BAEHAAIALAAAAAABAAEAAAICVAEAOw==" data-src="http://i.blu-raydisc.tv/images/photos/16.jpg" border="0" alt="饥变形金刚4:绝迹重生/变形金刚4:灭绝时代" title="饥变形金刚4:绝迹重生/变形金刚4:灭绝时代" /></a></article>
</li>
<li>
<article class="wk-content clearfix"><a href="/film/edge-of-tomorrow/" title="极品飞车蓝光电影下载"><img src="http://i.blu-raydisc.tv/images/photos/A_5.jpg" border="0" alt="明日边缘/明日边界/杀戮轮回/异空战士" title="明日边缘/明日边界/杀戮轮回/异空战士" /></a></article>
</li>

First, figure out where the images live in this markup and how they are stored. We could match them with regular expressions, or with the much more powerful XPath (a regex sketch is shown right after the first XPath result below, for comparison):

In : sel.xpath('//img/@src').extract()
Out:
[u'http://blu-raydisc.tv/images/logo.png',
u'data:image/gif;base64,R0lGODlhAQABAJEAAAAAAP///////wAAACH5BAEHAAIALAAAAAABAAEAAAICVAEAOw==',
u'data:image/gif;base64,R0lGODlhAQABAJEAAAAAAP///////wAAACH5BAEHAAIALAAAAAABAAEAAAICVAEAOw==',
u'data:image/gif;base64,R0lGODlhAQABAJEAAAAAAP///////wAAACH5BAEHAAIALAAAAAABAAEAAAICVAEAOw==',
u'data:image/gif;base64,R0lGODlhAQABAJEAAAAAAP///////wAAACH5BAEHAAIALAAAAAABAAEAAAICVAEAOw==',
u'data:image/gif;base64,R0lGODlhAQABAJEAAAAAAP///////wAAACH5BAEHAAIALAAAAAABAAEAAAICVAEAOw==',
u'http://i.blu-raydisc.tv/images/photos/A_5.jpg',
u'http://blu-raydisc.tv/modules/mod_news_pro_gk4/cache/Film.2014.04.game-of-thrones-season-4.game-of-thrones-season-4_0nsp_275.jpg',
u'http://blu-raydisc.tv/modules/mod_news_pro_gk4/cache/Film.2014.02.winter-s-tale.winter-s-tale_1nsp_275.jpg',
u'http://blu-raydisc.tv/modules/mod_news_pro_gk4/cache/Film.2014.04.that-demon-within.that-demon-within_0nsp_275.jpg',
u'http://blu-raydisc.tv/modules/mod_news_pro_gk4/cache/Film.2014.04.the-fatal-encounter.the-fatal-encounter_0nsp_275.jpg',
_282.jpg',
u'http://blu-raydisc.tv/modules/mod_news_pro_gk4/cache/Film.2014.03.need-for-speed.need-for-speed_1nsp_282.jpg',
u'http://i.blu-raydisc.tv/images/photos/the-hobbit-2.jpg',
u'http://i.blu-raydisc.tv/images/photos/the-hobbit-2_1.jpg',
u'http://blu-raydisc.tv/modules/mod_news_pro_gk4/cache/Film.2014.04.game-of-thrones-season-4.game-of-thrones-season-4_0newspro1.jpg',
u'http://blu-raydisc.tv/modules/mod_news_pro_gk4/cache/Film.2014.02.winter-s-tale.winter-s-tale_1newspro1.jpg',
u'http://blu-raydisc.tv/modules/mod_news_pro_gk4/cache/Film.2014.04.that-demon-within.that-demon-within_0newspro1.jpg',
u'http://blu-raydisc.tv/modules/mod_news_pro_gk4/cache/Film.2011.07.Ice-Age-3.Ice-Age-3_01newspro1.jpg',
u'http://blu-raydisc.tv/modules/mod_news_pro_gk4/cache/Film.2014.04.the-fatal-encounter.the-fatal-encounter_0newspro1.jpg',
u'http://blu-raydisc.tv/modules/mod_news_pro_gk4/cache/Film.1987.07.a-chinese-ghost-story.a-chinese-ghost-story_1newspro1.jpg',
u'http://blu-raydisc.tv/modules/mod_news_pro_gk4/cache/Film.1990.07.sinnui-yauman-2.sinnui-yauman-2_1newspro1.jpg',
u'http://blu-raydisc.tv/modules/mod_news_pro_gk4/cache/Film.1991.09.a-chinese-ghost-story-3.a-chinese-ghost-story-3_1newspro1.jpg',
u'http://blu-raydisc.tv/modules/mod_news_pro_gk4/cache/Film.2014.06.edge-of-tomorrow.edge-of-tomorrow_0newspro1.jpg']
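For comparison, the regular-expression route mentioned above could look roughly like this in the same shell session. This is only a quick sketch; the pattern is a rough illustration, not a robust HTML parser:

In : import re

In : re.findall(r'<img[^>]*?\ssrc="([^"]+)"', response.body)

This returns much the same list as the XPath expression, but the XPath version is easier to read and adapt. Also notice in the HTML snippet above that the slideshow images are lazy-loaded: their src is only a base64 placeholder, and the real URL sits in the data-src attribute, so let's extract that as well: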
In : sel.xpath('//img/@data-src').extract()
Out:
[u'http://i.blu-raydisc.tv/images/photos/A_4.jpg',
u'http://i.blu-raydisc.tv/images/photos/A_3.jpg',
u'http://i.blu-raydisc.tv/images/photos/A_2.jpg',
u'http://i.blu-raydisc.tv/images/photos/A_1.jpg',
u'http://i.blu-raydisc.tv/images/photos/16.jpg']
Good, those are the image URLs we have scraped.
sel.xpath('//img/@src').extract() has worked for me time and again; the image paths in an HTML page can almost always be scraped with this expression.
sel.xpath('//a/@title').extract() grabs the movie titles from the links.
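Putting these together in the shell, here is a minimal sketch of how one might keep only the real image URLs, dropping the base64 placeholders and merging in the lazy-loaded ones (the variable names are just for illustration):

In : srcs = sel.xpath('//img/@src').extract()

In : image_urls = [u for u in srcs if not u.startswith('data:')]

In : image_urls += sel.xpath('//img/@data-src').extract()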
That covers the analysis of how to get the data we need. Next we will look at how to use Scrapy to build a simple, entry-level spider.
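To preview where that is heading, here is a rough sketch of how the selectors above might sit inside a spider class. The file name, class name, and spider name below are placeholders for illustration; the real spider is built step by step in the next part.

# boge/spiders/film_spider.py  (hypothetical file name, for illustration)
from scrapy.spider import Spider
from scrapy.selector import Selector


class FilmImageSpider(Spider):
    name = "film_images"                      # placeholder spider name
    allowed_domains = ["blu-raydisc.tv"]
    start_urls = ["http://blu-raydisc.tv/"]

    def parse(self, response):
        sel = Selector(response)
        # the same expressions we just tested in the scrapy shell
        image_urls = [u for u in sel.xpath('//img/@src').extract()
                      if not u.startswith('data:')]
        image_urls += sel.xpath('//img/@data-src').extract()
        titles = sel.xpath('//a/@title').extract()
        self.log("found %d image urls and %d titles" % (len(image_urls), len(titles)))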