Scrapy Usage and Study Notes

Preface

Scrapy is an excellent crawler framework built on the Twisted asynchronous programming framework, and it puts Python's yield to elegant use. Its scheduler and downloader can be extended programmatically, its plugin ecosystem is rich, and it integrates easily with Selenium and Playwright.
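As a taste of that extensibility, here is a minimal downloader-middleware sketch; the middleware name, module path, and header value are illustrative assumptions, not from the original article:

# middlewares.py -- illustrative downloader middleware; enable it in settings.py with
# DOWNLOADER_MIDDLEWARES = {"ispider.middlewares.DefaultHeadersMiddleware": 543}
class DefaultHeadersMiddleware:
    def process_request(self, request, spider):
        # Stamp every outgoing request before the downloader sends it
        request.headers.setdefault("User-Agent", "ispider/1.0")
        return None  # returning None lets processing continue as normal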

Of course, Scrapy by itself cannot see the ajax requests a page makes, but combined with mitmproxy it can handle almost anything: Scrapy + Playwright simulates user clicks, while mitmproxy captures the traffic in the background. Log in once, run all day.
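To make that division of labor concrete, here is a minimal mitmproxy addon sketch; the "api" URL filter and the captured.jsonl output file are illustrative assumptions, not details from the original setup. Run it with mitmdump -s grabber.py and point the browser's proxy at mitmproxy.

# grabber.py -- minimal mitmproxy addon sketch (filter and output file are assumptions)
import json

from mitmproxy import http


class AjaxGrabber:
    def response(self, flow: http.HTTPFlow) -> None:
        # Capture JSON ajax responses while the Playwright-driven browser clicks around
        content_type = flow.response.headers.get("content-type", "")
        if "api" in flow.request.pretty_url and "application/json" in content_type:
            with open("captured.jsonl", "a", encoding="utf-8") as f:
                record = {"url": flow.request.pretty_url, "body": flow.response.get_text()}
                f.write(json.dumps(record, ensure_ascii=False) + "\n")


addons = [AjaxGrabber()]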

In the end I tied these tools together with asyncio and achieved stable, unattended automated operation: article after article flows into my ElasticSearch cluster, passes through the knowledge-factory pipeline, and comes out as knowledge products.

"Crawlers + data, algorithms + intelligence" — that is one technologist's ideal.

Configuration and Running

Installation:

pip install scrapy

With scrapy.cfg and settings.py in the current directory, Scrapy is ready to run.
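A minimal scrapy.cfg just points at the settings module; assuming the ispider project name used later in this article:

[settings]
default = ispider.settings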

Run from the command line:

scrapy crawl ArticleSpider

There are three ways to run a spider programmatically:

from scrapy.cmdline import execute

execute('scrapy crawl ArticleSpider'.split())

Using CrawlerRunner:

# Using CrawlerRunner: install the asyncio reactor first, then drive it manually
from scrapy.crawler import CrawlerRunner
from scrapy.utils.reactor import install_reactor

install_reactor("twisted.internet.asyncioreactor.AsyncioSelectorReactor")
from twisted.internet import reactor  # import only after installing the reactor

runner = CrawlerRunner(settings)
d = runner.crawl(ArticleSpider)
d.addBoth(lambda _: reactor.stop())  # stop the reactor when the crawl ends
reactor.run()
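Note that CrawlerRunner deliberately leaves reactor management to the caller, which is what makes it composable with other Twisted or asyncio code; the addBoth callback above is what stops the reactor once the crawl completes, otherwise reactor.run() would block forever.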

Using CrawlerProcess:

# Using CrawlerProcess
from scrapy.crawler import CrawlerProcess

process = CrawlerProcess(settings)
process.crawl(ArticleSpider)
process.start()
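Unlike CrawlerRunner, CrawlerProcess installs and runs the Twisted reactor itself (honoring the TWISTED_REACTOR setting), so it is the simplest option when Scrapy is the only event loop in the process.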

Integration with Playwright

Installation:

pip install scrapy-playwright
playwright install                    # install all supported browsers
playwright install firefox chromium   # or install only specific browsers

settings.py configuration:

BOT_NAME = 'ispider'
SPIDER_MODULES = ['ispider.spider']
TWISTED_REACTOR = 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'
DOWNLOAD_HANDLERS = {
    "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
    "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
}
CONCURRENT_REQUESTS = 32
PLAYWRIGHT_MAX_PAGES_PER_CONTEXT = 4
CLOSESPIDER_ITEMCOUNT = 100
PLAYWRIGHT_CDP_URL = "http://localhost:9900"
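PLAYWRIGHT_CDP_URL makes scrapy-playwright connect to an already-running browser over the Chrome DevTools Protocol instead of launching one itself; here that would be a browser exposing CDP on port 9900 (for example, Chrome started with --remote-debugging-port=9900). Leave the setting out to let the plugin launch its own browser.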

Spider definition:

import asyncio
import logging
from typing import Generator, Optional

from scrapy import Request, Spider
from scrapy.http import Response

logger = logging.getLogger(__name__)


class ArticleSpider(Spider):
    name = "ArticleSpider"
    custom_settings = {
        # These could also live here instead of in settings.py:
        # "TWISTED_REACTOR": "twisted.internet.asyncioreactor.AsyncioSelectorReactor",
        # "DOWNLOAD_HANDLERS": {
        #     "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
        #     "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
        # },
        # "CONCURRENT_REQUESTS": 32,
        # "PLAYWRIGHT_MAX_PAGES_PER_CONTEXT": 4,
        # "CLOSESPIDER_ITEMCOUNT": 100,
    }
    start_urls = ["https://blog.csdn.net/nav/lang/javascript"]

    def __init__(self, name=None, **kwargs):
        super().__init__(name, **kwargs)
        logger.debug('ArticleSpider initialized.')

    def start_requests(self):
        for url in self.start_urls:
            yield Request(
                url,
                meta={
                    "playwright": True,
                    "playwright_context": "first",
                    "playwright_include_page": True,
                    "playwright_page_goto_kwargs": {
                        "wait_until": "domcontentloaded",
                    },
                },
            )

    async def parse(self, response: Response, current_page: Optional[int] = None) -> Generator:
        content = response.text       # raw HTML as Scrapy received it
        page = response.meta["playwright_page"]
        context = page.context        # the Playwright browser context
        title = await page.title()
        while True:
            # Scroll down to keep triggering the page's infinite-scroll loading
            await page.mouse.wheel(delta_x=0, delta_y=200)
            await asyncio.sleep(3)    # non-blocking; time.sleep would stall the event loop
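As written, the parse loop scrolls forever and never yields anything downstream. A minimal sketch of a more complete version, assuming a hypothetical a.article-item link selector and a fixed scroll budget (neither is from the original code):

    async def parse(self, response: Response, current_page: Optional[int] = None) -> Generator:
        page = response.meta["playwright_page"]
        seen = set()
        for _ in range(20):  # bounded scrolling instead of while True
            await page.mouse.wheel(delta_x=0, delta_y=200)
            await asyncio.sleep(3)
            # Re-read the rendered DOM after each scroll and yield newly loaded links
            for link in await page.locator("a.article-item").all():  # hypothetical selector
                href = await link.get_attribute("href")
                if href and href not in seen:
                    seen.add(href)
                    yield {"url": href}
        await page.close()  # release the Playwright page when done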

References

  • The official scrapy-playwright plugin
  • GerapyPlaywright, a plugin written by Cui Qingcai (Jingmi)