How to use Scrapy's LinkExtractor correctly

dtcbnfnu · asked 2023-08-05

I have been trying to scrape items from a site called Willys (https://www.willys.se/). I want to reach every link on the site that contains "sortiment/" in its URL, but I can't seem to get allow=r"sortiment/" to match anything (e.g. https://www.willys.se/sortiment/): the spider simply stops after fetching robots.txt and the main page.

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from bs4 import BeautifulSoup

class Spider(CrawlSpider):
    name = 'spider'
    start_urls = ["https://www.willys.se/"]
    allowed_domains = ["www.willys.se"]

    rules = (
        # Follow every link whose URL matches "sortiment/"
        Rule(LinkExtractor(allow=r"sortiment/"), callback="parse", follow=True),
    )

    def parse(self, response):
        print("Parsing URL:", response.url)

        # Dump the prettified HTML so the response can be inspected offline
        with open('response_content.txt', 'w', encoding="utf-8") as f:
            soup = BeautifulSoup(response.text, "html.parser")
            f.write(soup.prettify())
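
(Side note, separate from the root cause discussed in the answer below: the Scrapy docs warn against using parse as a Rule callback, because CrawlSpider implements its own crawling logic in parse(), and overriding it can silently disable rule processing for the start URLs. A renamed callback would look like this; parse_item is an arbitrary name:)

rules = (
    # Any name other than "parse" works here; CrawlSpider reserves
    # parse() for its own rule-processing logic.
    Rule(LinkExtractor(allow=r"sortiment/"), callback="parse_item", follow=True),
)

def parse_item(self, response):
    print("Parsing URL:", response.url)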

Running the spider:

scrapy crawl spider


This is the output:

2023-08-02 16:21:28 [scrapy.utils.log] INFO: Scrapy 2.9.0 started (bot: my_project_name)
2023-08-02 16:21:28 [scrapy.utils.log] INFO: Versions: lxml 4.9.3.0, libxml2 2.10.3, cssselect 1.2.0, parsel 1.8.1, w3lib 2.1.1, Twisted 22.10.0, Python 3.11.2 (tags/v3.11.2:878ead1, Feb  7 2023, 16:38:35) [MSC v.1934 64 bit (AMD64)], pyOpenSSL 23.2.0 (OpenSSL 3.1.1 30 May 2023), cryptography 41.0.2, Platform Windows-10-10.0.22621-SP0
2023-08-02 16:21:28 [scrapy.crawler] INFO: Overridden settings:
{'AUTOTHROTTLE_ENABLED': True,
 'AUTOTHROTTLE_TARGET_CONCURRENCY': 0.5,
 'BOT_NAME': 'my_project_name',
 'COOKIES_ENABLED': False,
 'DOWNLOAD_DELAY': 3,
 'FEED_EXPORT_ENCODING': 'utf-8',
 'NEWSPIDER_MODULE': 'my_project_name.spiders',
 'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
 'ROBOTSTXT_OBEY': True,
 'SPIDER_MODULES': ['my_project_name.spiders'],
 'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'}
2023-08-02 16:21:28 [asyncio] DEBUG: Using selector: SelectSelector
2023-08-02 16:21:28 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2023-08-02 16:21:28 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2023-08-02 16:21:28 [scrapy.extensions.telnet] INFO: Telnet Password: 5130cc64f37e65de
2023-08-02 16:21:28 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.throttle.AutoThrottle']
2023-08-02 16:21:28 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2023-08-02 16:21:28 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2023-08-02 16:21:28 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2023-08-02 16:21:28 [scrapy.core.engine] INFO: Spider opened
2023-08-02 16:21:29 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2023-08-02 16:21:29 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2023-08-02 16:21:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.willys.se/robots.txt> (referer: None)
2023-08-02 16:21:34 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.willys.se/> (referer: None)
2023-08-02 16:21:34 [scrapy.core.engine] INFO: Closing spider (finished)
2023-08-02 16:21:34 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 436,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 106051,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'elapsed_time_seconds': 5.879534,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2023, 8, 2, 14, 21, 34, 884435),
 'httpcompression/response_bytes': 405009,
 'httpcompression/response_count': 2,
 'log_count/DEBUG': 5,
 'log_count/INFO': 10,
 'response_received_count': 2,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/200': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2023, 8, 2, 14, 21, 29, 4901)}
2023-08-02 16:21:34 [scrapy.core.engine] INFO: Spider closed (finished)


I have tried the same logic on other sites and gotten it to work. Could it be that the URLs don't match the pattern? If I leave allow out (allow=None), it returns every link on the page. Any suggestions would be greatly appreciated!
I also tried swapping in other sites to see whether I would hit the same problem, but that does not seem to be the case.
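
A quick way to confirm whether the "sortiment/" links exist in the static HTML at all (rather than being injected by JavaScript) is to run the same extractor by hand in scrapy shell; this uses only the stock Scrapy API:

scrapy shell "https://www.willys.se/"
>>> from scrapy.linkextractors import LinkExtractor
>>> LinkExtractor(allow=r"sortiment/").extract_links(response)

If that returns an empty list, there is nothing for the Rule to follow no matter what allow pattern is used, which, per the answer below, is exactly what happens here.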

xytpbqjk #1

Those links are not in the first response; the site fetches them with a request to https://www.willys.se/leftMenu/categorytree. Once you parse that response into a dict, you can build the links with something like:

from urllib.parse import urljoin

# Build absolute category URLs from the parsed JSON response
[urljoin("https://www.willys.se/", "/sortiment/" + child["url"]) for child in parsed_response["children"]]
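
To make this concrete, here is a minimal sketch of a plain scrapy.Spider built around that endpoint. The spider name and the exact JSON shape (a top-level "children" list whose entries carry a "url" field) are assumptions based on the snippet above, not verified against the live site:

import json
from urllib.parse import urljoin

import scrapy


class WillysSortimentSpider(scrapy.Spider):
    name = "willys_sortiment"  # hypothetical name
    allowed_domains = ["www.willys.se"]
    # Start from the JSON endpoint that actually lists the categories,
    # instead of the JavaScript-rendered start page.
    start_urls = ["https://www.willys.se/leftMenu/categorytree"]

    def parse(self, response):
        # Assumed shape: {"children": [{"url": "...", ...}, ...]}
        tree = json.loads(response.text)
        for child in tree.get("children", []):
            url = urljoin("https://www.willys.se/", "/sortiment/" + child["url"])
            yield scrapy.Request(url, callback=self.parse_category)

    def parse_category(self, response):
        # Placeholder: extract the actual items from each category page here.
        self.logger.info("Parsing URL: %s", response.url)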

