Scrapy - Playwright with multiple start_urls

p4tfgftt · posted 2024-01-09 in Other

A similar question is discussed here, but I cannot get my code to work. The goal is for scrapy-playwright to generate a request/response for every URL in start_urls and to parse each response in the same way. The CSV of URLs is read into a list correctly, but start_requests never generates the requests. See the commented code below.

import scrapy
import asyncio
from scrapy_playwright.page import PageMethod

class MySpider(scrapy.Spider):
    name = "Forum01"
    allowed_domains = ["example.com"]

    def start_requests(self):
        with open('FullLink.csv') as file:
            start_urls = [line.strip() for line in file]
        print(start_urls) # the URL list is printed correctly when the spider runs
        
        for u in self.start_urls:    
            yield scrapy.Request(
                u,
                meta=dict(
                    playwright=True,
                    playwright_include_page=False,
                    playwright_page_methods=[
                        PageMethod("wait_for_selector", "div.modal-body > p")
                    ], # End of methods
                ), # End of meta
                callback=self.parse
            )

    async def parse(self, response): # does not work with either sync or async
        for item in response.css('div.modal-content'):
            yield {
                'title': item.css('h1::text').get(),
                'info': item.css('.row+ p::text').get(),
            }

Any idea how to feed the URLs to the spider correctly? Thanks!

7cjasjjr #1

In your for loop you are iterating over an empty sequence, not over the sequence extracted from the csv file.
Unless explicitly overridden, self.start_urls always refers to the empty list created in the scrapy.Spider constructor. Dropping the self from self.start_urls should fix your problem (a minimal demonstration of that default follows the corrected spider below).

import scrapy
import asyncio
from scrapy_playwright.page import PageMethod

class MySpider(scrapy.Spider):
    name = "Forum01"
    allowed_domains = ["example.com"]

    def start_requests(self):
        with open('FullLink.csv') as file:
            start_urls = [line.strip() for line in file] 
        print(start_urls) # the URL list is printed correctly when the spider runs
        
        for u in start_urls: # <- changed from self.start_urls to just start_urls
            yield scrapy.Request(
                u,
                meta=dict(
                    playwright=True,
                    playwright_include_page=False,
                    playwright_page_methods=[
                        PageMethod("wait_for_selector", "div.modal-body > p")
                    ], # End of methods
                ), # End of meta
                callback=self.parse
            )

    async def parse(self, response): # with the loop fixed, an async parse works fine
        for item in response.css('div.modal-content'):
            yield {
                'title': item.css('h1::text').get(),
                'info': item.css('.row+ p::text').get(),
            }

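For reference, here is a minimal sketch of that default (EmptySpider is a hypothetical class, used only to illustrate the point): a spider that never assigns start_urls ends up with an empty list, so the original loop had nothing to yield.

import scrapy

class EmptySpider(scrapy.Spider):
    name = "empty-demo"
    # start_urls is never assigned anywhere

# scrapy.Spider.__init__ falls back to an empty list when the
# attribute is missing, so iterating over it yields no requests
print(EmptySpider().start_urls)  # -> []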

55ooxyrt #2

Problem

The error comes from for u in self.start_urls, because you are looping over an empty list.
Inside the def start_requests(self) function you assign start_urls = [line.strip() for line in file], while the for u loop iterates over self.start_urls. As you can see, one has self and the other does not, which is why the loop runs over an empty list.
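Here is a plain-Python sketch of the mix-up (the Demo class is hypothetical, used only to isolate the scoping): assigning to a local name inside a method never touches the instance attribute of the same name.

class Demo:
    start_urls = []  # class-level default, just like scrapy.Spider

    def load(self):
        start_urls = ["https://example.com"]  # local variable only, deliberately unused
        return self.start_urls                # still the empty class-level list

print(Demo().load())  # -> []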

Solutions

There are two kinds of fix; with either one, scrapy-playwright generates a request/response for every URL in start_urls and parses each response in the same way.

Solution #1

The first solution (though I am not sure it works) is to add self to start_urls:

def start_requests(self):
    with open('FullLink.csv') as file:
        self.start_urls = [line.strip() for line in file] #EDIT HERE, WITH SELF
    print(self.start_urls) # EDIT HERE TOO, WITH SELF (plain start_urls would raise NameError)
    
    for u in self.start_urls: #WITH SELF
        ... # the yield scrapy.Request(...) block stays exactly as before


Solution #2

The second solution, which is guaranteed to work, takes a simpler route: remove self from for u in self.start_urls (so use start_urls without self everywhere, both in start_urls = [line.strip() for line in file] and in for u in start_urls), and write:

def start_requests(self):
    with open('FullLink.csv') as file:
        start_urls = [line.strip() for line in file] #NO SELF
    print(start_urls) # the URL list is printed correctly when the spider runs
    
    for u in start_urls: #EDIT HERE, NO SELF
        ... # the yield scrapy.Request(...) block stays exactly as before


Everything else in the code is correct. You only need to fix the parts that involve self.
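One more thing worth checking, as an assumption about your project rather than something shown in the question: scrapy-playwright only processes requests when its download handlers and the asyncio reactor are enabled in settings.py, per its README; without them the playwright=True meta key is silently ignored.

# settings.py -- required for scrapy-playwright (from its README)
DOWNLOAD_HANDLERS = {
    "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
    "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
}
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"

With that in place, running e.g. scrapy crawl Forum01 -O items.json should produce one item per modal on each page.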
