How do I scrape the URLs found on a URL in Scrapy?
Problem description:
Main URL = [https://www.amazon.in/s/ref=nb_sb_ss_i_1_8?url=search-alias%3Dcomputers&field-keywords=lenovo+laptop&sprefix=lenovo+m%2Cundefined%2C2740&crid=3L1Q2LMCKALCT]. How do I scrape the URLs found on this URL in Scrapy?
import scrapy
from product.items import ProductItem
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class amazonSpider(scrapy.Spider):
    name = "amazon"
    allowed_domains = ["amazon.in"]
    start_urls = ["main URL here"]  # placeholder for the main URL above

    def parse(self, response):
        item = ProductItem()
        for content in response.xpath("sample xpath"):
            url = content.xpath("a/@href").extract()
            # url is extracted from my main url
            request = scrapy.Request(str(url[0]), callback=self.page2_parse)
            item['product_Rating'] = request
            yield item

    def page2_parse(self, response):
        # here I didn't get the response for the second url content
        for content in response.xpath("sample xpath"):
            yield content.xpath("sample xpath").extract()
The second function never runs for the URLs extracted here. Please help me.
Answer
In the end I solved it like this; follow the code below to crawl values from the URLs extracted on the main URL.
# these are methods of the spider class above; at module level, also add:
# from scrapy import Request

    def parse(self, response):
        item = ProductItem()
        url_list = [content for content in response.xpath("//div[@class='listing']/div/a/@href").extract()]
        item['product_DetailUrl'] = url_list
        for url in url_list:
            request = Request(str(url), callback=self.page2_parse)
            request.meta['item'] = item  # pass the partially filled item to the detail-page callback
            yield request

    def page2_parse(self, response):
        item = response.meta['item']  # retrieve the item handed over by parse
        item['product_ColorAvailability'] = [content for content in response.xpath("//div[@id='templateOption']//ul/li//img/@color").extract()]
        yield item
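
For reference, a minimal sketch of the same two-stage crawl using response.follow and cb_kwargs instead of request.meta. This assumes a Scrapy version of roughly 1.7 or newer, and it reuses the listing and color XPaths from the answer above, which may need adjusting to the real page; the spider name is hypothetical.

import scrapy
from product.items import ProductItem

class AmazonDetailSpider(scrapy.Spider):
    name = "amazon_detail"
    allowed_domains = ["amazon.in"]
    start_urls = ["main URL here"]  # placeholder for the main listing URL from the question

    def parse(self, response):
        item = ProductItem()
        # same listing XPath as in the answer above (assumption)
        url_list = response.xpath("//div[@class='listing']/div/a/@href").extract()
        item['product_DetailUrl'] = url_list
        for url in url_list:
            # response.follow resolves relative hrefs; cb_kwargs hands the item
            # to the next callback as a keyword argument
            yield response.follow(url, callback=self.parse_detail, cb_kwargs={"item": item})

    def parse_detail(self, response, item):
        # same color XPath as in the answer above (assumption)
        item['product_ColorAvailability'] = response.xpath(
            "//div[@id='templateOption']//ul/li//img/@color").extract()
        yield item

Note that, as in the answer, every detail request shares the same item instance, so the fields written in the detail callback overwrite each other; if each product should produce its own item, create a fresh ProductItem per URL inside the loop.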
Here page2_parse does not fetch the second URL, so I cannot crawl any further –
There is not really a "URL of a URL" to scrape; your second URL is the same as your first one. – blacksite
Hi, I only get the second URL after crawling the first one. For example, on my main URL you can see multiple products [laptops]. So after crawling the main URL, I fetch each product's detail page URL. –
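
Since the question imports CrawlSpider, LinkExtractor and Rule without using them, here is a minimal sketch of how this listing-to-detail crawl could also be expressed with a crawl rule instead of manual Request objects. The restrict_xpaths value, spider name and item fields are the hypothetical ones from above and would need to match the real page.

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from product.items import ProductItem

class AmazonCrawlSpider(CrawlSpider):
    name = "amazon_crawl"
    allowed_domains = ["amazon.in"]
    start_urls = ["main URL here"]  # placeholder for the main listing URL from the question

    # Follow every link found inside the listing container and send each
    # detail page to parse_detail. A CrawlSpider must not override parse().
    rules = (
        Rule(LinkExtractor(restrict_xpaths="//div[@class='listing']/div"),
             callback="parse_detail"),
    )

    def parse_detail(self, response):
        item = ProductItem()
        item['product_DetailUrl'] = response.url
        # same color XPath as in the answer above (assumption)
        item['product_ColorAvailability'] = response.xpath(
            "//div[@id='templateOption']//ul/li//img/@color").extract()
        yield item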