
    How to Increase Blog Page Views with a Python Crawler

    Category: Code | Posted: 2019-10-31 12:07

    1. Preface:

    No overtime on 1024 Programmers' Day and nothing to do... so I threw together a crawler that drives up blog page views, just for fun. Pushing past 10,000 visits is no trouble at all! Every step is clearly commented, and the code is for learning and reference only!

    ---- Nick.Peng

    2. Requirements:

    Python 3.x
    Required modules: requests, json, lxml, urllib, bs4, fake_useragent
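
    The third-party modules can be installed with pip; note that on PyPI, bs4 is
    published as beautifulsoup4 and fake_useragent as fake-useragent, while json
    and urllib ship with the standard library:

    pip install requests lxml beautifulsoup4 fake-useragent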

    3. The code to increase blog page views:

    #!/usr/bin/env python
    # -*- coding: utf-8 -*-
    # @Author: Nick
    # @Date:  2019-10-24 15:40:58
    # @Last Modified by:  Nick
    # @Last Modified time: 2019-10-24 16:54:31
    import random
    import re
    import time
    import urllib.request
    import requests
    
    from bs4 import BeautifulSoup
    from fake_useragent import UserAgent
    
    try:
      from lxml import etree
    except ImportError:
      import lxml.html
      # Fall back to lxml.html.etree when `from lxml import etree` fails
      etree = lxml.html.etree
    
    # Instantiate a UserAgent object to generate random User-Agent strings
    ua = UserAgent()
    
    
    class BlogSpider(object):
      """
      Increase the number of CSDN blog visits.
      """
    
      def __init__(self):
        self.url = "https://blog.csdn.net/PY0312/article/list/{}"
        self.headers = {
          "Referer": "https://blog.csdn.net/PY0312/",
          "User-Agent": ua.random
        }
        self.firefoxHead = {
          "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:61.0) Gecko/20100101 Firefox/61.0"}
        self.IPRegular = r"(([1-9]?\d|1\d{2}|2[0-4]\d|25[0-5])\.){3}([1-9]?\d|1\d{2}|2[0-4]\d|25[0-5])"
    
      def send_request(self, num):
        """
        模拟浏览器发起请求
        :param num: num
        :return: html_str
        """
        html_str = requests.get(self.url.format(
          num), headers=self.headers).content.decode()
        # print(html_str)
    
        return html_str
    
      def parse_data(self, html_str):
        """
        用于解析发起请求返回的数据
        :param html_str:
        :return: each_page_urls
        """
        # 将返回的 html字符串 转换为 element对象,用于xpath操作
        element_obj = etree.HTML(html_str)
        # print(element_obj)
    
        # Grab the url of every blog post on this page
        # ("mainBox" is the id of CSDN's article-list container)
        each_page_urls = element_obj.xpath(
          '//*[@id="mainBox"]/main/div[2]/div/h4/a/@href')
        # print(each_page_urls)
    
        return each_page_urls
    
      def parseIPList(self, url="http://www.xicidaili.com/"):
        """
        爬取最新代理ip,来源:西刺代理
        注意:西刺代理容易被封,如遇到IP被封情况,采用以下两种方法即可解决:
        方法一:请参考我上一篇博客《Python 实现快代理IP爬虫》 ===> 喜欢研究的同学,可参考对接此接口
        方法二:直接屏蔽掉此接口,不使用代理也能正常使用
        :param url: "http://www.xicidaili.com/"
        :return: 代理IP列表ips
        """
        ips = []
        request = urllib.request.Request(url, headers=self.firefoxHead)
        response = urllib.request.urlopen(request)
        soup = BeautifulSoup(response, "lxml")
        tds = soup.find_all("td")
        for td in tds:
          string = str(td.string)
          if re.search(self.IPRegular, string):
            ips.append(string)
        # print(ips)
        return ips
    
      def main(self, total_page, loop_times, each_num):
        """
        调度方法
        :param total_page: 设置博客总页数
        :param loop_times: 设置循环次数
        :param each_num: 设置每一页要随机挑选文章数
        :return:
        """
        i = 0
        # Loop for the configured number of passes
        while i < loop_times:
          # Iterate over the page numbers of the article list
          for j in range(total_page):
            # Build each page's url, send the request, and get the response
            html_str = self.send_request(j + 1)

            # Parse the response into this page's list of article urls
            each_page_urls = self.parse_data(html_str)

            # Optionally call parseIPList for a random proxy IP (anti-crawler measure)
            # ips = self.parseIPList()
            # proxies = {"http": "{}:8080".format(
            #   ips[random.randint(0, 40)])}

            # Pick each_num articles at random from this page
            for x in range(each_num):
              # Visiting a random article each time helps dodge anti-crawler checks
              current_url = random.choice(each_page_urls)
              status = bool(requests.get(
                current_url, headers=self.headers).content)
              print("Visiting article: {}, status: {}".format(current_url, status))
              time.sleep(1)  # pause 1 second between articles (anti-crawler)
            time.sleep(1)  # pause 1 second between pages (anti-crawler)
          i += 1
    
    
    if __name__ == '__main__':
      bs = BlogSpider()
      bs.main(7, 200, 3)  # see the main() docstring; adjust the numbers as needed
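
    To route the requests through the proxies scraped by parseIPList, uncomment the
    proxy block in main and pass the dict to requests.get via its proxies parameter.
    The sketch below is not part of the original script: it assumes parseIPList
    returns at least one usable IP, that each proxy listens on port 8080 as in the
    original comment, and it picks an IP with random.choice rather than indexing a
    fixed range. Since the blog URLs are https, the dict also needs an "https"
    entry, or requests will bypass the proxy entirely.

    import random
    import requests

    bs = BlogSpider()
    ips = bs.parseIPList()
    # Assumption: every scraped proxy listens on port 8080 (as in the original comment)
    proxy = "http://{}:8080".format(random.choice(ips))
    # requests selects the proxy by URL scheme, so cover both http and https
    proxies = {"http": proxy, "https": proxy}
    resp = requests.get("https://blog.csdn.net/PY0312/", headers=bs.headers,
              proxies=proxies, timeout=10)  # timeout added as a safeguard
    print(resp.status_code)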