Python Learning - Crawlers (2): Scraping WeChat Articles


Using proxies to get around anti-crawling measures when scraping WeChat articles

Learning resource: https://www.bilibili.com/video/av19057145/?p=18

Open the Sogou search engine and you will see a WeChat tab in the navigation bar. This is Sogou's search entry for WeChat articles, and it is the entry point for our crawler. Log in, then inspect the page elements to see how the search request is built. Finally, open PyCharm and create a new project containing a spider script.
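Before writing the full script, it helps to see what a single search request looks like. Below is a minimal sketch based on the parameters used in the script that follows: query is the search keyword, type=2 selects article results (as opposed to official-account results), and page is the 1-based result page number.

from urllib.parse import urlencode

base_url = 'https://weixin.sogou.com/weixin?'
params = {'query': '爬虫', 'type': 2, 'page': 1}

# Prints the URL the spider will request for page 1 of the keyword '爬虫',
# e.g. https://weixin.sogou.com/weixin?query=%E7%88%AC%E8%99%AB&type=2&page=1
print(base_url + urlencode(params))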

from urllib.parse import urlencode
import requests

base_url = 'https://weixin.sogou.com/weixin?'

# Cookie and User-Agent copied from a logged-in browser session via the developer tools.
headers = {
    'Cookie': 'CXID=B5BFDE61DD73217E137903780130D2C6; SUID=FECDF38C3965860A5C5158A500046538; SUV=006FB41A6E526CDC5C8634F3BBC7D355; sw_uuid=1053891526; ssuid=6154507169; LSTMV=192,176; LCLKINT=38003; sg_uuid=1121471965; IPLOC=CN3500; ABTEST=0|1570172306|v1; weixinIndexVisited=1; SNUID=7A1188F67B7FEE6A460C45E67B1DE830; JSESSIONID=aaaX1sma9XLFZ5MzIuq1w; ppinf=5|1570172685|1571382285|dHJ1c3Q6MToxfGNsaWVudGlkOjQ6MjAxN3x1bmlxbmFtZToyNzolRTYlODAlQUElRTYlODAlQUElRTQlQjglQjh8Y3J0OjEwOjE1NzAxNzI2ODV8cmVmbmljazoyNzolRTYlODAlQUElRTYlODAlQUElRTQlQjglQjh8dXNlcmlkOjQ0Om85dDJsdUdrYXJ1NFYwdDJ2Ry1EWnloTk9YendAd2VpeGluLnNvaHUuY29tfA; pprdig=Wbq68g8_mtwF-93esae-pBR0f13lIbKwzICVJP-ZnWJ1xkLnXel60zKsLXQEYWZbG2eflsSCWXfc2ddFSaMxMsmgoz6kMZaJ3imimyrLXlQ0OD9jZf2x7wyYONwjetZZwvVj7rw5NTztl6izQsFPsBDOTI2ToxONWrzRVzf6A5M; sgid=22-43552407-AV2W7w22iatgtNJxh3stU0BM; ppmdig=15701726860000001358e81c8111ae0616426c12d8c307aa; sct=3',
    'Host': 'weixin.sogou.com',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36'
}

keyword = '爬虫'


def get_html(url):
    """Request one search result page; a 302 response means Sogou has blocked this IP."""
    try:
        # allow_redirects=False keeps the 302 visible instead of silently
        # following it to Sogou's verification page.
        response = requests.get(url, allow_redirects=False, headers=headers)
        if response.status_code == 200:
            return response.text
        if response.status_code == 302:
            print("302")
    except requests.ConnectionError:
        return get_html(url)


def get_index(keyword, page):
    """Build the search URL for one result page of the keyword and fetch it."""
    data = {
        'query': keyword,
        'type': 2,
        'page': page
    }
    queries = urlencode(data)
    url = base_url + queries
    html = get_html(url)
    return html


def main():
    for page in range(1, 101):
        html = get_index(keyword, page)
        print(html)


if __name__ == '__main__':
    main()

After a while the IP gets blocked, so the page content can no longer be retrieved: Sogou starts answering with 302 redirects (to a verification page) and the script only prints "302". This is where the proxy pool configured earlier comes in. It can be started directly from the PyCharm terminal, and the spider can then ask it for a fresh proxy whenever the current IP is blocked, as sketched below.
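Here is a minimal sketch of how the proxy pool might be wired in. It assumes the pool exposes an HTTP endpoint such as http://127.0.0.1:5555/random that returns one proxy address (host:port) per request; the endpoint path and port are assumptions, so adjust them to whatever your own proxy pool service provides.

import requests

# Assumed proxy-pool endpoint; change to match your proxy pool service.
PROXY_POOL_URL = 'http://127.0.0.1:5555/random'

proxy = None  # current proxy; None means connect directly


def get_proxy():
    """Fetch one proxy address (e.g. '123.45.67.89:8888') from the pool."""
    try:
        resp = requests.get(PROXY_POOL_URL)
        if resp.status_code == 200:
            return resp.text.strip()
    except requests.ConnectionError:
        return None


def get_html_with_proxy(url, headers, max_retries=5):
    """Like get_html(), but switches to a new proxy whenever Sogou answers 302."""
    global proxy
    if max_retries == 0:
        return None
    # requests routes both schemes through the HTTP proxy address.
    proxies = {'http': 'http://' + proxy, 'https': 'http://' + proxy} if proxy else None
    try:
        resp = requests.get(url, allow_redirects=False, headers=headers, proxies=proxies)
        if resp.status_code == 200:
            return resp.text
        if resp.status_code == 302:
            # Blocked: grab a fresh proxy from the pool and retry.
            proxy = get_proxy()
            return get_html_with_proxy(url, headers, max_retries - 1)
    except requests.ConnectionError:
        proxy = get_proxy()
        return get_html_with_proxy(url, headers, max_retries - 1)

Replacing the call to get_html(url) inside get_index() with get_html_with_proxy(url, headers) lets the crawler keep walking through the 100 result pages even after the original IP has been blocked.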
