Usage:
from fake_useragent import UserAgent
ua = UserAgent()

# user agent for IE
print(ua.ie)
Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)
# Opera
print(ua.opera)
Opera/9.80 (X11; Linux i686; U; ru) Presto/2.8.131 Version/11.11
# Chrome
print(ua.chrome)
Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.2 (KHTML, like Gecko) Chrome/22.0.1216.0 Safari/537.2
# Firefox
print(ua.firefox)
Mozilla/5.0 (Windows NT 6.2; Win64; x64; rv:16.0.1) Gecko/20121011 Firefox/16.0.1
# Safari
print(ua.safari)
Mozilla/5.0 (iPad; CPU OS 6_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/6.0 Mobile/10A5355d Safari/8536.25
The most practical feature
For writing crawlers, though, what I find most practical is being able to vary the headers at will; they really need to be random. Here I print three randomly generated user agents, and all three are different: the randomness is strong and very convenient.
from fake_useragent import UserAgent

ua = UserAgent()
print(ua.random)
print(ua.random)
print(ua.random)

Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:15.0) Gecko/20100101 Firefox/15.0.1
Mozilla/5.0 (Windows NT 6.2; Win64; x64; rv:16.0.1) Gecko/20121011 Firefox/16.0.1
Opera/9.80 (X11; Linux i686; U; ru) Presto/2.8.131 Version/11.11
How to use it in a crawler
import requests
from fake_useragent import UserAgent

ua = UserAgent()
headers = {'User-Agent': ua.random}
url = 'URL of the page to crawl'
resp = requests.get(url, headers=headers)
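If every request in a crawl should carry a different User-Agent, you can simply call ua.random again before each request. A minimal sketch along those lines (the URL list below is just a placeholder):

import requests
from fake_useragent import UserAgent

ua = UserAgent()

# placeholder list of pages to crawl
urls = ['https://example.com/page1', 'https://example.com/page2']

for url in urls:
    # pick a fresh random User-Agent for every request
    headers = {'User-Agent': ua.random}
    resp = requests.get(url, headers=headers)
    print(resp.status_code, headers['User-Agent'])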
Notes:
- fake-useragent caches the data it collects in a temp folder, e.g. /tmp. To update the data:
from fake_useragent import UserAgent

ua = UserAgent()
ua.update()
- Sometimes, because of network or other problems, an exception is raised (fake_useragent.errors.FakeUserAgentError: Maximum amount of retries reached). You can disable the cache server (I fell into this pit myself by not reading the docs carefully); a defensive fallback sketch is also given after this list:
from fake_useragent import UserAgent

ua = UserAgent(use_cache_server=False)
- You can also provide your own local data file (v0.1.4+):
import fake_useragent

# It is strongly recommended to include a version suffix in the file name
location = '/home/user/fake_useragent%s.json' % fake_useragent.VERSION

ua = fake_useragent.UserAgent(path=location)
ua.random
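As mentioned in the note about FakeUserAgentError above, if fetching the data still fails you can fall back to a hard-coded User-Agent string instead of letting the crawler crash. A rough sketch, assuming the example fallback string below (any reasonable desktop UA would do):

from fake_useragent import UserAgent
from fake_useragent.errors import FakeUserAgentError

# example fallback; any reasonable desktop User-Agent string works here
FALLBACK_UA = ('Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.2 '
               '(KHTML, like Gecko) Chrome/22.0.1216.0 Safari/537.2')

try:
    user_agent = UserAgent().random
except FakeUserAgentError:
    # data could not be fetched or updated; use the hard-coded string instead
    user_agent = FALLBACK_UA

headers = {'User-Agent': user_agent}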
Docs: https://pypi.org/project/fake-useragent/