Scrapyd Deployment Summary

2. Installing the Environment
Install scrapyd: https://github.com/scrapy/scrapyd
Install scrapyd-client: https://github.com/scrapy/scrapyd-client
I recommend downloading the latest source from GitHub and installing it with python setup.py install, because the packages on the pip index may not be the latest version.
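For example, installing both packages from source might look like this (a sketch; the directory names simply follow the repository names):

git clone https://github.com/scrapy/scrapyd.git
cd scrapyd
python setup.py install
cd ..

git clone https://github.com/scrapy/scrapyd-client.git
cd scrapyd-client
python setup.py install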

3. Verifying the Installation
Type scrapyd at the command prompt. If the service starts and keeps running (listening on port 6800 by default), the installation succeeded.

Open http://localhost:6800/ and you should see the Scrapyd web console.

Click Jobs to check how your spiders are running.
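You can also check the server from code. A minimal sketch using requests against the daemonstatus.json endpoint (available in recent Scrapyd releases; the host and port are the defaults used above):

import requests

# A healthy server answers with JSON such as {"status": "ok", ...}
resp = requests.get("http://localhost:6800/daemonstatus.json")
print resp.json()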

Next comes the scrapyd-deploy part, which can be a real headache. According to the official documentation,
scrapyd-deploy -l
lists the deploy targets currently configured, but when I ran this command Windows reported that it was not recognized as a valid command.
Solution:
In the Scripts folder of your Python installation (mine is "D:\program files\python2.7.0\Scripts"), create a file named scrapyd-deploy.bat with the following content:
@echo off
"D:\program files\python2.7.0\python.exe" "D:\program files\python2.7.0\Scripts\scrapyd-deploy" %*
Then open a new command prompt and scrapyd-deploy -l will work.

4. Deploying a Project to Scrapyd
scrapyd-deploy <target> -p <project>
Here target is the name of a deploy target defined in scrapy.cfg (it points at your Scrapyd server), and project is the name of your project.
First edit the scrapy.cfg file of the project you want to deploy. Mine contains the following deploy section:
[deploy:scrapyd1]
url = http://localhost:6800/
project = baidu
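For reference, a complete scrapy.cfg usually also contains the [settings] section that scrapy startproject generates; the deploy section is added alongside it (the module name baidu.settings below is an assumption based on the project name):

[settings]
default = baidu.settings

[deploy:scrapyd1]
url = http://localhost:6800/
project = baidu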

So the command I ran was:
scrapyd-deploy scrapyd1 -p baidu

The command packs the project into an egg, uploads it to the server, and prints the server's JSON response; a status of "ok" means the deployment succeeded.
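To confirm that the project is now on the server, you can also query the listprojects.json endpoint (an extra check, not in the original steps):

curl http://localhost:6800/listprojects.json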

5. Starting a Spider
Start a spider with the following command:
curl http://localhost:6800/schedule.json -d project=PROJECT_NAME -d spider=SPIDER_NAME
Replace PROJECT_NAME with the name of your project and SPIDER_NAME with the name of your spider.
The command I ran was:
curl http://localhost:6800/schedule.json -d project=baidu -d spider=baidu

Because this test spider is very simple, it finished almost immediately. On the Jobs page of the web console you can see that one spider has run to completion and is listed in the Finished column.
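On success, schedule.json answers with JSON that includes a jobid identifying this run. A minimal sketch of the same call from Python with requests (same project and spider names as above):

import requests

resp = requests.post("http://localhost:6800/schedule.json",
                     data={"project": "baidu", "spider": "baidu"})
# Expect something like {"status": "ok", "jobid": "..."}; keep the jobid if you want to cancel the run later
print resp.json()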
6. Stopping a Spider
curl http://localhost:6800/cancel.json -d project=PROJECT_NAME -d job=JOB_ID
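JOB_ID is the jobid returned by schedule.json when the spider was started. If you no longer have it, you can look up the ids of pending and running jobs with the listjobs.json endpoint (an extra lookup, not in the original steps):

curl "http://localhost:6800/listjobs.json?project=baidu"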

7. A Scheduling Script Written from the Documentation
Prerequisite: scrapyd-client is installed (the script below also uses the requests library).

# coding=utf-8
import requests
import time
import json
import os
import sys
reload(sys)
sys.setdefaultencoding('utf-8')  # Python 2 workaround so UTF-8 output prints without encoding errors
BASE_DIR = os.path.dirname(os.path.abspath(os.path.dirname(__file__)))
sys.path.insert(0, os.path.join(BASE_DIR, 'conf'))

# Fill these in before running, e.g. SCRAPYD_NODE = "localhost:6800"
SCRAPYD_NODE = ""
PROJECT_NAME = ""


class SpiderSwitch(object):

    def __init__(self, project_name="", node=""):
        self.project_name = project_name
        self.ip_port = node

    def delete_project(self):
        """
        删除项目
        """
        delete_data = {"project": self.project_name}
        delete_url = "http://{}/delproject.json".format(self.ip_port)
        response = requests.post(delete_url, data=delete_data)
        print "delete_project: [{}]".format(response.content.decode())

    def start_spider(self, spider_name):
        """
        启动爬虫的方法
        :return: 返回启动的状态
        """
        start_data = {'project': self.project_name, 'spider': spider_name}
        start_url = "http://{}/schedule.json".format(self.ip_port)
        response = requests.post(url=start_url, data=start_data)
        print "start_spider: [{}]".format(response.content.decode())

    def stop_spider(self, job_id):
        """
        关闭爬虫的方法
        :return:返回关闭结果
        """
        stop_data = {'project': self.project_name, 'job': job_id}
        stop_url = "http://{}/cancel.json".format(self.ip_port)
        response = requests.post(stop_url, data=stop_data)
        print "stop_spider: [{}]".format(response.content.decode())

    def show_spiders(self):
        """
        获取scrapyd服务器上名为myproject的工程下的爬虫清单
        :return: 返回查询到的状态
        """
        show_url = 'http://{}/listspiders.json?project={}'.format(self.ip_port, self.project_name)
        response = requests.get(show_url)
        print "show_spiders : [{}]".format(json.loads(response.content.decode()))

    def show_project(self):
        """
        查看所有项目
        :return:返回查询结果
        """
        show_url = "http://{}/listprojects.json".format(self.ip_port)
        response = requests.get(show_url)
        print "show_projects: [{}]".format(response.content.decode())

    def show_jobs(self):
        """
        查看正在运行的job
        :return: 返回查看结果
        """
        job_url = "http://{}/listjobs.json?project={}".format(self.ip_port, self.project_name)
        response = requests.get(job_url)
        print "jobs: [{}]".format(response.content.decode())
        return response.content.decode()

    def start_some(self, name, n):
        """
        一次启动多个任务
        :param:n: 启动的任务数量
        """
        for i in xrange(n):
            time.sleep(0.2)
            self.start_spider(name)

    def stop_project(self):
        """
        停止改项目下的所有爬虫
        """
        job_list = json.loads(self.show_jobs()).get(u"running")
        if job_list:
            for job in job_list:
                spider_job_id = job.get(u"id")
                self.stop_spider(spider_job_id)
                time.sleep(0.2)
        print u"the jobs for the project are all closed"


if __name__ == '__main__':
    scrapyd_node = SCRAPYD_NODE
    spider_switch = SpiderSwitch(project_name=PROJECT_NAME, node=scrapyd_node)
    task = sys.argv[1]

    if task == "start":
        spider_name = sys.argv[2]
        spider_switch.start_spider(spider_name)

    if task == "stop":
        job_id = sys.argv[2]
        spider_switch.stop_spider(job_id)

    if task == "show_jobs":
        spider_switch.show_jobs()

    if task == "show_spiders":
        spider_switch.show_spiders()

    if task == "delete_project":
        spider_switch.delete_project()

    if task == "show_projects":
        spider_switch.show_project()

    if task == "start_some":
        spider_name = sys.argv[2]
        job_num = int(sys.argv[3])
        spider_switch.start_some(spider_name, job_num)

    if task == "stop_project":
        spider_switch.stop_project()
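
If you save the script as, say, spider_switch.py (the filename is my own choice), it can be driven from the command line like this, using the project and spider names from the earlier sections:

python spider_switch.py show_projects
python spider_switch.py start baidu
python spider_switch.py show_jobs
python spider_switch.py stop <JOB_ID>
python spider_switch.py stop_project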

