# 4. Deploy spiders

Normally this is just for internal use.

Scrapyd is a daemon that can be started to schedule and run crawls.

<https://doc.scrapy.org/en/latest/index.html>

<http://scrapyd.readthedocs.io/en/latest/>

Configure your live instance hostname in scrapy.cfg. Once you have tested everything locally, you can deploy to the live Scrapyd instance and schedule crawls using scrapyd-client.
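As a minimal sketch, a `[deploy:live]` target in scrapy.cfg could look like the following. The hostname, project name, and credentials here are assumptions taken from the examples further down, not confirmed values:

```ini
# scrapy.cfg -- a sketch; url, project and credentials are assumptions
# based on the curl examples below, adjust to your actual setup.
[settings]
default = Hoaxlyspiders.settings

[deploy:live]
url = https://scrapyd.hoax.ly/
project = Hoaxlyspiders
username = htaccessusername
password = htaccesspassword
```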

`docker exec -ti cli bash`

`scrapyd-deploy live`
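If the deploy succeeds, scrapyd-deploy prints the server response. A typical run looks roughly like this (version number and spider count are illustrative and will differ):

```text
Packing version 1510239473
Deploying to project "Hoaxlyspiders" in https://scrapyd.hoax.ly/addversion.json
Server response (200):
{"status": "ok", "project": "Hoaxlyspiders", "version": "1510239473", "spiders": 1}
```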

Once deployed, you can interact directly with Scrapyd through its web API, either using the client:

From inside the `cli` container:

`scrapyd-client -t https://htaccessusername:htaccesspassword@scrapyd.hoax.ly/ schedule -p Hoaxlyspiders climatefeedback.org`

or with curl from anywhere else:

`curl https://htaccessusername:htaccesspassword@scrapyd.hoax.ly/schedule.json -d project=Hoaxlyspiders -d spider=climatefeedback.org`

`curl https://htaccessusername:htaccesspassword@scrapyd.hoax.ly/listprojects.json`

`curl "https://htaccessusername:htaccesspassword@scrapyd.hoax.ly/listspiders.json?project=Hoaxlyspiders"`
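These endpoints return JSON. As a rough sketch of what to expect (the jobid below is made up for illustration):

```text
# schedule.json
{"status": "ok", "jobid": "6487ec79947edab326d6db28a2d86511e8247444"}

# listspiders.json
{"status": "ok", "spiders": ["climatefeedback.org"]}
```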

A crawl can be scheduled to run regularly by deploying the spider to a dedicated server.
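Scrapyd itself has no built-in recurring schedule, so one option on that server is a cron entry that hits schedule.json. A minimal sketch, reusing the assumed credentials and project name from above:

```bash
# /etc/cron.d/hoaxly-crawl -- a sketch: schedule the climatefeedback.org
# spider every night at 03:00 via the Scrapyd web API.
0 3 * * * root curl -s https://htaccessusername:htaccesspassword@scrapyd.hoax.ly/schedule.json -d project=Hoaxlyspiders -d spider=climatefeedback.org
```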

For Portia spiders, deployment should work the same way, but it currently requires a workaround in our settings.
