python - How to run django-apscheduler in a Docker container that runs Django via gunicorn

jhiyze9q posted on 2024-01-05 in Python

Is it possible to run django-apscheduler in a Docker container that runs Django via gunicorn? The problem I currently have is that the custom manage.py command in my entrypoint script runs forever, so gunicorn never gets executed.
My entrypoint script:

```sh
#!/bin/sh
python manage.py runapscheduler --settings=core.settings_dev_docker
```

My runapscheduler.py:

```python
# runapscheduler.py
import logging

from django.conf import settings
from apscheduler.schedulers.blocking import BlockingScheduler
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
from django.core.management.base import BaseCommand
from django_apscheduler.jobstores import DjangoJobStore
from django_apscheduler.models import DjangoJobExecution
from django_apscheduler import util

from backend.scheduler.scheduler import scheduler

logger = logging.getLogger("backend")


def my_job():
    logger.error("Hello World!")
    # Your job processing logic here...


# The `close_old_connections` decorator ensures that database connections
# that have become unusable or obsolete are closed before and after your
# job has run. You should use it to wrap any scheduled job that accesses
# the Django database in any way.
@util.close_old_connections
# TODO: Change max_age to keep old jobs longer
def delete_old_job_executions(max_age=604_800):
    """
    This job deletes APScheduler job execution entries older than `max_age`
    from the database. It helps to prevent the database from filling up with
    old historical records that are no longer useful.

    :param max_age: The maximum length of time to retain historical job
                    execution records. Defaults to 7 days.
    """
    DjangoJobExecution.objects.delete_old_job_executions(max_age)


class Command(BaseCommand):
    help = "Runs APScheduler."

    def handle(self, *args, **options):
        # scheduler = BlockingScheduler(timezone=settings.TIME_ZONE)
        # scheduler.add_jobstore(DjangoJobStore(), "default")
        scheduler.add_job(
            my_job,
            trigger=CronTrigger(minute="*/1"),  # Every minute
            id="my_job",  # The `id` assigned to each job MUST be unique
            max_instances=1,
            replace_existing=True,
        )
        logger.error("Added job 'my_job'.")

        scheduler.add_job(
            delete_old_job_executions,
            trigger=CronTrigger(
                day_of_week="mon", hour="00", minute="00"
            ),  # Midnight on Monday, before the start of the next work week.
            id="delete_old_job_executions",
            max_instances=1,
            replace_existing=True,
        )
        logger.error("Added weekly job: 'delete_old_job_executions'.")

        try:
            logger.error("Starting scheduler...")
            scheduler.start()
        except KeyboardInterrupt:
            logger.error("Stopping scheduler...")
            scheduler.shutdown()
            logger.error("Scheduler shut down successfully!")
```

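A note on why the entrypoint hangs: the symptom (the command "runs forever") indicates the scheduler runs in blocking mode, so `scheduler.start()` never returns and the shell never reaches the next line of the entrypoint. A stdlib-only toy of the same pattern (`blocking_start` is a made-up stand-in for illustration, not APScheduler's API):

```python
import time


def blocking_start(jobs, iterations=None):
    """Hypothetical stand-in for BlockingScheduler.start(): runs each job
    once per tick and, with iterations=None, loops forever in the
    foreground, so the calling script never proceeds past this call."""
    n = 0
    while iterations is None or n < iterations:
        for job in jobs:
            job()
        n += 1
        time.sleep(0.01)


# With the default iterations=None this call would never return,
# exactly like `python manage.py runapscheduler` in the entrypoint.
```

Any fix therefore has to either background the scheduler or run it as a separate process.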

The command in my Docker container looks like this:

```yaml
command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
```


How do I run runapscheduler correctly so that gunicorn also runs? Do I have to create a separate process for runapscheduler?

56lgkhnf1#

I ran into this problem and got it working. I use docker-compose to start the process, but that is not relevant here:

```yaml
version: "3.9"
services:
  app:
    container_name: django
    build: .
    command: >
      bash -c "pipenv run python manage.py makemigrations
      && pipenv run python manage.py migrate
      & wait
      pipenv run python manage.py runserver 0.0.0.0:8000
      & pipenv run python manage.py startscheduler"
    volumes:
      - ./xy:/app
    ports:
      - 8000:8000
    environment:
      - HOST=db
    depends_on:
      db:
        condition: service_healthy
```

The important part is where we provide the command:

  • If you chain the commands with `&&`, my second-to-last command never exits, so the next command never starts
  • If you chain them with `&`, the two run in parallel

I use `wait` to wait for the first block to finish, then start the scheduler and the app's Gunicorn server together.
Bonus tip: if you configure logging in settings.py (instead of relying on print), the management command's log output shows up in runserver's log stream.
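The same parallel-start idea can also be expressed as a small Python entrypoint instead of shell chaining. This is my own sketch, not from the answer; the gunicorn and manage.py invocations are assumed to match the question's setup:

```python
#!/usr/bin/env python
"""Hypothetical container entrypoint: start gunicorn and the APScheduler
management command as two parallel child processes, mirroring '&' + 'wait'."""
import subprocess
import sys


def run_parallel(commands):
    """Start every command as a child process in parallel (like chaining
    with '&') and wait for all of them (like 'wait'). Returns the exit
    codes in the same order as the commands."""
    procs = [subprocess.Popen(cmd) for cmd in commands]
    return [p.wait() for p in procs]


if __name__ == "__main__":
    codes = run_parallel([
        ["gunicorn", "core.wsgi:application", "--bind", "0.0.0.0:8000"],
        [sys.executable, "manage.py", "runapscheduler",
         "--settings=core.settings_dev_docker"],
    ])
    sys.exit(max(codes, default=0))
```

Note that neither process restarts the other if it crashes; for production, a real process supervisor (or running the scheduler as its own container, as the question already suspected) is the more robust layout.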

