#
# This file is autogenerated by pip-compile
# To update, run:
#
#    pip-compile requirements/base.in
#
appdirs==1.4.4 # via -r requirements/base.in
cachetools==4.1.1 # via google-auth
certifi==2020.11.8 # via kubernetes, requests
chardet==3.0.4 # via requests
click-repl==0.1.6 # via -r requirements/base.in
click==7.1.2 # via -r requirements/base.in, click-repl
google-auth==1.23.0 # via kubernetes
idna==2.10 # via requests
jinja2==2.11.2 # via -r requirements/base.in
kubernetes==12.0.0 # via -r requirements/base.in
markupsafe==1.1.1 # via jinja2
oauthlib==3.1.0 # via requests-oauthlib
prompt-toolkit==3.0.8 # via click-repl
pyasn1-modules==0.2.8 # via google-auth
pyasn1==0.4.8 # via pyasn1-modules, rsa
pycryptodome==3.9.9 # via -r requirements/base.in
python-dateutil==2.8.1 # via kubernetes
pyyaml==5.3.1 # via -r requirements/base.in, kubernetes
requests-oauthlib==1.3.0 # via kubernetes
requests==2.24.0 # via kubernetes, requests-oauthlib
rsa==4.6 # via google-auth
six==1.15.0 # via click-repl, google-auth, kubernetes, python-dateutil, websocket-client
urllib3==1.25.11 # via -r requirements/base.in, kubernetes, requests
wcwidth==0.2.5 # via prompt-toolkit
websocket-client==0.57.0 # via kubernetes

# The following packages are considered to be unsafe in a requirements file:
# setuptools