Previously, it was not possible to override the docker registry for just
one or a few services. Setting the DOCKER_REGISTRY configuration
parameter would apply to all images. This was inconvenient. To resolve
this, we include the docker registry value in the DOCKER_IMAGE_*
configuration parameters. This allows users to override the docker
registry for individual services by defining the corresponding
DOCKER_IMAGE_SERVICENAME configuration parameter.
See https://discuss.overhang.io/t/kubernetes-ci-cd-pipeline/765/3
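For instance, assuming the openedx image is governed by a DOCKER_IMAGE_OPENEDX parameter, it could be pointed at a custom registry with something along these lines (registry and tag are placeholders):
tutor config save --set DOCKER_IMAGE_OPENEDX=registry.mycompany.com/overhangio/openedx:10.0.0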
Here, we upgrade the Open edX platform from Ironwood to Juniper. This
upgrade does not come with many feature changes, but there are many
technical improvements under the hood:
- Upgrade from Python 2.7 to 3.5
- Upgrade from MongoDB v3.2 to v3.6
- Upgrade Ruby to 2.5.7
We took the opportunity to completely rethink the way locally running
platforms should be accessed for testing purposes. It is no longer
possible to access a running platform from http://localhost and
http://studio.localhost. Instead, users should access
http://local.overhang.io and https://studio.local.overhang.io. This
drastically simplifies internal communication between Docker containers.
To upgrade, users should simply run:
tutor local quickstart
For Kubernetes platforms, the upgrade process is outlined when running:
tutor k8s upgrade --from=ironwood
There are too many different ways to deploy an Ingress resource and to
generate SSL/TLS certificates: it's too much responsibility to make that
decision for the end user.
Running jobs was previously done with "exec". This was because it
allowed us to avoid copying too much container specification information
from the docker-compose/deployments files to the jobs files. However,
this was limiting:
- In order to run a job, the corresponding container had to be running.
This was particularly painful in Kubernetes, where containers keep
crashing as long as migrations have not been run correctly.
- Containers in which we needed to run jobs had to be present in the
docker-compose/deployments files. This is unnecessary, for example when
mysql is disabled, or in the case of the certbot container.
Now, we create dedicated jobs files, both for local and k8s deployment.
This introduces a little redundancy, but not too much. Note that
dependent containers are not listed in the docker-compose.jobs.yml file,
so an actual platform is still supposed to be running when we launch the
jobs.
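As a rough sketch of the new mechanism (service and command names are illustrative), a job is now run against the dedicated jobs file with something like:
docker-compose -f docker-compose.yml -f docker-compose.jobs.yml run --rm lms-job ./manage.py lms migrate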
This also introduces a subtle change: now, jobs go through the container
entrypoint prior to running. This is probably a good thing, as it helps
avoid running jobs with forgotten or incorrect environment variables.
In k8s, we find ourselves interacting way too much with the kubectl
utility. Parsing output from the CLI is a pain, so we need to switch to
the native Kubernetes client library.
TLS renewal and generation were failing in cron jobs because of "The
input device is not a TTY" errors. This is because the "-it" docker
options do not work when a TTY is not available.
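For illustration (container and command names are placeholders), the fix boils down to dropping the "-t" flag when no terminal is attached:
docker exec -it some_container some_command  # fails in cron: "The input device is not a TTY"
docker exec -i some_container some_command   # works without a TTY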
This adds support for JSON-based plugins. The great thing about this
change is that it allows us to easily print plugin version numbers in
`plugins list`.
By de-duplicating the code between dev.py and local.py, we are able to
support more docker-compose run/up/stop options passed from tutor. To do
so, we had to disable some features, such as automatically mounting the
edx-platform repo when the TUTOR_EDX_PLATFORM_PATH environment variable
was defined.
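For instance, a local checkout of edx-platform can still be mounted explicitly by passing the corresponding docker-compose option (the path is a placeholder):
tutor dev run --volume=/path/to/edx-platform:/openedx/edx-platform lms bash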
It makes more sense to document this command instead of adding it to the
`local` commands. If need be, in the future we should be able to re-add
it as a plugin.
This command adds a burden on the `local` and `k8s` commands. It does
not make sense to provide this command out of the box but not other
administration commands. Instead, we should better document how to run
regular `manage.py` commands from tutor.
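For reference, regular manage.py commands can already be run with something like:
tutor local run lms ./manage.py lms migrate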
Close #269.
The `dev` commands now rely on a different openedx-dev docker image.
This gives us multiple improvements:
- no more chown in base image
- faster chown in development
- mounted requirements volume in development
- fixed static assets issues
- bundled ipdb/vim/... packages, which are convenient for development
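In practice, this new image is the one used by the usual development workflow commands, e.g.:
tutor dev runserver lms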
Close #235
This has an impact on plugin hooks: hooks that needed to run inside the
mysql-client container now need to run inside the mysql container. This
simplifies the deployment, as we no longer have an empty mysql-client
container sitting around.
When mysql is not enabled (ACTIVATE_MYSQL=False), the mysql container
simply acts as a mysql client.
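For example, a mysql shell can be opened directly inside the mysql container with something along these lines (run against the local docker-compose file; credentials are placeholders):
docker-compose exec mysql mysql --user=root --password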
We ran into an issue when trying to run migrations when the MinIO plugin
is activated. As seen in issues #243 and #244, the
certificates.0003_data__default_modes migration requires access to
MinIO. This requires that the MinIO host be reachable, which means that
SSL certificates must be in place (if https is enabled) and that the
nginx server must be running. However, it does not make sense to make
the minio container depend on the nginx container. So, in effect, we
need a fully working platform to run migrations.
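Concretely, this means that the platform should be started before the initialisation job is launched, with something like:
tutor local start -d
tutor local init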
In a sense, this is better as it harmonises the init task with k8s: in
k8s, init was already run with exec.
The next step is to get rid of these ugly mysql-client/minio-client
containers that must be up at all times. It would be much simpler to
just exec the commands inside the mysql/minio containers.