This was due to incorrectly loading the coursewarehistoryextended app in the
installed applications. The database router in charge of routing requests to
the student_history_module database must also be disabled.
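For illustration, the router part of this fix is a one-line settings change.
A minimal sketch, assuming a standard Django settings module (the exact router
class shipped by edx-platform is not spelled out here):

    # Hedged sketch: stop routing requests to the student_history_module
    # database by disabling the database router entirely.
    DATABASE_ROUTERS = []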
In CI, the webpack-stats.json file sometimes contains just:
{"status":"compiling"}
This was because the `subprocess.call(...)` command in openedx-assets did not
check whether the command had been killed (for lack of memory, for instance).
This is resolved by replacing `call(...)` with `check_call(...)`.
Here, we upgrade the Open edX platform from Ironwood to Juniper. This
upgrade does not come with many feature changes, but there are many
technical improvements under the hood:
- Upgrade from Python 2.7 to 3.5
- Upgrade from MongoDB v3.2 to v3.6
- Upgrade Ruby to 2.5.7
We took the opportunity to completely rethink the way locally running
platforms should be accessed for testing purposes. It is no longer
possible to access a running platform from http://localhost and
http://studio.localhost. Instead, users should access
http://local.overhang.io and https://studio.local.overhang.io. This
drastically simplifies internal communication between Docker containers.
To upgrade, users should simply run:
tutor local quickstart
For Kubernetes platforms, the upgrade process is outlined by running:
tutor k8s upgrade --from=ironwood
- [Improvement] Add `dig`, `ping` utilities to openedx-dev Docker image
- [Bugfix] Resolve "Can't connect to MySQL server" on init
- [Improvement] Make it possible to customize the MySQL root username,
for connecting to external MySQL databases
- [Improvement] Upgrade Android app to v2.21.1 and enable many features,
such as downloading videos to SD card. Thanks for the help @ejklock!
- [Bugfix] Fix Android app crash when accessing course
- [Improvement] Add ability to rescore SCORM units
- [Bugfix] Fix scoring of graded SCORM units
- [Improvement] Increase the maximum uploaded file size in the CMS from 10 MB
to 100 MB.
- 💥[Improvement] Do not deploy an Ingress or SSL/TLS certificate issuer
resource by default in Kubernetes
- [Improvement] Fix TLS certificate generation in k8s
- 💥[Improvement] Radically change the way jobs are run: we no longer
"exec", but instead run a dedicated container.
- 💥[Improvement] Upgrade k8s certificate issuer to
cert-manager.io/v1alpha2
- [Feature] Add SCORM XBlock to the default openedx Docker image
When running "python setup.py install" in CI, we were getting
"requests distribution was not found and is required by kubernetes"
errors. I can reproduce this issue locally. The error disappears after
the same command is run a second time.
This is a similar issue: https://github.com/pypa/setuptools/issues/498
There are too many different ways to deploy an Ingress resource and to
generate SSL/TLS certificates: it's too much responsibility to make that
decision for the end user.
Now, when we run init for the forum, we go through the forum container
entrypoint. At this stage, the elasticsearch:9200/content url returns a
404 error, because, well, the init job has not run yet. So instead of
checking for the /content url, we simply check that the elasticsearch
container is up and running. Note that this might cause the initial
forum container to crash right after start.
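A sketch of the relaxed check, assuming a Python helper (the real entrypoint
may implement this differently):

    import time
    import urllib.request

    def wait_for_elasticsearch(url="http://elasticsearch:9200"):
        # Only require that elasticsearch answers at all: the /content
        # index does not exist until the init job has run, so probing
        # it would 404 forever at this stage.
        while True:
            try:
                urllib.request.urlopen(url, timeout=5)
                return
            except OSError:
                time.sleep(2)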
Running jobs was previously done with "exec". This was because it
allowed us to avoid copying too much container specification information
from the docker-compose/deployments files to the jobs files. However,
this was limiting:
- In order to run a job, the corresponding container had to be running. This
was particularly painful in Kubernetes, where containers keep crashing as
long as migrations have not run.
- Containers in which we need to run jobs had to be present in the
docker-compose/deployments files. This is unnecessary, for example when MySQL
is disabled, or in the case of the certbot container.
Now, we create dedicated jobs files, both for local and k8s deployment.
This introduces a little redundancy, but not too much. Note that
dependent containers are not listed in the docker-compose.jobs.yml file,
so an actual platform is still supposed to be running when we launch the
jobs.
This also introduces a subtle change: now, jobs go through the container
entrypoint prior to running. This is probably a good thing, as it will
avoid forgetting about incorrect environment variables.
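A hedged sketch of what running a dedicated container looks like, assuming
jobs are started with "docker-compose run" (the exact invocation is an
assumption):

    import subprocess

    def run_job(service, command):
        # Start a one-off container from the dedicated jobs file instead
        # of exec-ing into a running one. Unlike "exec", "run" goes
        # through the image entrypoint before executing the command.
        subprocess.check_call([
            "docker-compose",
            "-f", "docker-compose.yml",
            "-f", "docker-compose.jobs.yml",
            "run", "--rm", service,
            "sh", "-e", "-c", command,
        ])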
In k8s, we find ourselves interacting way too much with the kubectl
utility. Parsing output from the CLI is a pain. So we need to switch to
the native kubernetes client library.
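For instance, with the official kubernetes Python package, pod state can be
read as structured objects instead of scraping kubectl output (the "openedx"
namespace below is illustrative):

    from kubernetes import client, config

    # Load credentials the same way kubectl does (~/.kube/config).
    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Each pod is a typed object; no CLI table parsing required.
    for pod in v1.list_namespaced_pod(namespace="openedx").items:
        print(pod.metadata.name, pod.status.phase)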
The "Certificate" objects are no longer required. As a consequence, the
"k8s-ingress-certificates" has become useless and should be removed from
plugins.