Here, we upgrade the Open edX platform from Ironwood to Juniper. This
upgrade does not come with many feature changes, but there are many
technical improvements under the hood:
- Upgrade from Python 2.7 to 3.5
- Upgrade from MongoDB v3.2 to v3.6
- Upgrade Ruby to 2.5.7
We took the opportunity to completely rethink the way locally running
platforms should be accessed for testing purposes. It is no longer
possible to access a running platform from http://localhost and
http://studio.localhost. Instead, users should access
http://local.overhang.io and http://studio.local.overhang.io. This
drastically simplifies internal communication between Docker containers.
To upgrade, users should simply run:

```
tutor local quickstart
```

For Kubernetes platforms, the upgrade process is outlined when running:

```
tutor k8s upgrade --from=ironwood
```
- [Improvement] Add `dig`, `ping` utilities to openedx-dev Docker image
- [Bugfix] Resolve "Can't connect to MySQL server" on init
- [Improvement] Make it possible to customize the MySQL root username,
for connecting to external MySQL databases (see the sketch after this
list)
- [Improvement] Upgrade Android app to v2.21.1 and enable many features,
such as downloading videos to SD card. Thanks for the help @ejklock!
- [Bugfix] Fix Android app crash when accessing course
- [Improvement] Add ability to rescore SCORM units
- [Bugfix] Fix scoring of graded SCORM units
- [Improvement] Increase maximum uploaded file size in the CMS from 10
to 100 MB.
- 💥[Improvement] Do not deploy an Ingress or SSL/TLS certificate issuer
resource by default in Kubernetes
- [Improvement] Fix TLS certificate generation in k8s
- 💥[Improvement] Radically change the way jobs are run: we no longer
"exec", but instead run a dedicated container.
- 💥[Improvement] Upgrade k8s certificate issuer to
cert-manager.io/v1alpha2
- [Feature] Add SCORM XBlock to default openedx docker image
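As a sketch of the MySQL root username customization mentioned above
(the parameter names are assumed from this release; the host and
credentials are placeholders):

```
# Point Tutor at an external MySQL server whose admin user is not "root"
tutor config save \
  --set ACTIVATE_MYSQL=false \
  --set MYSQL_HOST=mysql.example.com \
  --set MYSQL_ROOT_USERNAME=admin \
  --set MYSQL_ROOT_PASSWORD=p4ssw0rd
```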
There are too many different ways to deploy an Ingress resource and to
generate SSL/TLS certificates: it's too much responsibility to make that
decision for the end user.
Running jobs was previously done with "exec". This was because it
allowed us to avoid copying too much container specification information
from the docker-compose/deployments files to the jobs files. However,
this was limiting:
- In order to run a job, the corresponding container had to be running.
This was particularly painful in Kubernetes, where containers keep
crashing as long as migrations have not run correctly.
- Containers in which we need to run jobs needed to be present in the
docker-compose/deployments files. This is unnecessary, for example when
mysql is disabled, or in the case of the certbot container.
Now, we create dedicated jobs files, both for local and k8s deployment.
This introduces a little redundancy, but not too much. Note that
dependent containers are not listed in the docker-compose.jobs.yml file,
so an actual platform is still supposed to be running when we launch the
jobs.
This also introduces a subtle change: jobs now go through the container
entrypoint prior to running. This is probably a good thing, as it avoids
issues with missing or incorrect environment variables.
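To illustrate, launching a job now looks roughly like this under the
hood (a sketch: the service and file names are assumptions, not the
exact implementation):

```
# Run a one-off "lms-job" container defined in the dedicated jobs file,
# instead of exec-ing into the running "lms" container
docker-compose -f local/docker-compose.yml -f local/docker-compose.jobs.yml \
  run --rm lms-job ./manage.py lms migrate
```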
In k8s, we find ourselves interacting way too much with the kubectl
utility. Parsing output from the CLI is a pain. So we need to switch to
the native kubernetes client library.
The "Certificate" objects are no longer required. As a consequence, the
"k8s-ingress-certificates" has become useless and should be removed from
plugins.
- [Feature] Make it easy to add custom translation strings to the
openedx Docker image
- [Improvement] Make it possible to rely on a different npm registry for
faster image building
Users can now add custom translation strings to a locale folder at build
time, very much in the same way as custom themes or requirements. This
is quite convenient, although it does require quite a bit of time to
rebuild the docker images.
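For example, adding a custom French translation might look like this (a
sketch: the exact locale path under the build context is an assumption):

```
# Drop custom strings into the image build context, then rebuild
mkdir -p "$(tutor config printroot)/env/build/openedx/locale/fr/LC_MESSAGES"
cp django.po "$(tutor config printroot)/env/build/openedx/locale/fr/LC_MESSAGES/"
tutor images build openedx
```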
During an incident at npmjs.org it was extremely difficult to pull
nodejs packages -- so we made it possible to pull from a custom
registry, deployed for instance with Verdaccio (https://verdaccio.org/).
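A sketch of how one might point the build at such a registry (the
`NPM_REGISTRY` parameter name is an assumption):

```
# Use a local Verdaccio instance instead of the default npm registry
tutor config save --set NPM_REGISTRY=http://localhost:4873/
tutor images build openedx
```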
- [Bugfix] Make sure all emails (including "password reset") are
properly saved to a local file in development mode (#315)
- [Improvement] Add `openedx-development-settings` patch to patch the
LMS and the CMS simultaneously in development
- [Bugfix] Fix missing celery tasks in the CMS
In development, emails sent from edx-platform were using the
"file_email" channel from edx-ace ("edX's automated communication
engine"). This channel was failing because it tries to write to a file
located in the /edx folder, which does not exist in tutor containers. To
fix this, we configure edx-ace to rely on the django email backend,
which itself is configured to send emails to a file in development. It
turns out that this backend was also configured to store emails in a
file located in the /edx folder, so we had to add the standard
EMAIL_FILE_PATH django setting to our development settings.
It was easier to reconfigure the django file email backend than the
edx-ace file_email channel because the output path of the latter cannot
be modified by a setting.
Note that this causes all emails to be stored in local files instead of
being sent to actual recipients. This is the default behaviour in Open
edX, and indeed in most default django apps (in development). This is a
good thing! If, for some reason, developers would like to try out email
sending during development, they should modify the EMAIL_BACKEND
setting and set it to 'django.core.mail.backends.smtp.EmailBackend'.
This is quite easy to achieve with the help of a plugin:
```
name: sendemailsindev
version: 0.1.0
patches:
  openedx-development-settings: |
    # actually send emails in dev
    EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
```
Close #315
When we were changing unit titles in the CMS, the changes were taking a
long time to be reflected in the LMS. That's because the cache key that
corresponds to the course structure was not being updated. It was the
responsibility of an asynchronous LMS celery worker to update this cache
entry. However, this was impossible in most cases because tasks
triggered in the CMS were only processed by CMS workers. That is, unless
we are using a custom celery router:
https://celery.readthedocs.io/en/latest/userguide/routing.html#routers
This is what edx-platform does in the devstack: certain CMS tasks are
forwarded both to CMS and to LMS workers. This is achieved by defining
the ALTERNATE_WORKER_QUEUES="lms" django setting in the CMS.
Adding this setting to Tutor solves the problem in production. However,
in development mode Open edX runs without workers
(`CELERY_ALWAYS_EAGER=True`). This means that the course structure will
not be automatically updated when running `tutor dev` commands, which is
a shame. The alternative is to define the
"block_structure.invalidate_cache_on_publish" waffle switch. This can be
done from the UI (in /admin/waffle/switch/add/) or by running:

```
tutor dev run lms ./manage.py lms waffle_switch block_structure.invalidate_cache_on_publish on --create
```
However, this flag seems to slow down access to the LMS for the first
user who tries to access the course after it has been updated.
Close #302
- [Feature] Add `encrypt` template filter to conveniently add
htpasswd-based authentication to nginx
- [Bugfix] Fix "missing tty" during init in cron jobs
Client-side translations are stored in "djangojs.js" files. Supposedly,
these files were properly compiled prior to the Ironwood release -- but
this is not the case, so we need to re-generate them.
Also, we need to re-generate the djangojs.js files for the custom,
downloaded locales. The assets collection settings are also fixed to
take into account the separate locale folder.
This step needs to happen prior to static assets collection, as the
djangojs files are collected to the staticfiles/ folder.
See these conversations:
- https://discuss.overhang.io/t/localization-not-works-perfect/363
- https://discuss.openedx.org/t/localization-not-work-for-js-files/1671
TLS certificate generation and renewal were failing in cron jobs with
"The input device is not a TTY" errors. This is because the "-it" docker
option does not work when a TTY is not available.
- [Bugfix] Fix "Unable to resolve dependency" error during forum
initialisation
- [Feature] Add `settheme` command to easily assign a theme to a domain
name (see the sketch after this list)
- [Improvement] Modify nginx access logs to include request scheme and
server name (plugin developers should use the "tutor" log format)
- [Bugfix] Fix DNS resolution of restarted service
- [Feature] Restart multiple services with `local restart`
- [Feature] Make it possible to easily reload openedx gunicorn process
with `tutor local exec lms reload-gunicorn`
- [Improvement] Rename lms/cms_worker to lms/cms-worker in local
deployment
- [Improvement] Add the management plugin to the rabbitmq container
- [Improvement] Make it possible to run an Elasticsearch service on
https
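As announced in the list above, assigning a theme can now be done from
the command line. A sketch (the theme and domain names are
placeholders):

```
# Assign "mytheme" to the LMS and CMS domains
tutor local settheme mytheme yourdomain.com studio.yourdomain.com
```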
When running "bundle exec rake search:initialize" in the forum, we were
getting the following error:
/openedx/ruby/lib/ruby/site_ruby/2.4.0/rubygems/resolver.rb:235:in `search_for': Unable to resolve dependency: user requested 'did_you_mean (= 1.1.0)' (Gem::UnsatisfiableDependencyError)
It happens this error suddently happened because rubygems-update was not
pinned to a specific version. v3.0.4 was working and v3.1.2 was not.
As it happens, we don't need rubygems-update, so we simply get rid of it
entirely.
To modify the nginx access logs, we had to create a new log_format.
Plugin developers are strongly encouraged to start using this log format
by adding the `access_log /var/log/nginx/access.log tutor;` directive to
their extra nginx configurations.
In order to load this log format early, the `tutor.conf` config file had
to be renamed to something that sorts early in the alphabet (hence the
leading underscore). Older users would face an error on nginx reload, so
old "tutor.conf" files are automatically removed on config save.
Previously, a common error when restarting e.g. the LMS or the CMS was
that nginx redirected to the wrong container. For instance:

```
access studio.localhost
tutor local restart lms cms
access studio.localhost
```

On the second access to studio.localhost, nginx would frequently route
the request to the LMS, which resulted in a 400 error.
We solve this issue by setting a TTL of 10s on the nginx proxy name
resolution.
More docs:
http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver
The management plugin exposes an http API that makes it possible to
monitor rabbitmq. By default, we do not expose the management dashboard.
As a consequence, the API is only usable by other internal containers.
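For example, another container on the platform's Docker network could
query the API like this (a sketch: the credentials are placeholders, and
port 15672 is the RabbitMQ management default, not a Tutor-specific
guarantee):

```
# Query the management API from inside the platform's network
tutor local run lms curl -u user:password http://rabbitmq:15672/api/overview
```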
- [Improvement] Switch edx-platform from open-release/ironwood.2 tag to
the open-release/ironwood.master branch
- [Security] Upgrade django to 1.11.28
- [Improvement] Make it possible to configure the elasticsearch heap
size
- [Bugfix] Fix broken elasticsearch environment variables
- [Improvement] Restore more recent Android app version (#289).
There are too many patches on top of ironwood.2, and it's not practical
to pull them all one by one. We still want to build on top of a specific
version, and not a branch, so we use a dirty hack to guarantee that the
docker image is properly rebuilt by CI when we change it.
Because we are running a version of elasticsearch older than Methuselah,
the docker environment variables were not properly taken into account.
For instance, the cluster name and "mlockall" settings were incorrect,
as we could see by running:

```
$ tutor local run lms curl elasticsearch:9200 | grep cluster_name
...
"cluster_name" : "elasticsearch",
$ tutor local run lms curl elasticsearch:9200/_nodes/process?pretty | grep mlock
...
"mlockall" : false
```
See
https://discuss.overhang.io/t/elastic-container-is-not-being-removed/312/3
for discussion.
This fix also introduces a new tutor configuration setting to adjust the
elasticsearch heap size.
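A sketch of adjusting the new setting (the `ELASTICSEARCH_HEAP_SIZE`
parameter name is assumed from this release):

```
# Allocate 2 GB to the elasticsearch JVM heap, then re-apply the config
tutor config save --set ELASTICSEARCH_HEAP_SIZE=2g
tutor local quickstart
```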
A prior change used the ironwood.1 tag to build the Android app in an
attempt to solve #289. It turns out that this change was unnecessary. So
here we revert to a more recent release of the Android app. Instead of
building from the master branch (which might create surprises), we build
from a fixed release tag.
The source repo and version are customisable via build arguments.
This is for supporting json-based plugins. The great thing about this
change is that it allows us to easily print plugin version numbers in
`plugins list`.
- [Bugfix] Fix oauth authentication in dev mode
- [Improvement] Upgrade to the 3.7 docker-compose syntax
- [Improvement] The `dev runserver` command can now be run for just any
service
- 💥[Feature] `dev run/exec` commands now support generic options which
are passed to docker-compose. Consequently, defining the
`TUTOR_EDX_PLATFORM_PATH` environment variable no longer works. Instead,
users are encouraged to explicitly pass the `-v` option, define a
command alias or create a `docker-compose.override.yml` file.
By de-duplicating the code between dev.py and local.py, we are able to
support more docker-compose run/up/stop options passed from tutor. To do
so, we had to disable some features, such as automatically mounting the
edx-platform repo when the TUTOR_EDX_PLATFORM_PATH environment variable
was defined.
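For instance, mounting a local edx-platform checkout now looks like this
(a sketch; /openedx/edx-platform is the repo location inside the image):

```
# Bind-mount a local edx-platform fork into the development container
tutor dev run -v /path/to/edx-platform:/openedx/edx-platform lms bash
```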
- [Improvement] Make it possible to override the project name in development mode
- [Bugfix] Fix user switching in development mode
- [Bugfix] Fix "k8s quickstart" crash
"Course visibility In Catalog" advanced CMS setting was being ignored
because the default the COURSE_CATALOG_VISIBILITY_PERMISSION and
COURSE_ABOUT_VISIBILITY_PERMISSION settings were incorrectly set to the
legacy default, which "see_exists".
See
https://discuss.overhang.io/t/catalog-visibility-in-tutor-deployed-open-edx/214
It makes more sense to document this command instead of adding it to the
`local` commands. If need be, in the future we should be able to re-add
it as a plugin.
This command adds a burden on the `local` and `k8s` commands. It does
not make sense to provide this command out of the box but not other
administration commands. Instead, we should better document how to run
regular `manage.py` commands from tutor.
Close #269.
- [Improvement] Add `k8s-deployments-nginx-volume-mounts` patch
- [Bugfix] Fix running forum locally when neither elasticsearch nor
mongodb is activated (#266)
- [Bugfix] Fix MongoDB URL in forum when running separate service (#267)
- 💥[Improvement] Better `dev` commands, with dedicated development
docker image. One of the consequences is that the `dev watchthemes`
command is replaced by `dev run lms watchthemes`.
- [Improvement] `images` commands now accept multiple `image` arguments
MONGOHQ_URL was not properly set when MONGODB_HOST/PORT was customised.
That's because the environment variable was being defined in the
Dockerfile, and not at runtime.
Close #267
The `dev` commands now rely on a different openedx-dev docker image.
This gives us multiple improvements:
- no more chown in base image
- faster chown in development
- mounted requirements volume in development
- fix static assets issues
- bundled ipdb/vim/... packages, which are convenient for development
Close #235
- [Feature] Introduce tutor docker image
- [Feature] Introduce `local hook` command for plugin development.
- [Bugfix] Persist `private.txt` file between two runs of `config save`.
(#247)
- [Improvement] Added configuration values to limit the number of
gunicorn workers for the LMS and CMS.
- 💥[Improvement] Get rid of mysql-client container
- [Improvement] Add "local-docker-compose-lms/cms-dependencies" plugin
patches
- [Improvement] Use "exec" instead of "run" to initialise local platform
This has an impact on plugin hooks. Plugin hooks that needed to run
inside mysql-client now need to run inside mysql container. This
simplifies the deployment, as we no longer have an empty mysql-client
container sitting around.
When mysql is not enabled (ACTIVATE_MYSQL=False) the mysql container is
simply a mysql client.
We ran into an issue when trying to run migrations when the MinIO plugin
is activated. As seen in issues #243 and #244, the
certificates.0003_data__default_modes migration requires access to
MinIO. To do so, the MinIO host must be reached. That means that SSL
certificates must be in place (if https is enabled) and that the nginx
server must be booted. However, it does not make sense to require that
the minio container depends on the nginx container. So, in effect, we
need a fully working platform to run migrations.
In a sense, this is better as it harmonises the init task with k8s: in
k8s, init was already run with exec.
The next step is to get rid of these ugly mysql-client/minio-client
containers that must be up at all times. It would be much simpler to
just exec the commands inside the mysql/minio containers.
All existing plugins are added to the binary bundle, in their latest
version, so that users don't need to pip install tutor.
Also, the tutor MANIFEST.in file was removed to simplify the management
of package data.
Close #242.
- [Feature] Modify ``createuser`` commands to define a password from the
command line (see the sketches after this list)
- [Improvement] Better yaml value parsing from command line
- [Feature] Add `dev exec` command
- [Bugfix] Fix incorrect notes settings definition
- [Improvement] Make it possible to start/stop/reboot a selection of
services
- [Improvement] Add `local/k8s reboot` commands
- [Improvement] Add `-U/--unset` option to `config save`
- [Bugfix] Fix insecure static asset loading when web proxy is enabled
- [Improvement] Rename `SECRET_KEY` configuration parameter to
`OPENEDX_SECRET_KEY`
- [Improvement] Add support for SSL and TLS in external SMTP server
(#231)
- [Bugfix] Fix missing video transcripts in LMS (#229)
- [Improvement] Make it possible to enable/disable multiple plugins at
once
- [Improvement] Add a few local and k8s patches for plugins
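A few sketches of the new command-line options announced in the list
above (exact flag names may differ slightly):

```
# Create a superuser non-interactively, with a password from the CLI
tutor local createuser --staff --superuser --password=p4ssw0rd admin admin@example.com
# Revert a previously customised parameter to its default value
tutor config save --unset SMTP_PORT
# Start only a selection of services
tutor local start lms cms
```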
Video transcripts uploaded in the CMS were not visible in the LMS. This
was a symptom caused by the fact that the LMS and the CMS do not share
the same MEDIA_ROOT. We initially thought that data uploaded in the CMS
(such as transcripts) was stored in a shared data service, such as
mongodb. It is, in fact, not. This makes it even more important to run
an object storage service like minio for distributed services.
Close #229
The 0003 migration from the certificates app of the LMS requires that
the S3-like platform is correctly setup during initialisation. To solve
this issue, we introduce a pre-init hook that is run prior to the LMS
migrations.