- [Improvement] Add basic CORS configuration to the LMS for subdomains of
the LMS
- [Feature] Add support for `images build --add-host` option (thanks
@grinderz!)
- [Bugfix] Fix podman compatibility by replacing the `docker-compose rm`
command with `docker-compose stop` when stopping containers
- [Improvement] Improve plugin data deletion
- [Improvement] Introduce the `OPENEDX_COMMON_VERSION` setting
- [Bugfix] Make it possible to run init jobs without starting the entire
platform
- [Improvement] Reduce "openedx" Docker image size with static asset
de-duplication
This is done by explicitly listing job dependencies. Unfortunately,
it's not yet possible to move `init` before `start` in `quickstart`,
because some services, such as discovery, depend on the LMS, which takes
a few seconds to boot up. Actually, discovery also depends on nginx, as
it points to "local.overhang.io" when referring to the LMS.
- [Bugfix] Upgrade all services to open-release/juniper.3
- [Bugfix] Fix upload of video transcripts to S3
- [Improvement] Remember whether the user is running a production
platform during interactive configuration
- [Bugfix] Fix incorrect loading of some resources from localhost:18000
in development
- [Bugfix] Fix "SameSite=None Secure=False" cookie error for users
accessing the LMS with the latest release of Google Chrome
- [Security] Apply javascript security patch ([pull
request](https://github.com/edx/edx-platform/pull/24762))
- [Bugfix] Fix "FileError" on Scorm package upload in Scorm XBlock
- 💥[Improvement] Serve openedx static assets with
[whitenoise](http://whitenoise.evans.io/en/stable/) instead of nginx.
This removes the `k8s-deployments-nginx-init-containers` patch. Plugins
are encouraged to implement static asset serving with Whitenoise as
well.
- [Bugfix] Fix dependency on mysql service when mysql is not activated
- [Improvement] Improve openedx Docker image build time and size with
multi-stage build
- 💥[Feature] Get rid of outdated sysadmin dashboard in LMS at /sysadmin
In the LMS, some resources were loaded from localhost:18000. For
instance: http://localhost:18000/static/images/logo.png
This was due to the fact that the LMS_BASE, LMS_ROOT_URL and thus
SITE_NAME settings are overwritten by the devstack settings, so we need
to define them again in development.
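For reference, the fix amounts to re-declaring these values in the development settings. A minimal sketch, with illustrative values rather than the literal Tutor configuration:
    # Re-declare the values that the devstack settings overwrite, so that
    # static asset URLs point at the right host in development.
    LMS_BASE = "local.overhang.io:8000"  # illustrative host:port
    LMS_ROOT_URL = "http://{}".format(LMS_BASE)
    SITE_NAME = LMS_BASE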
This drastically simplifies volume management, as it is no longer
necessary to manually copy static assets from the docker image to the
bind-mounted volume.
This deprecates the "k8s-deployments-nginx-init-containers" patch, as we
no longer need to init the nginx container. Plugins are encouraged to
start using whitenoise as well for serving static assets.
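For plugin authors, serving static assets with whitenoise boils down to standard Django settings. A minimal sketch following the whitenoise documentation, not necessarily the exact Tutor configuration:
    # Insert whitenoise right after Django's SecurityMiddleware (index is
    # illustrative) and serve compressed, hashed static files.
    MIDDLEWARE.insert(1, "whitenoise.middleware.WhiteNoiseMiddleware")
    STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"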
TODO:
- test media serving: DOES NOT WORK. Whitenoise was designed to serve a
fixed list of static files. Goddammit.
- compare performance
This reduces the size of the final image from 3.25 GB to 2.8 GB. Also, it
should be faster to rebuild the image in most cases. For instance, we
will not have to re-install nodejs requirements after part of the
edx-platform repo was modified.
- [Security] Apply edx-platform upstream xss security fixes ([pull
request](https://github.com/edx/edx-platform/pull/24568))
- 💥[Feature] Make it possible to override the docker registry for just
a few services by setting `DOCKER_IMAGE_SERVICENAME` values.
Previously, it was not possible to override the docker registry for just
one or a few services. Setting the DOCKER_REGISTRY configuration
parameter would apply to all images. This was inconvenient. To resolve
this, we include the docker registry value in the DOCKER_IMAGE_*
configuration parameters. This allows users to override the docker
registry individually by defining the DOCKER_IMAGE_SERVICENAME
configuration parameter.
See https://discuss.overhang.io/t/kubernetes-ci-cd-pipeline/765/3
The dashboard is available at /sysadmin. It's a CRUD interface for
managing users and courses.
Enabling this interface required that the DATA_DIR setting was not a
string, but a Path object.
Close #353.
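As an illustration of the DATA_DIR change mentioned above, a hypothetical sketch (the path is illustrative):
    from pathlib import Path
    # DATA_DIR must be a Path object rather than a plain string
    DATA_DIR = Path("/openedx/data")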
In k8s, creating a user is an interactive command, so it needs to run in
exec. Thus, the DJANGO_SETTINGS_MODULE needs to be defined for this
command.
Close #344
Half of the tasks from edx.lms.core.default celery queue were being
processed by the CMS worker. Unfortunately, this CMS worker crashes on
some of those tasks. For instance, activation emails complain of a
missing "django_markup" template tag library because "xss_utils" is not
part of the installed apps in the CMS.
The problem is that we need this edx.lms.core.default queue to be part
of the CELERY_QUEUES in the cms in order to send tasks from the CMS to
the LMS. The trick to resolve this situation is to ask the CMS celery
worker to not process the tasks from this queue.
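The idea can be illustrated with plain Celery. Queue names are taken from above, while the app name and broker URL are illustrative; this is not the literal edx-platform configuration:
    from celery import Celery
    from kombu import Queue
    app = Celery("cms", broker="amqp://")
    # The CMS declares both queues so that it can *publish* tasks to the LMS queue...
    app.conf.task_queues = [
        Queue("edx.cms.core.default"),
        Queue("edx.lms.core.default"),
    ]
    # ...but its worker is started so that it only *consumes* from its own queue,
    # leaving edx.lms.core.default to the LMS worker:
    #   celery --app=cms worker --queues=edx.cms.core.default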
To debug this issue, run in the LMS:
from student.tasks import send_activation_email
send_activation_email("{}")
Then watch the logs of the lms and cms workers. If the CMS worker picks
up this task (50% of the time prior to this change), then we have an
issue.
See:
https://discuss.overhang.io/t/reset-password-email-sent-but-activation-email-dont/690
"tutor ui" was failing miserably, printing a lot of garbled characters
in the shell. In fact, a FileNotFound error was being raised and
automatically caught by the ui command. When removing the catch all,
this was the error that was raised:
FileNotFoundError: [Errno 2] No such file or directory:
'/tmp/_MEIimsqmq/wcwidth/version.json' │
This is resolved on SO:
https://stackoverflow.com/questions/62155242/pyinstaller-cant-find-wcwidth-version-json-when-running-executable
This was due to incorrectly loading the coursewarehistoryextended app in
the installed applications. Also, the database router in charge of
routing requests to the student_history_module database must be disabled.
In CI, the webpack-stats.json file sometimes contains just:
{"status":"compiling"}
This was due to the fact that the `subprocess.call(...)` command in
openedx-assets did not check whether the command was killed -- for lack
of memory for instance. This is resolved by replacing "call(...)" with
"check_call(...)".
Here, we upgrade the Open edX platform from Ironwood to Juniper. This
upgrade does not come with many feature changes, but there are many
technical improvements under the hood:
- Upgrade from Python 2.7 to 3.5
- Upgrade from MongoDB v3.2 to v3.6
- Upgrade Ruby to 2.5.7
We took the opportunity to completely rethink the way locally running
platforms should be accessed for testing purposes. It is no longer
possible to access a running platform from http://localhost and
http://studio.localhost. Instead, users should access
http://local.overhang.io and https://studio.local.overhang.io. This
drastically simplifies internal communication between Docker containers.
To upgrade, users should simply run:
tutor local quickstart
For Kubernetes platform, the upgrade process is outlined when running:
tutor k8s upgrade --from=ironwood
- [Improvement] Add `dig`, `ping` utilities to openedx-dev Docker image
- [Bugfix] Resolve "Can't connect to MySQL server" on init
- [Improvement] Make it possible to customize the MySQL root username,
for connecting to external MySQL databases
- [Improvement] Upgrade Android app to v2.21.1 and enable many features,
such as downloading videos to SD card. Thanks for the help @ejklock!
- [Bugfix] Fix Android app crash when accessing course
- [Improvement] Add ability to rescore SCORM units
- [Bugfix] Fix scoring of graded SCORM units
- [Improvement] Increase the maximum uploaded file size in the CMS from
10 MB to 100 MB.
- 💥[Improvement] Do not deploy an Ingress or SSL/TLS certificate issuer
resource by default in Kubernetes
- [Improvement] Fix TLS certificate generation in k8s
- 💥[Improvement] Radically change the way jobs are run: we no longer
"exec", but instead run a dedicated container.
- 💥[Improvement] Upgrade k8s certificate issuer to
cert-manager.io/v1alpha2
- [Feature] Add SCORM XBlock to default openedx docker image
There are too many different ways to deploy an Ingress resource and to
generate SSL/TLS certificates: it's too much responsibility to make that
decision for the end user.
Running jobs was previously done with "exec". This was because it
allowed us to avoid copying too much container specification information
from the docker-compose/deployments files to the jobs files. However,
this was limiting:
- In order to run a job, the corresponding container had to be running.
This was particularly painful in Kubernetes, where containers are
crashing as long as migrations are not correctly run.
- Containers in which we need to run jobs needed to be present in the
docker-compose/deployments files. This is unnecessary, for example when
mysql is disabled, or in the case of the certbot container.
Now, we create dedicated jobs files, both for local and k8s deployment.
This introduces a little redundancy, but not too much. Note that
dependent containers are not listed in the docker-compose.jobs.yml file,
so an actual platform is still supposed to be running when we launch the
jobs.
This also introduces a subtle change: now, jobs go through the container
entrypoint prior to running. This is probably a good thing, as it avoids
running jobs with missing or incorrect environment variables.
In k8s, we find ourselves interacting way too much with the kubectl
utility. Parsing output from the CLI is a pain. So we need to switch to
the native kubernetes client library.
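The "native client library" here is the official kubernetes Python package; a minimal sketch of what that interaction could look like (namespace and label selector are illustrative):
    from kubernetes import client, config
    config.load_kube_config()
    v1 = client.CoreV1Api()
    # list pods without shelling out to kubectl and parsing its output
    for pod in v1.list_namespaced_pod("openedx", label_selector="app=lms").items:
        print(pod.metadata.name, pod.status.phase)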
The "Certificate" objects are no longer required. As a consequence, the
"k8s-ingress-certificates" has become useless and should be removed from
plugins.
- [Feature] Make it easy to add custom translation strings to the
openedx Docker image
- [Improvement] Make it possible to rely on a different npm registry for
faster image building
Users can now add custom translation strings to a locale folder at build
time, very much in the same way as custom themes or requirements. This
is quite convenient, although it does require quite a bit of time to
rebuild the docker images.
During an incident at npmjs.org it was extremely difficult to pull
nodejs packages -- so we made it possible to pull from a custom
registry, deployed for instance with Verdaccio (https://verdaccio.org/).
- [Bugfix] Make sure all emails (including "password reset") are
properly saved to a local file in development mode (#315)
- [Improvement] Add `openedx-development-settings` patch to patch the
LMS and the CMS simultaneously in development
- [Bugfix] Fix missing celery tasks in the CMS
In development, emails sent from edx-platform were using the
"file_email" channel from edx-ace ("edX's automated communication
engine"). This channel was failing because it tries to write to a file
located in the /edx folder, which does not exist in tutor containers. To
fix this, we configure edx-ace to rely on the django email backend,
which itself is configured to send emails to a file in development. It
turns out that this backend was also configured to store emails in a
file located in the /edx folder, so we had to add the standard
EMAIL_FILE_PATH django setting to our development settings.
It was easier to reconfigure the django file email backend than the
edx-ace file_email channel because the output path of the latter cannot
be modified by a setting.
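Concretely, the relevant development settings end up along these lines (the output path is illustrative):
    # Write outgoing emails to files in a directory that actually exists inside
    # the container, instead of the missing /edx folder.
    EMAIL_BACKEND = "django.core.mail.backends.filebased.EmailBackend"
    EMAIL_FILE_PATH = "/tmp/mail"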
Note that this causes all emails to be stored in local files instead of
being sent to actual recipients. This is the default behaviour in Open
edX, and indeed in most default django apps (in development). This is a
good thing! If, for some reason, developers would like to try out email
sending during development, they should modify the EMAIL_BACKEND
setting and set it to 'django.core.mail.backends.smtp.EmailBackend'.
This is quite easy to achieve with the help of a plugin:
name: sendemailsindev
version: 0.1.0
patches:
  openedx-development-settings: |
    # actually send emails in dev
    EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
Close #315
When we were changing unit titles in the CMS, the changes were taking a
long time to be reflected in the LMS. That's because the cache key that
corresponds to the course structure was not being updated. It was the
responsibility of an asynchronous LMS celery worker to update this cache
entry. However, this was impossible in most cases because tasks
triggered in the CMS were only processed by CMS workers. That is, unless
we are using a custom celery router:
https://celery.readthedocs.io/en/latest/userguide/routing.html#routers
This is what edx-platform does in the devstack: certain CMS tasks are
forwarded both to CMS and to LMS workers. This is achieved by defining
the ALTERNATE_WORKER_QUEUES="lms" django setting in the CMS.
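In other words, the production-side fix is a single CMS setting:
    # CMS settings: also forward the relevant CMS tasks to the LMS celery workers
    ALTERNATE_WORKER_QUEUES = "lms"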
Adding this setting to Tutor solves the problem in production. However,
in development mode Open edX runs without workers
(`CELERY_ALWAYS_EAGER=True`). This means that the course structure will
not be automatically updated when running `tutor dev` commands, which is
a shame. The alternative is to define the
"block_structure.invalidate_cache_on_publish" waffle switch. This can be
done from the UI (in /admin/waffle/switch/add/) or by running:
tutor dev run lms ./manage.py lms waffle_switch block_structure.invalidate_cache_on_publish on --create
However, this flag seems to slow down access to the LMS for the first
user who tries to access the course after it has been updated.
Close #302
- [Feature] Add `encrypt` template filter to conveniently add
htpasswd-based authentication to nginx
- [Bugfix] Fix "missing tty" during init in cron jobs
Client-side translations are stored in "djangojs.js" files. Supposedly,
these files were properly compiled prior to the Ironwood release -- but
this is not the case, so we need to re-generate them.
Also, we need to re-generate the djangojs.js files for the custom,
downloaded locales. The assets collection settings are also fixed to
take into account the separate locale folder.
This step needs to happen prior to static assets collection, as the
djangojs files are collected to the staticfiles/ folder.
See these conversations:
https://discuss.overhang.io/t/localization-not-works-perfect/363
https://discuss.openedx.org/t/localization-not-work-for-js-files/1671
TLS renewal and generation were failing in cron jobs because of "The
input device is not a TTY" errors. This is because the "-it" docker
option does not work when a TTY is not available.
- [Bugfix] Fix "Unable to resolve dependency" error during forum
initialisation
- [Feature] Add `settheme` command to easily assign a theme to a domain
name
- [Improvement] Modify nginx access logs to include request scheme and
server name (plugin developers should use the "tutor" log format)
- [Bugfix] Fix DNS resolution of restarted service
- [Feature] Restart multiple services with `local restart`
- [Feature] Make it possible to easily reload the openedx gunicorn
process with `tutor local exec lms reload-gunicorn`
- [Improvement] Rename lms/cms_worker to lms/cms-worker in local
deployment
- [Improvement] Add the management plugin to the rabbitmq container
- [Improvement] Make it possible to run an Elasticsearch service over
HTTPS
When running "bundle exec rake search:initialize" in the forum, we were
getting the following error:
/openedx/ruby/lib/ruby/site_ruby/2.4.0/rubygems/resolver.rb:235:in `search_for': Unable to resolve dependency: user requested 'did_you_mean (= 1.1.0)' (Gem::UnsatisfiableDependencyError)
It turns out this error suddenly appeared because rubygems-update was
not pinned to a specific version. v3.0.4 was working and v3.1.2 was not.
As it happens, we don't need rubygems-update, so we simply get rid of it
entirely.