The "latest" tag is a pain to maintain: it's a tag that we delete and
re-create at every release. Whenever we delete it, the binaries become
unavailable on GitHub until they are re-generated. Thus, from now on, we
conform to good practices (as exemplified by the
github.com/docker/compose project) and distribute only pinned releases.
The "nightly" tag remains, for now, as it allows us to distribute beta
features. It may disappear in the future.
For some reason, the "nosetests" binary is not available on macOS in
Travis CI. This is what we get when we try to install nose:
    Requirement already satisfied: nose==1.3.7 in
    /usr/local/Cellar/numpy/1.14.5/libexec/nose/lib/python3.6/site-packages
    (from -r requirements/dev.txt (line 25)) (1.3.7)
The environment is no longer generated separately for each target; it is
generated only when the configuration is saved.
Note that the environment is automatically updated during
re-configuration, based on a "version" file stored in the environment.
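As a rough illustration, the update check could look something like the
following shell sketch; the file name, paths and commands are assumptions
used for illustration, not the project's actual implementation:

    # Hypothetical sketch of the re-configuration check.
    env_version_file="$TUTOR_ROOT/env/version"
    current_version="$(tutor --version)"
    if [ ! -f "$env_version_file" ] || [ "$(cat "$env_version_file")" != "$current_version" ]; then
        echo "Environment is out of date, regenerating it..."
        tutor config save                         # re-render the environment from the saved configuration
        echo "$current_version" > "$env_version_file"
    fi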
Replace all make commands with a single "tutor" binary. Environment and
data are all moved to ~/.tutor/local/share/tutor. We take the
opportunity to add a web UI and revamp the documentation.
This is a complete rewrite.
Close #121.
Close #147.
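For illustration, the new workflow looks roughly like this; the exact
subcommand names are assumptions and may differ between releases:

    # Hedged sketch of the single-binary workflow.
    tutor config save --interactive   # replaces "make configure"
    tutor local quickstart            # pull images and start the platform locally
    ls ~/.tutor/local/share/tutor     # environment and data now live outside the repository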
"env" now only generates the environment, and depends only on
config.json, which is run only when necessary. There exists only one
"make configure" command, which force-runs config.json and builds the
env.
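In practice, the intended usage is roughly the following (a sketch based
on the target names mentioned above):

    # "env" rebuilds the environment, triggering configuration only when config.json needs it:
    make env
    # "configure" force-runs the configuration step and then rebuilds the environment:
    make configure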
This allows us to deploy much faster: all we have to do is copy the
assets from the container to the shared volume.
We also changed the way themes are managed: similarly to static assets,
they are now packaged inside the docker image.
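For example, with assets and themes baked into the image, deployment
reduces to copying them out of the container into the shared volume; the
container name and paths below are assumptions used purely for
illustration:

    # Hypothetical sketch: copy pre-built static assets out of the running container.
    docker cp openedx_lms:/openedx/staticfiles/. /path/to/shared/volume/staticfiles/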
We don't need to run "chmod" on openedx files outside of development
mode. So, there is no need to set the USERID environment variable in
most cases. This should considerably accelerate pretty much all commands
that involve the openedx container.
For discussion, see PR #98.
webpack requires the NODE_ENV environment variable, which is incorrectly
set by "paver update_assets" in development mode. To avoid this issue, we
split update_assets into its subparts.
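As a rough sketch of the workaround, the point is to run webpack with an
explicit NODE_ENV instead of relying on "paver update_assets" to set it;
the webpack config file name below is an assumption:

    # Hedged sketch: make sure webpack sees the right NODE_ENV in development mode.
    export NODE_ENV=development
    webpack --progress --config=webpack.dev.config.js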
Template files are now directly loaded in the configurator container, so
that it is possible to run the configurator container directly, outside
of this project.
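This means the configurator can be invoked on its own, along the lines of
the following sketch; the image variable, mount point and command are
placeholders, not the project's actual invocation:

    # Hypothetical sketch: run the configurator image directly, outside of this project.
    docker run --rm -it \
        --volume="$(pwd)/config:/openedx/config" \
        "$CONFIGURATOR_IMAGE" \
        configure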
HTTPS is now fully supported. The ACTIVATE_HTTPS feature flag needs to
be set. Required domain names are LMS_HOST, preview.LMS_HOST and
CMS_HOST.
Close #46.
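For illustration, the relevant settings look roughly like this; the exact
file and variable format are assumptions, and the domain values are
examples only:

    # Hedged example of the HTTPS-related settings.
    ACTIVATE_HTTPS=1
    LMS_HOST=courses.example.com        # preview.courses.example.com must also resolve to the server
    CMS_HOST=studio.example.com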
We no longer run the `configure` script on the host. Instead, we run a
container that generates the configuration files. This opens the way for
more complex configuration templates that would be written in jinja2.
More complex templates are required for feature flags, such as SSL,
XQUEUE, etc.
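To give an idea of what such templates enable, a feature flag can
conditionally include a whole section of a generated file. This is only
an illustrative sketch, not an actual template from the project, and the
ACTIVATE_XQUEUE variable is an assumption:

    {# Hypothetical jinja2 snippet: include the xqueue service only when the feature is enabled. #}
    {% if ACTIVATE_XQUEUE %}
    xqueue:
      image: openedx-xqueue
    {% endif %}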
We replace the custom wait-for-greenlight script with dockerize: it is a
generic tool that allows us to wait for different services to be up.
Also, one day, it will allow us to generate config files dynamically
(maybe).
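For example, a service command can be wrapped with dockerize so that it
only starts once its dependencies accept connections; the host names and
the wrapped gunicorn command below are assumptions:

    # Wait for mysql and rabbitmq to accept TCP connections before starting the service.
    dockerize -wait tcp://mysql:3306 -wait tcp://rabbitmq:5672 -timeout 60s \
        gunicorn --bind=0.0.0.0:8000 lms.wsgi:application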
Xqueue containers consist of two services: a gunicorn service, which
receives requests from the LMS/CMS, and a worker service. I guess the
worker service receives jobs from the gunicorn service through rabbitmq
(but I'm less than certain about that).
While adding xqueue containers, we refactored the way mysql databases
are created, and how the root password is loaded. Also, we silenced some
options from the configure script.
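Concretely, the two xqueue services mentioned above boil down to
something like the following commands; the entrypoints, port and
management command are assumptions, not the exact commands used here:

    # gunicorn service: exposes the xqueue REST API that the LMS/CMS submit to.
    gunicorn --bind=0.0.0.0:8040 xqueue.wsgi:application
    # worker service: consumes submissions (presumably via rabbitmq) and forwards them to graders.
    python manage.py run_consumer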
Images are no longer built locally. Instead, they are downloaded from
Docker Hub. This completely changes the config file organisation. In
particular, we no longer copy configuration files to the original docker
image.
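With pre-built images, bringing up a platform is essentially a pull
followed by a start, along these lines (illustration only):

    # Download the published images and start the services.
    docker-compose pull
    docker-compose up -d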
Demo course "about" pages were not available (and probably many other
pages as well) because the preview url was the same as the real url.
Kudos to @dannielarriola for solving this!
Close issues #11 and #7.
This allows the user to run their own devstack inside the containers.
Yay!
Also, we handle file permissions cleanly: in docker-entrypoint.sh we
change the ownership of the data and edx-platform files to match the UID
of the user on the host machine. No more permission headaches!
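A minimal sketch of that entrypoint logic, assuming a USERID variable
passed in from the host (the variable name is borrowed from a later
commit; the paths and exact commands are assumptions):

    # Hedged sketch: align file ownership with the host user so bind-mounted files stay writable.
    if [ -n "$USERID" ]; then
        chown -R "$USERID" /openedx/data /openedx/edx-platform
    fi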
When running "wait-for-mysql.sh", the "echo 'show tables' | manage.py
dbshell" command is always considered a success ($? == 0), even if the
database is unreachable. So we replace this command with a global system
check.
This closes #3.
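A minimal sketch of what a more reliable wait loop can look like,
assuming a Django-level connection check (this is not necessarily the
exact check the project uses):

    # Hedged sketch: retry until Django can actually open a database connection.
    until ./manage.py lms shell -c "from django.db import connection; connection.ensure_connection()"; do
        echo "Waiting for mysql..."
        sleep 2
    done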