Merge branch 'master' into nightly

This commit is contained in:
Régis Behmo 2023-06-14 22:49:05 +02:00
commit c575422d9c
73 changed files with 1575 additions and 1573 deletions

View File

@@ -38,7 +38,7 @@ jobs:
# https://github.com/actions/setup-python
uses: actions/setup-python@v3
with:
python-version: 3.7
python-version: 3.8
cache: 'pip'
cache-dependency-path: requirements/dev.txt
- name: Upgrade pip and setuptools

View File

@@ -14,11 +14,11 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v3
with:
python-version: 3.9
python-version: 3.8
cache: 'pip'
cache-dependency-path: requirements/dev.txt
- name: Upgrade pip
run: python -m pip install --upgrade pip setuptools==44.0.0
run: python -m pip install --upgrade pip setuptools
- name: Install dependencies
run: pip install -r requirements/dev.txt
- name: Static code analysis

View File

@@ -20,6 +20,37 @@ instructions, because git commits are used to generate release notes:
<!-- scriv-insert-here -->
<a id='changelog-16.0.0'></a>
## v16.0.0 (2023-06-14)
- 💥[Feature] Upgrade to Palm. (by @regisb)
- [Bugfix] Rename ORA2 file upload folder from "SET-ME-PLEASE (ex. bucket-name)" to "openedxuploads". This has the effect of moving the corresponding folder from the `<tutor root>/data/lms/ora2` directory. MinIO users were not affected by this bug.
- 💥[Improvement] During registration, the honor code and terms of service links are no longer visible by default. For most platforms, these links did not work anyway.
- 💥[Deprecation] Halt support for Python 3.7. The binary release of Tutor is also no longer compatible with macOS 10.
- 💥[Deprecation] Drop support for `docker-compose`, also known as Compose V1. The `docker compose` (no hyphen) plugin must be installed.
- 💥[Refactor] We simplify the hooks API by getting rid of the `ContextTemplate`, `FilterTemplate` and `ActionTemplate` classes. As a consequence, the following changes occur:
- `APP` was previously a ContextTemplate, and is now a dictionary of contexts indexed by name. Developers who implemented this context should replace `Contexts.APP(...)` by `Contexts.app(...)`.
- Removed the `ENV_PATCH` filter, which was for internal use only anyway.
    - The `PLUGIN_LOADED` ActionTemplate is now an Action which takes a single argument (the plugin name).
- 💥[Refactor] We refactored the hooks API further by removing the static hook indexes and the hooks names. As a consequence, the syntactic sugar functions from the "filters" and "actions" modules were all removed: `get`, `add*`, `iterate*`, `apply*`, `do*`, etc.
- 💥[Deprecation] The obsolete filters `COMMANDS_PRE_INIT` and `COMMANDS_INIT` have been removed. Plugin developers should instead use `CLI_DO_INIT_TASKS` (with suitable priorities).
- 💥[Feature] The "openedx" Docker image is no longer built with docker-compose in development on `tutor dev start`. This used to be the case to make sure that it was always up-to-date, but it introduced a discrepancy in how images were build (`docker compose build` vs `docker build`). As a consequence:
- The "openedx" Docker image in development can be built with `tutor images build openedx-dev`.
- The `tutor dev/local start --skip-build` option is removed. It is replaced by opt-in `--build`.
- [Improvement] The `IMAGES_BUILD` filter now supports relative paths as strings, and not just as tuples of strings.
- [Improvement] Auto-complete the image names in the `images build/pull/push/printtag` commands.
- [Deprecation] For local installations, Docker v20.10.15 and Compose v2.0.0 are now the minimum required versions.
- [Bugfix] Make `tutor config printvalue ...` print actual yaml-formatted values, such as "true" and "null".
- 💥[Improvement] MongoDB was upgraded to 4.4.
- 💥[Improvement] Deprecate the `RUN_LMS` and `RUN_CMS` tutor settings, which should be mostly unused. (by @regisb)
- [Improvement] Greatly simplify ownership of bind-mounted volumes with docker-compose. Instead of running one service per application, we run just a single "permissions" service. This change should be backward-compatible. (by @regisb)
- [Feature] Add `config save -a/--append -A/--remove` options to conveniently append and remove values to/from list entries. (by @regisb)
- [Improvement] Considerably accelerate building the "openedx" Docker image with `RUN --mount=type=cache`. This feature is only for Docker with BuildKit, so detection is performed at build-time. (by @regisb)
- [Improvement] Automatically pull Docker image cache from the remote registry. Again, this will considerably improve image build-time, particularly in "cold-start" scenarios, where the images need to be built from scratch. The registry cache can be disabled with the `tutor images build --no-registry-cache` option. (by @regisb)
- [Feature] Automatically mount host folders *at build time*. This is a really important feature, as it allows us to transparently build images using local forks of remote repositories. (by @regisb)
- 💥[Deprecation] Remove the various `--mount` options. These options are replaced by persistent mounts, which are managed by the `tutor mounts` commands. (by @regisb)
- [Feature] Add the `do importdemocourse --repo-dir=...` option, to import courses from subdirectories of git repositories. This allows us to import the openedx-test-course in Palm with: `tutor local do importdemocourse --repo=https://github.com/openedx/openedx-test-course --version=open-release/palm.master --repo-dir=test-course/course`. (by @regisb)
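The hooks refactor described above trades named hook lookups for plain filter and action objects that plugins reference directly. As a rough illustrative sketch (not Tutor's actual implementation), a callback-based filter can be as small as this:

```python
# Illustrative sketch of a callback-based filter object, in the spirit of the
# simplified hooks API described above (not Tutor's actual implementation).
import typing as t


class Filter:
    """Holds callbacks that each transform a value in turn."""

    def __init__(self) -> None:
        self.callbacks: t.List[t.Callable[[t.Any], t.Any]] = []

    def add(self) -> t.Callable:
        # Used as a decorator: @MY_FILTER.add()
        def decorator(func: t.Callable) -> t.Callable:
            self.callbacks.append(func)
            return func

        return decorator

    def apply(self, value: t.Any) -> t.Any:
        # Pass the value through every registered callback, in order.
        for callback in self.callbacks:
            value = callback(value)
        return value


# Plugins keep references to filter instances instead of looking hooks up by name:
MOUNTED_DIRECTORIES = Filter()


@MOUNTED_DIRECTORIES.add()
def _add_edx_platform(directories: list) -> list:
    return directories + ["edx-platform"]


print(MOUNTED_DIRECTORIES.apply([]))  # prints: ['edx-platform']
```

The filter name and callback here are made up for the example; the point is only that registration and application become ordinary method calls on an object.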
<a id='changelog-15.3.7'></a>
## v15.3.7 (2023-06-13)

View File

@@ -10,7 +10,7 @@
# Because this image is still experimental, and we are not quite sure if it's going to
# be very useful, we do not provide any usage documentation.
FROM docker.io/python:3.7-slim-stretch
FROM docker.io/python:3.8-slim-stretch
# As per https://github.com/docker/compose/issues/3918
COPY --from=library/docker:19.03 /usr/local/bin/docker /usr/bin/docker

View File

@@ -91,7 +91,6 @@ bootstrap-dev-plugins: bootstrap-dev ## Install dev requirements and all support
pull-base-images: # Manually pull base images
docker image pull docker.io/ubuntu:20.04
docker image pull docker.io/python:3.7-alpine
ci-info: ## Print info about environment
python --version

View File

@@ -35,7 +35,7 @@ autodoc_typehints = "description"
# For the life of me I can't get the docs to compile in nitpicky mode without these
# ignore statements. You are most welcome to try and remove them.
# To make matters worse, some ignores are only required for some versions of Python,
# from 3.7 to 3.10...
# from 3.8 to 3.10...
nitpick_ignore = [
# Sphinx does not handle ParamSpec arguments
("py:class", "T.args"),
@@ -48,8 +48,6 @@ nitpick_ignore = [
("py:class", "t.Callable"),
("py:class", "t.Iterator"),
("py:class", "t.Optional"),
# python 3.7
("py:class", "Concatenate"),
# python 3.10
("py:class", "NoneType"),
("py:class", "click.core.Command"),
@@ -57,8 +55,6 @@ nitpick_ignore = [
# Resolve type aliases here
# https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html#confval-autodoc_type_aliases
autodoc_type_aliases: dict[str, str] = {
"T1": "tutor.core.hooks.filters.T1",
"L": "tutor.core.hooks.filters.L",
# python 3.10
"T": "tutor.core.hooks.actions.T",
"T2": "tutor.core.hooks.filters.T2",
@@ -132,14 +128,12 @@ def youtube(
return [
docutils.nodes.raw(
"",
"""
f"""
<iframe width="560" height="315"
src="https://www.youtube-nocookie.com/embed/{video_id}"
frameborder="0"
allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen>
</iframe>""".format(
video_id=video_id
),
</iframe>""",
format="html",
)
]

View File

@@ -40,8 +40,6 @@ With an up-to-date environment, Tutor is ready to launch an Open edX platform an
Individual service activation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- ``RUN_LMS`` (default: ``true``)
- ``RUN_CMS`` (default: ``true``)
- ``RUN_ELASTICSEARCH`` (default: ``true``)
- ``RUN_MONGODB`` (default: ``true``)
- ``RUN_MYSQL`` (default: ``true``)
@@ -67,27 +65,35 @@ This configuration parameter defines the name of the Docker image to run for the
This configuration parameter defines the name of the Docker image to run the development version of the lms and cms containers. By default, the Docker image tag matches the Tutor version it was built with.
- ``DOCKER_IMAGE_CADDY`` (default: ``"docker.io/caddy:2.6.2"``)
This configuration parameter defines which Caddy Docker image to use.
- ``DOCKER_IMAGE_ELASTICSEARCH`` (default: ``"docker.io/elasticsearch:7.10.1"``)
- ``DOCKER_IMAGE_ELASTICSEARCH`` (default: ``"docker.io/elasticsearch:7.17.9"``)
This configuration parameter defines which Elasticsearch Docker image to use.
- ``DOCKER_IMAGE_MONGODB`` (default: ``"docker.io/mongo:4.2.24"``)
- ``DOCKER_IMAGE_MONGODB`` (default: ``"docker.io/mongo:4.4.22"``)
This configuration parameter defines which MongoDB Docker image to use.
- ``DOCKER_IMAGE_MYSQL`` (default: ``"docker.io/mysql:5.7.35"``)
.. https://hub.docker.com/_/mysql/tags?page=1&name=8.0
- ``DOCKER_IMAGE_MYSQL`` (default: ``"docker.io/mysql:8.0.33"``)
This configuration parameter defines which MySQL Docker image to use.
- ``DOCKER_IMAGE_REDIS`` (default: ``"docker.io/redis:6.2.6"``)
.. https://hub.docker.com/_/redis/tags
- ``DOCKER_IMAGE_REDIS`` (default: ``"docker.io/redis:7.0.11"``)
This configuration parameter defines which Redis Docker image to use.
- ``DOCKER_IMAGE_SMTP`` (default: ``"docker.io/devture/exim-relay:4.95-r0-2``)
.. https://hub.docker.com/r/devture/exim-relay/tags
- ``DOCKER_IMAGE_SMTP`` (default: ``"docker.io/devture/exim-relay:4.96-r1-0``)
This configuration parameter defines which Simple Mail Transfer Protocol (SMTP) Docker image to use.
@@ -130,7 +136,7 @@ Open edX customisation
This defines the git repository from which you install Open edX platform code. If you run an Open edX fork with custom patches, set this to your own git repository. You may also override this configuration parameter at build time, by providing a ``--build-arg`` option.
- ``OPENEDX_COMMON_VERSION`` (default: ``"open-release/olive.4"``)
- ``OPENEDX_COMMON_VERSION`` (default: ``"open-release/palm.1"``)
This defines the default version that will be pulled from all Open edX git repositories.
@@ -150,7 +156,7 @@ These two configuration parameters define which Redis database to use for Open e
.. _openedx_extra_pip_requirements:
- ``OPENEDX_EXTRA_PIP_REQUIREMENTS`` (default: ``["openedx-scorm-xblock>=15.0.0,<16.0.0"]``)
- ``OPENEDX_EXTRA_PIP_REQUIREMENTS`` (default: ``["openedx-scorm-xblock>=16.0.0,<17.0.0"]``)
This defines extra pip packages that are going to be installed for Open edX.
@@ -353,19 +359,12 @@ Installing extra xblocks and requirements
Would you like to include custom xblocks or extra requirements in your Open edX platform? Additional requirements can be added to the ``OPENEDX_EXTRA_PIP_REQUIREMENTS`` parameter in the :ref:`config file <configuration>` or to the ``env/build/openedx/requirements/private.txt`` file. The difference between them is that the ``private.txt`` file, even though it could be used for both, :ref:`should be used for installing extra xblocks or requirements from private repositories <extra_private_xblocks>`. For instance, to include the `polling xblock from Opencraft <https://github.com/open-craft/xblock-poll/>`_:
- add the following to the ``config.yml``::
tutor config save --append OPENEDX_EXTRA_PIP_REQUIREMENTS=git+https://github.com/open-craft/xblock-poll.git
OPENEDX_EXTRA_PIP_REQUIREMENTS:
- "git+https://github.com/open-craft/xblock-poll.git"
.. warning::
Specifying extra requirements through ``config.yml`` overwrites :ref:`the default extra requirements<openedx_extra_pip_requirements>`. You might need to add them to the list if your configuration depends on them.
- or add the dependency to ``private.txt``::
Alternatively, add the dependency to ``private.txt``::
echo "git+https://github.com/open-craft/xblock-poll.git" >> "$(tutor config printroot)/env/build/openedx/requirements/private.txt"
Then, the ``openedx`` docker image must be rebuilt::
tutor images build openedx
@@ -404,14 +403,14 @@ If you don't create your fork from this tag, you *will* have important compatibi
- Do not try to run a fork from an older (pre-Olive) version of edx-platform: this will simply not work.
- Do not try to run a fork from the edx-platform master branch: there is a 99% probability that it will fail.
- Do not try to run a fork from the open-release/olive.master branch: Tutor will attempt to apply security and bug fix patches that might already be included in the open-release/olive.master but which were not yet applied to the latest release tag. Patch application will thus fail if you base your fork from the open-release/olive.master branch.
- Do not try to run a fork from the open-release/palm.master branch: Tutor will attempt to apply security and bug fix patches that might already be included in the open-release/palm.master but which were not yet applied to the latest release tag. Patch application will thus fail if you base your fork from the open-release/palm.master branch.
.. _i18n:
Adding custom translations
~~~~~~~~~~~~~~~~~~~~~~~~~~
If you are not running Open edX in English (``LANGUAGE_CODE`` default: ``"en"``), chances are that some strings will not be properly translated. In most cases, this is because not enough contributors have helped translate Open edX into your language. It happens! With Tutor, available translated languages include those that come bundled with `edx-platform <https://github.com/openedx/edx-platform/tree/open-release/olive.master/conf/locale>`__ as well as those from `openedx-i18n <https://github.com/openedx/openedx-i18n/tree/master/edx-platform/locale>`__.
If you are not running Open edX in English (``LANGUAGE_CODE`` default: ``"en"``), chances are that some strings will not be properly translated. In most cases, this is because not enough contributors have helped translate Open edX into your language. It happens! With Tutor, available translated languages include those that come bundled with `edx-platform <https://github.com/openedx/edx-platform/tree/open-release/palm.master/conf/locale>`__ as well as those from `openedx-i18n <https://github.com/openedx/openedx-i18n/tree/master/edx-platform/locale>`__.
Tutor offers a relatively simple mechanism to add custom translations to the openedx Docker image. You should create a folder that corresponds to your language code in the "build/openedx/locale" folder of the Tutor environment. This folder should contain a "LC_MESSAGES" folder. For instance::
@@ -432,9 +431,9 @@ Then, add a "django.po" file there that will contain your custom translations::
.. warning::
Don't forget to specify the file ``Content-Type`` when adding message strings with non-ASCII characters; otherwise a ``UnicodeDecodeError`` will be raised during compilation.
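For instance, a minimal "django.po" with the required ``Content-Type`` declaration might look like the following (the message strings are hypothetical, for illustration only):

```po
# Hypothetical excerpt of a custom django.po file. The Content-Type header
# below is what prevents a UnicodeDecodeError on non-ASCII message strings.
msgid ""
msgstr ""
"Content-Type: text/plain; charset=utf-8\n"

msgid "Sign in"
msgstr "Se connecter"
```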
The "String to translate" part should match *exactly* the string that you would like to translate. You cannot make it up! The best way to find this string is to copy-paste it from the `upstream django.po file for the English language <https://github.com/openedx/edx-platform/blob/open-release/olive.master/conf/locale/en/LC_MESSAGES/django.po>`__.
The "String to translate" part should match *exactly* the string that you would like to translate. You cannot make it up! The best way to find this string is to copy-paste it from the `upstream django.po file for the English language <https://github.com/openedx/edx-platform/blob/open-release/palm.master/conf/locale/en/LC_MESSAGES/django.po>`__.
If you cannot find the string to translate in this file, then it means that you are trying to translate a string that is used in some piece of javascript code. Those strings are stored in a different file named "djangojs.po". You can check it out `in the edx-platform repo as well <https://github.com/openedx/edx-platform/blob/open-release/olive.master/conf/locale/en/LC_MESSAGES/djangojs.po>`__. Your custom javascript strings should also be stored in a "djangojs.po" file that should be placed in the same directory.
If you cannot find the string to translate in this file, then it means that you are trying to translate a string that is used in some piece of javascript code. Those strings are stored in a different file named "djangojs.po". You can check it out `in the edx-platform repo as well <https://github.com/openedx/edx-platform/blob/open-release/palm.master/conf/locale/en/LC_MESSAGES/djangojs.po>`__. Your custom javascript strings should also be stored in a "djangojs.po" file that should be placed in the same directory.
To recap, here is an example. To translate a few strings in French, both from django.po and djangojs.po, we would have the following file hierarchy::

View File

@@ -12,31 +12,25 @@ First-time setup
Firstly, either :ref:`install Tutor <install>` (for development against the named releases of Open edX) or :ref:`install Tutor Nightly <nightly>` (for development against Open edX's master branches).
Then, run one of the following in order to launch the developer platform setup process::
Then, optionally, tell Tutor to use a local fork of edx-platform::
# To use the edx-platform repository that is built into the image, run:
tutor mounts add ./edx-platform
Then, launch the developer platform setup process::
tutor images build openedx-dev
tutor dev launch
# To bind-mount and run a local clone of edx-platform, replace
# './edx-platform' with the path to the local clone and run:
tutor dev launch --mount=./edx-platform
This will perform several tasks. It will:
* build the "openedx-dev" Docker image, which is based on the "openedx" production image but is `specialized for developer usage`_ (eventually with your fork),
* stop any existing locally-running Tutor containers,
* disable HTTPS,
* set ``LMS_HOST`` to `local.overhang.io <http://local.overhang.io>`_ (a convenience domain that simply `points at 127.0.0.1 <https://dnschecker.org/#A/local.overhang.io>`_),
* prompt for a platform details (with suitable defaults),
* build an ``openedx-dev`` image, which is based on the ``openedx`` production image but is `specialized for developer usage`_,
* build an ``openedx-dev`` image,
* start LMS, CMS, supporting services, and any plugged-in services,
* ensure databases are created and migrated, and
* run service initialization scripts, such as service user creation and Waffle configuration.
Additionally, when a local clone of edx-platform is bind-mounted, it will:
@@ -55,10 +49,13 @@ Now, use the ``tutor dev ...`` command-line interface to manage the development
.. note::
Wherever the ``[--mount=./edx-platform]`` option is present, either:
If you've added your edx-platform to the bind-mounted folders, you can remove it at any time by running::
* omit it when running of the edx-platform repository built into the image, or
* substitute it with ``--mount=<path/to/edx-platform>``.
tutor mounts remove ./edx-platform
At any time, check your configuration by running::
tutor mounts list
Read more about bind-mounts :ref:`below <bind_mounts>`.
@@ -74,17 +71,17 @@ Starting the platform back up
Once first-time setup has been performed with ``launch``, the platform can be started going forward with the lighter-weight ``start -d`` command, which brings up containers *detached* (that is: in the background), but does not perform any initialization tasks::
tutor dev start -d [--mount=./edx-platform]
tutor dev start -d
Or, to start the platform with containers *attached* (that is: in the foreground, the current terminal), omit the ``-d`` flag::
tutor dev start [--mount=./edx-platform]
tutor dev start
When running containers attached, stop the platform with ``Ctrl+c``, or switch to detached mode using ``Ctrl+z``.
Finally, the platform can also be started back up with ``launch``. It will take longer than ``start``, but it will ensure that config is applied, databases are provisioned & migrated, plugins are fully initialized, and (if applicable) the bind-mounted edx-platform is set up. Notably, ``launch`` is idempotent, so it is always safe to run it again without risk to data. Including the ``--pullimages`` flag will also ensure that container images are up-to-date::
tutor dev launch [--mount=./edx-platform] --pullimages
tutor dev launch --pullimages
Debugging with breakpoints
--------------------------
@@ -92,32 +89,32 @@ Debugging with breakpoints
To debug a local edx-platform repository, add a `python breakpoint <https://docs.python.org/3/library/functions.html#breakpoint>`__ with ``breakpoint()`` anywhere in the code. Then, attach to the applicable service's container by running ``start`` (without ``-d``) followed by the service's name::
# Debugging LMS:
tutor dev start [--mount=./edx-platform] lms
tutor dev start lms
# Or, debugging CMS:
tutor dev start [--mount=./edx-platform] cms
tutor dev start cms
Running arbitrary commands
--------------------------
To run any command inside one of the containers, run ``tutor dev run [OPTIONS] SERVICE [COMMAND] [ARGS]...``. For instance, to open a bash shell in the LMS or CMS containers::
tutor dev run [--mount=./edx-platform] lms bash
tutor dev run [--mount=./edx-platform] cms bash
tutor dev run lms bash
tutor dev run cms bash
To open a python shell in the LMS or CMS, run::
tutor dev run [--mount=./edx-platform] lms ./manage.py lms shell
tutor dev run [--mount=./edx-platform] cms ./manage.py cms shell
tutor dev run lms ./manage.py lms shell
tutor dev run cms ./manage.py cms shell
You can then import edx-platform and django modules and execute python code.
To rebuild assets, you can use the ``openedx-assets`` command that ships with Tutor::
tutor dev run [--mount=./edx-platform] lms openedx-assets build --env=dev
tutor dev run lms openedx-assets build --env=dev
.. _specialized for developer usage:
Rebuilding the openedx-dev image
--------------------------------
@@ -125,15 +122,17 @@ Rebuilding the openedx-dev image
The ``openedx-dev`` Docker image is based on the same ``openedx`` image used by ``tutor local ...`` to run LMS and CMS. However, it has a few differences to make it more convenient for developers:
- The user that runs inside the container has the same UID as the user on the host, to avoid permission problems inside mounted volumes (and in particular in the edx-platform repository).
- Additional Python and system requirements are installed for convenient debugging: `ipython <https://ipython.org/>`__, `ipdb <https://pypi.org/project/ipdb/>`__, vim, telnet.
- The edx-platform `development requirements <https://github.com/openedx/edx-platform/blob/open-release/olive.master/requirements/edx/development.in>`__ are installed.
- The edx-platform `development requirements <https://github.com/openedx/edx-platform/blob/open-release/palm.master/requirements/edx/development.in>`__ are installed.
If you are using a custom ``openedx`` image, then you will need to rebuild ``openedx-dev`` every time you modify ``openedx``. To do so, run::
tutor dev dc build lms
tutor images build openedx-dev
Alternatively, the image will be automatically rebuilt every time you run::
tutor dev launch
.. _bind_mounts:
@@ -143,35 +142,76 @@ Sharing directories with containers
It may sometimes be convenient to mount container directories on the host, for instance: for editing and debugging. Tutor provides different solutions to this problem.
.. _mount_option:
.. _persistent_mounts:
Bind-mount volumes with ``--mount``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Persistent bind-mounted volumes with ``tutor mounts``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ``launch``, ``run``, ``init`` and ``start`` subcommands of ``tutor dev`` and ``tutor local`` support the ``-m/--mount`` option (see :option:`tutor dev start -m`) which can take two different forms. The first is explicit::
``tutor mounts`` is a set of Tutor commands to manage bind-mounted host directories. Directories are mounted *both* at build time and run time:
tutor dev start --mount=lms:/path/to/edx-platform:/openedx/edx-platform lms
- At build time: some of the host directories will be added to the `Docker build context <https://docs.docker.com/engine/reference/commandline/buildx_build/#build-context>`__. This makes it possible to transparently build a Docker image using a locally checked-out repository.
- At run time: host directories will be bind-mounted in running containers, using either an automatic or a manual configuration.
And the second is implicit::
tutor dev start --mount=/path/to/edx-platform lms
After some directories have been added with ``tutor mounts add``, all ``tutor dev`` and ``tutor local`` commands will make use of these bind-mount volumes.
With the explicit form, the ``--mount`` option means "bind-mount the host folder /path/to/edx-platform to /openedx/edx-platform in the lms container".
Values passed to ``tutor mounts add ...`` can take one of two forms. The first is explicit::
If you use the explicit format, you will quickly realise that you usually want to bind-mount folders in multiple containers at a time. For instance, you will want to bind-mount the edx-platform repository in the "cms" container. To do that, write instead::
tutor mounts add lms:/path/to/edx-platform:/openedx/edx-platform
tutor dev start --mount=lms,cms:/path/to/edx-platform:/openedx/edx-platform lms
The second is implicit::
This command line can become cumbersome and inconvenient to work with. But Tutor can be smart about bind-mounting folders to the right containers in the right place when you use the implicit form of the ``--mount`` option. For instance, the following commands are equivalent::
tutor mounts add /path/to/edx-platform
# Explicit form
tutor dev start --mount=lms,lms-worker,lms-job,cms,cms-worker,cms-job:/path/to/edx-platform:/openedx/edx-platform lms
# Implicit form
tutor dev start --mount=/path/to/edx-platform lms
With the explicit form, the value means "bind-mount the host folder /path/to/edx-platform to /openedx/edx-platform in the lms container at run time".
So, when should you *not* be using the implicit form? That would be when Tutor does not know where to bind-mount your host folders. For instance, if you wanted to bind-mount your edx-platform virtual environment located in ``~/venvs/edx-platform``, you should not write ``--mount=~/venvs/edx-platform``, because that folder would be mounted in a way that would override the edx-platform repository in the container. Instead, you should write::
If you use the explicit format, you will quickly realise that you usually want to bind-mount folders in multiple containers at a time. For instance, you will want to bind-mount the edx-platform repository in the "cms" container, but also the "lms-worker" and "cms-worker" containers. To do that, write instead::
tutor dev start --mount=lms:~/venvs/edx-platform:/openedx/venv lms
    # each service is added to a comma-separated list
tutor mounts add lms,cms,lms-worker,cms-worker:/path/to/edx-platform:/openedx/edx-platform
This command line is a bit cumbersome. In addition, with this explicit form, the edx-platform repository will *not* be added to the build context at build time. But Tutor can be smart about bind-mounting folders to the right containers in the right place when you use the implicit form of the ``tutor mounts add`` command. For instance, the following implicit form can be used instead of the explicit form above::
tutor mounts add /path/to/edx-platform
With this implicit form, the edx-platform repo will be bind-mounted in the containers at run time, just like with the explicit form. But in addition, the edx-platform will also automatically be added to the Docker image at build time.
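To make the two forms concrete, here is a rough sketch (not Tutor's actual parsing code) of how an explicit ``service1,service2:/host/path:/container/path`` value could be told apart from an implicit ``/host/path`` value:

```python
# Illustrative sketch (not Tutor's actual implementation) of distinguishing and
# decomposing explicit vs implicit mount values.
def parse_mount(value: str) -> dict:
    if ":" in value:
        # Explicit form: "service1,service2:/host/path:/container/path"
        services, host_path, container_path = value.split(":", 2)
        return {
            "services": services.split(","),
            "host_path": host_path,
            "container_path": container_path,
        }
    # Implicit form: "/host/path". Tutor decides the services, the container
    # paths and the build-time behaviour by itself.
    return {"host_path": value}


print(parse_mount("lms,cms:/path/to/edx-platform:/openedx/edx-platform"))
print(parse_mount("/path/to/edx-platform"))
```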
To check whether you have used the correct syntax, you should run ``tutor mounts list``. This command will indicate whether your folders will be bind-mounted at build time, run time, or both. For instance::
$ tutor mounts add /path/to/edx-platform
$ tutor mounts list
- name: /home/data/regis/projets/overhang/repos/edx/edx-platform
build_mounts:
- image: openedx
context: edx-platform
- image: openedx-dev
context: edx-platform
compose_mounts:
- service: lms
container_path: /openedx/edx-platform
- service: cms
container_path: /openedx/edx-platform
- service: lms-worker
container_path: /openedx/edx-platform
- service: cms-worker
container_path: /openedx/edx-platform
- service: lms-job
container_path: /openedx/edx-platform
- service: cms-job
container_path: /openedx/edx-platform
So, when should you *not* be using the implicit form? That would be when Tutor does not know where to bind-mount your host folders. For instance, if you wanted to bind-mount your edx-platform virtual environment located in ``~/venvs/edx-platform``, you should not write ``mounts add ~/venvs/edx-platform``, because that folder would be mounted in a way that would override the edx-platform repository in the container. Instead, you should write::
tutor mounts add lms:~/venvs/edx-platform:/openedx/venv
Verify the configuration with the ``list`` command::
$ tutor mounts list
- name: lms:~/venvs/edx-platform:/openedx/venv
build_mounts: []
compose_mounts:
- service: lms
container_path: /openedx/venv
.. note:: Remember to setup your edx-platform repository for development! See :ref:`edx_platform_dev_env`.
@@ -182,16 +222,16 @@ Sometimes, you may want to modify some of the files inside a container for which
tutor dev copyfrom lms /openedx/venv ~
Then, bind-mount that folder back in the container with the ``--mount`` option (described :ref:`above <mount_option>`)::
Then, bind-mount that folder back in the container with the ``MOUNTS`` setting (described :ref:`above <persistent_mounts>`)::
tutor dev start --mount lms:~/venv:/openedx/venv lms
tutor mounts add lms:~/venv:/openedx/venv
You can then edit the files in ``~/venv`` on your local filesystem and see the changes live in your container.
You can then edit the files in ``~/venv`` on your local filesystem and see the changes live in your "lms" container.
Manual bind-mount to any directory
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. warning:: Manually bind-mounting volumes with the ``--volume`` option makes it difficult to simultaneously bind-mount to multiple containers. Also, the ``--volume`` options are not compatible with ``start`` commands. For an alternative, see the :ref:`mount option <mount_option>`.
.. warning:: Manually bind-mounting volumes with the ``--volume`` option makes it difficult to simultaneously bind-mount to multiple containers. Also, the ``--volume`` options are not compatible with ``start`` commands. For an alternative, see the :ref:`persistent mounts <persistent_mounts>`.
The above solution may not work for you if you already have an existing directory, outside of the "volumes/" directory, which you would like mounted in one of your containers. For instance, you may want to mount your copy of the `edx-platform <https://github.com/openedx/edx-platform/>`__ repository. In such cases, you can simply use the ``-v/--volume`` `Docker option <https://docs.docker.com/storage/volumes/#choose-the--v-or---mount-flag>`__::
@@ -200,7 +240,7 @@ The above solution may not work for you if you already have an existing director
Override docker-compose volumes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The above solutions require that you explicitly pass the ``-m/--mount`` options to every ``run``, ``start`` or ``init`` command, which may be inconvenient. To address these issues, you can create a ``docker-compose.override.yml`` file that will specify custom volumes to be used with all ``dev`` commands::
Adding items to the ``MOUNTS`` setting effectively adds new bind-mount volumes to the ``docker-compose.yml`` files. But you might want to have more control over your volumes, such as adding read-only options, or customising other fields of the different services. To address these issues, you can create a ``docker-compose.override.yml`` file that will specify custom volumes to be used with all ``dev`` commands::
vim "$(tutor config printroot)/env/dev/docker-compose.override.yml"
@@ -221,7 +261,7 @@ You are then free to bind-mount any directory to any container. For instance, to
volumes:
- /path/to/edx-platform:/openedx/edx-platform
This override file will be loaded when running any ``tutor dev ..`` command. The edx-platform repo mounted at the specified path will be automatically mounted inside all LMS and CMS containers. With this file, you should no longer specify the ``-m/--mount`` option from the command line.
This override file will be loaded when running any ``tutor dev ..`` command. The edx-platform repo mounted at the specified path will be automatically mounted inside all LMS and CMS containers.
.. note::
The ``tutor local`` commands load the ``docker-compose.override.yml`` file from ``$(tutor config printroot)/env/local/docker-compose.override.yml``. One-time jobs from initialisation commands load the ``local/docker-compose.jobs.override.yml`` and ``dev/docker-compose.jobs.override.yml`` files.
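Pieced together, a complete override file along these lines might read as follows (the service name and host path are illustrative; adapt them to your setup):

```yaml
# $(tutor config printroot)/env/dev/docker-compose.override.yml
services:
  lms:
    volumes:
      # append ":ro" to make the bind-mount read-only
      - /path/to/edx-platform:/openedx/edx-platform:ro
```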

@@ -38,7 +38,7 @@ The `devstack <https://github.com/openedx/devstack>`_ is meant for development o
Is Tutor officially supported by edX?
-------------------------------------
Yes: as of the Open edX Maple release (December 9th 2021), Tutor is the only officially supported installation method for Open edX: see the `official installation instructions <https://edx.readthedocs.io/projects/edx-installing-configuring-and-running/en/open-release-olive.master/installation/index.html>`__.
Yes: as of the Open edX Maple release (December 9th 2021), Tutor is the only officially supported installation method for Open edX: see the `official installation instructions <https://edx.readthedocs.io/projects/edx-installing-configuring-and-running/en/open-release-palm.master/installation/index.html>`__.
What features are missing from Tutor?
-------------------------------------

@@ -12,8 +12,8 @@ Requirements
* Architecture: support for ARM64 is a work-in-progress. See `this issue <https://github.com/overhangio/tutor/issues/510>`__.
* Required software:
- `Docker <https://docs.docker.com/engine/installation/>`__: v18.06.0+
- `Docker Compose <https://docs.docker.com/compose/install/>`__: v1.22.0+
- `Docker <https://docs.docker.com/engine/installation/>`__: v20.10.15+
- `Docker Compose <https://docs.docker.com/compose/install/>`__: v2.0.0+
.. warning::
Do not attempt to simply run ``apt-get install docker docker-compose`` on older Ubuntu platforms, such as 16.04 (Xenial), as you will get older versions of these utilities.

@@ -22,7 +22,7 @@ Yes :) This is what happens when you run ``tutor local launch``:
2. Configuration files are generated from templates.
3. Docker images are downloaded.
4. Docker containers are provisioned.
5. A full, production-ready Open edX platform (`Olive <https://edx.readthedocs.io/projects/edx-installing-configuring-and-running/en/open-release-olive.master/platform_releases/olive.html>`__ release) is run with docker-compose.
5. A full, production-ready Open edX platform (`Palm <https://edx.readthedocs.io/projects/edx-installing-configuring-and-running/en/open-release-palm.master/platform_releases/palm.html>`__ release) is run with docker-compose.
The whole procedure should require less than 10 minutes, on a server with good bandwidth. Note that your host environment will not be affected in any way, since everything runs inside docker containers. Root access is not even necessary.

@@ -9,9 +9,6 @@ Actions are one of the two types of hooks (the other being :ref:`filters`) that
.. autoclass:: tutor.core.hooks.Action
:members:
.. autoclass:: tutor.core.hooks.ActionTemplate
:members:
.. The following are only to ensure that the docs build without warnings
.. class:: tutor.core.hooks.actions.T
.. class:: tutor.types.Config

@@ -9,9 +9,6 @@ Filters are one of the two types of hooks (the other being :ref:`actions`) that
.. autoclass:: tutor.core.hooks.Filter
:members:
.. autoclass:: tutor.core.hooks.FilterTemplate
:members:
.. The following are only to ensure that the docs build without warnings
.. class:: tutor.core.hooks.filters.T1
.. class:: tutor.core.hooks.filters.T2

@@ -7,10 +7,10 @@ Plugin indexes are a great way to have your plugins discovered by other users. P
Index file paths
================
A plugin index is a yaml-formatted file. It can be stored on the web or on your computer. In both cases, the index file location must end with "<current release name>/plugins.yml". For instance, the following are valid index locations if you run the Open edX "Olive" release:
A plugin index is a yaml-formatted file. It can be stored on the web or on your computer. In both cases, the index file location must end with "<current release name>/plugins.yml". For instance, the following are valid index locations if you run the Open edX "Palm" release:
- https://overhang.io/tutor/main/olive/plugins.yml
- ``/path/to/your/local/index/olive/plugins.yml``
- https://overhang.io/tutor/main/palm/plugins.yml
- ``/path/to/your/local/index/palm/plugins.yml``
To add either of these indexes, run the ``tutor plugins index add`` command without the suffix. For instance::
@@ -106,9 +106,9 @@ Manage plugins in development
Plugin developers and maintainers often want to install local versions of their plugins. They usually achieve this with ``pip install -e /path/to/tutor-plugin``. We can improve that workflow by creating an index for local plugins::
# Create the plugin index directory
mkdir -p ~/localindex/olive/
mkdir -p ~/localindex/palm/
# Edit the index
vim ~/localindex/olive/plugins.yml
vim ~/localindex/palm/plugins.yml
Add the following to the index::

@@ -203,6 +203,25 @@ File: ``local/docker-compose.jobs.yml``
File: ``local/docker-compose.yml``
.. patch:: local-docker-compose-permissions-command
``local-docker-compose-permissions-command``
============================================
File: ``apps/permissions/setowners.sh``
Add commands to this script to set ownership of bind-mounted docker-compose volumes at runtime. See :patch:`local-docker-compose-permissions-volumes`.
.. patch:: local-docker-compose-permissions-volumes
``local-docker-compose-permissions-volumes``
============================================
File: ``local/docker-compose.yml``
Add bind-mounted volumes to this patch to set their owners properly. See :patch:`local-docker-compose-permissions-command`.
.. patch:: local-docker-compose-prod-services
``local-docker-compose-prod-services``

@@ -7,7 +7,6 @@ Tutor can be used on ARM64 systems, although no official ARM64 docker images are
.. note:: There are generally two ways to run Tutor on an ARM system - using emulation (via qemu or Rosetta 2) to run x86_64 images or running native ARM images. Since emulation can be noticeably slower (typically 20-100% slower depending on the emulation method), this tutorial aims to use native images where possible.
Building the images
-------------------
@@ -27,30 +26,14 @@ Then, build the "openedx" and "permissions" images::
tutor images build openedx permissions
If you want to use Tutor as an Open edX development environment, you should also build the development images::
If you want to use Tutor as an Open edX development environment, you should also build the development image::
tutor dev dc build lms
Change the database server
--------------------------
The version of MySQL that Open edX uses by default (5.7) does not support the ARM architecture. You need to tell Tutor to use MySQL 8.0, which does support the ARM architecture and which has been supported by Open edX since the "Nutmeg" release.
Configure Tutor to use MySQL 8::
tutor config save --set DOCKER_IMAGE_MYSQL=docker.io/mysql:8.0
(If you need to run an older release of Open edX on ARM64, you can try using `mariadb:10.4` although it's not officially supported nor recommended for production.)
Finish setup and start Tutor
----------------------------
tutor images build openedx-dev # this will be automatically done by `tutor dev launch`
From this point on, use Tutor as normal. For example, start Open edX and run migrations with::
tutor local start -d
tutor local do init
tutor local launch
Or for a development environment::
tutor dev start -d
tutor dev do init
tutor dev launch

@@ -1,5 +1,5 @@
#
# This file is autogenerated by pip-compile with Python 3.7
# This file is autogenerated by pip-compile with Python 3.10
# by the following command:
#
# pip-compile requirements/base.in
@@ -20,8 +20,6 @@ google-auth==2.19.1
# via kubernetes
idna==3.4
# via requests
importlib-metadata==6.6.0
# via click
jinja2==3.1.2
# via -r requirements/base.in
kubernetes==26.1.0
@@ -63,12 +61,9 @@ six==1.16.0
# python-dateutil
tomli==2.0.1
# via mypy
typed-ast==1.5.4
# via mypy
typing-extensions==4.6.3
# via
# -r requirements/base.in
# importlib-metadata
# mypy
urllib3==1.26.16
# via
@@ -77,8 +72,6 @@ urllib3==1.26.16
# requests
websocket-client==1.5.2
# via kubernetes
zipp==3.15.0
# via importlib-metadata
# The following packages are considered to be unsafe in a requirements file:
# setuptools

@@ -9,7 +9,7 @@ twine
# doc requirement is lagging behind
# https://github.com/readthedocs/sphinx_rtd_theme/issues/1323
docutils<0.18
docutils<0.19
# Types packages
types-docutils

@@ -1,5 +1,5 @@
#
# This file is autogenerated by pip-compile with Python 3.7
# This file is autogenerated by pip-compile with Python 3.10
# by the following command:
#
# pip-compile requirements/dev.in
@@ -48,7 +48,7 @@ cryptography==41.0.1
# via secretstorage
dill==0.3.6
# via pylint
docutils==0.17.1
docutils==0.18.1
# via
# -r requirements/dev.in
# readme-renderer
@@ -62,16 +62,9 @@ idna==3.4
# requests
importlib-metadata==6.6.0
# via
# -r requirements/base.txt
# attrs
# build
# click
# keyring
# pyinstaller
# twine
importlib-resources==5.12.0
# via keyring
isort==5.11.5
isort==5.12.0
# via pylint
jaraco-classes==3.2.3
# via keyring
@@ -206,12 +199,6 @@ tomlkit==0.11.8
# via pylint
twine==4.0.2
# via -r requirements/dev.in
typed-ast==1.5.4
# via
# -r requirements/base.txt
# astroid
# black
# mypy
types-docutils==0.20.0.1
# via -r requirements/dev.in
types-pyyaml==6.0.12.10
@@ -222,13 +209,7 @@ typing-extensions==4.6.3
# via
# -r requirements/base.txt
# astroid
# black
# importlib-metadata
# markdown-it-py
# mypy
# platformdirs
# pylint
# rich
urllib3==1.26.16
# via
# -r requirements/base.txt
@@ -247,10 +228,7 @@ wheel==0.40.0
wrapt==1.15.0
# via astroid
zipp==3.15.0
# via
# -r requirements/base.txt
# importlib-metadata
# importlib-resources
# via importlib-metadata
# The following packages are considered to be unsafe in a requirements file:
# pip

@@ -1,5 +1,5 @@
#
# This file is autogenerated by pip-compile with Python 3.7
# This file is autogenerated by pip-compile with Python 3.10
# by the following command:
#
# pip-compile requirements/docs.in
@@ -42,11 +42,6 @@ idna==3.4
# requests
imagesize==1.4.1
# via sphinx
importlib-metadata==6.6.0
# via
# -r requirements/base.txt
# click
# sphinx
jinja2==3.1.2
# via
# -r requirements/base.txt
@@ -86,8 +81,6 @@ python-dateutil==2.8.2
# via
# -r requirements/base.txt
# kubernetes
pytz==2023.3
# via babel
pyyaml==6.0
# via
# -r requirements/base.txt
@@ -114,7 +107,7 @@ six==1.16.0
# python-dateutil
snowballstemmer==2.2.0
# via sphinx
sphinx==5.3.0
sphinx==6.2.1
# via
# -r requirements/docs.in
# sphinx-click
@@ -124,11 +117,11 @@ sphinx-click==4.4.0
# via -r requirements/docs.in
sphinx-rtd-theme==1.2.1
# via -r requirements/docs.in
sphinxcontrib-applehelp==1.0.2
sphinxcontrib-applehelp==1.0.4
# via sphinx
sphinxcontrib-devhelp==1.0.2
# via sphinx
sphinxcontrib-htmlhelp==2.0.0
sphinxcontrib-htmlhelp==2.0.1
# via sphinx
sphinxcontrib-jquery==4.1
# via sphinx-rtd-theme
@@ -142,14 +135,9 @@ tomli==2.0.1
# via
# -r requirements/base.txt
# mypy
typed-ast==1.5.4
# via
# -r requirements/base.txt
# mypy
typing-extensions==4.6.3
# via
# -r requirements/base.txt
# importlib-metadata
# mypy
urllib3==1.26.16
# via
@@ -161,10 +149,6 @@ websocket-client==1.5.2
# via
# -r requirements/base.txt
# kubernetes
zipp==3.15.0
# via
# -r requirements/base.txt
# importlib-metadata
# The following packages are considered to be unsafe in a requirements file:
# setuptools

@@ -1,11 +1,11 @@
# change version ranges when upgrading from olive
tutor-android>=15.0.0,<16.0.0
tutor-discovery>=15.0.0,<16.0.0
tutor-ecommerce>=15.0.0,<16.0.0
tutor-forum>=15.0.0,<16.0.0
tutor-license>=15.0.0,<16.0.0
tutor-mfe>=15.0.0,<16.0.0
tutor-minio>=15.0.0,<16.0.0
tutor-notes>=15.0.0,<16.0.0
tutor-webui>=15.0.0,<16.0.0
tutor-xqueue>=15.0.0,<16.0.0
# change version ranges when upgrading from palm
tutor-android>=16.0.0,<17.0.0
tutor-discovery>=16.0.0,<17.0.0
tutor-ecommerce>=16.0.0,<17.0.0
tutor-forum>=16.0.0,<17.0.0
tutor-license>=16.0.0,<17.0.0
tutor-mfe>=16.0.0,<17.0.0
tutor-minio>=16.0.0,<17.0.0
tutor-notes>=16.0.0,<17.0.0
tutor-webui>=16.0.0,<17.0.0
tutor-xqueue>=16.0.0,<17.0.0

@@ -56,7 +56,7 @@ setup(
long_description_content_type="text/x-rst",
packages=find_packages(exclude=["tests*"]),
include_package_data=True,
python_requires=">=3.7",
python_requires=">=3.8",
install_requires=load_requirements("base.in"),
extras_require={
"full": load_requirements("plugins.txt"),
@@ -68,10 +68,10 @@ setup(
"License :: OSI Approved :: GNU Affero General Public License v3",
"Operating System :: OS Independent",
"Programming Language :: Python",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.11",
],
test_suite="tests",
)

@@ -1,88 +0,0 @@
from __future__ import annotations
import typing as t
import unittest
from io import StringIO
from unittest.mock import patch
from click.exceptions import ClickException
from tutor import hooks
from tutor.commands import compose
from tutor.commands.local import LocalContext
class ComposeTests(unittest.TestCase):
maxDiff = None # Ensure we can see long diffs of YAML files.
def test_mount_option_parsing(self) -> None:
param = compose.MountParam()
self.assertEqual(
[("lms", "/path/to/edx-platform", "/openedx/edx-platform")],
param("lms:/path/to/edx-platform:/openedx/edx-platform"),
)
self.assertEqual(
[
("lms", "/path/to/edx-platform", "/openedx/edx-platform"),
("cms", "/path/to/edx-platform", "/openedx/edx-platform"),
],
param("lms,cms:/path/to/edx-platform:/openedx/edx-platform"),
)
self.assertEqual(
[
("lms", "/path/to/edx-platform", "/openedx/edx-platform"),
("cms", "/path/to/edx-platform", "/openedx/edx-platform"),
],
param("lms, cms:/path/to/edx-platform:/openedx/edx-platform"),
)
self.assertEqual(
[
("lms", "/path/to/edx-platform", "/openedx/edx-platform"),
("lms-worker", "/path/to/edx-platform", "/openedx/edx-platform"),
],
param("lms,lms-worker:/path/to/edx-platform:/openedx/edx-platform"),
)
with self.assertRaises(ClickException):
param("lms,:/path/to/edx-platform:/openedx/edx-platform")
@patch("sys.stdout", new_callable=StringIO)
def test_compose_local_tmp_generation(self, _mock_stdout: StringIO) -> None:
"""
Ensure that docker-compose.tmp.yml is correctly generated.
"""
param = compose.MountParam()
mount_args = (
# Auto-mounting of edx-platform to lms* and cms*
param.convert_implicit_form("/path/to/edx-platform"),
# Manual mounting of some other folder to mfe and lms
param.convert_explicit_form(
"mfe,lms:/path/to/something-else:/openedx/something-else"
),
)
# Mount volumes
compose.mount_tmp_volumes(mount_args, LocalContext(""))
compose_file: dict[str, t.Any] = hooks.Filters.COMPOSE_LOCAL_TMP.apply({})
actual_services: dict[str, t.Any] = compose_file["services"]
expected_services: dict[str, t.Any] = {
"cms": {"volumes": ["/path/to/edx-platform:/openedx/edx-platform"]},
"cms-worker": {"volumes": ["/path/to/edx-platform:/openedx/edx-platform"]},
"lms": {
"volumes": [
"/path/to/edx-platform:/openedx/edx-platform",
"/path/to/something-else:/openedx/something-else",
]
},
"lms-worker": {"volumes": ["/path/to/edx-platform:/openedx/edx-platform"]},
"mfe": {"volumes": ["/path/to/something-else:/openedx/something-else"]},
}
self.assertEqual(actual_services, expected_services)
compose_jobs_file = hooks.Filters.COMPOSE_LOCAL_JOBS_TMP.apply({})
actual_jobs_services = compose_jobs_file["services"]
expected_jobs_services: dict[str, t.Any] = {
"cms-job": {"volumes": ["/path/to/edx-platform:/openedx/edx-platform"]},
"lms-job": {"volumes": ["/path/to/edx-platform:/openedx/edx-platform"]},
}
self.assertEqual(actual_jobs_services, expected_jobs_services)

@@ -1,6 +1,7 @@
import unittest
from tests.helpers import temporary_root
from tutor import config as tutor_config
from .base import TestCommandMixin
@@ -59,6 +60,45 @@ class ConfigTests(unittest.TestCase, TestCommandMixin):
self.assertEqual(0, result.exit_code)
self.assertTrue(result.output)
def test_config_append(self) -> None:
with temporary_root() as root:
self.invoke_in_root(
root, ["config", "save", "--append=TEST=value"], catch_exceptions=False
)
config1 = tutor_config.load(root)
self.invoke_in_root(
root, ["config", "save", "--append=TEST=value"], catch_exceptions=False
)
config2 = tutor_config.load(root)
self.invoke_in_root(
root, ["config", "save", "--remove=TEST=value"], catch_exceptions=False
)
config3 = tutor_config.load(root)
# Value is appended
self.assertEqual(["value"], config1["TEST"])
# Value is not appended a second time
self.assertEqual(["value"], config2["TEST"])
# Value is removed
self.assertEqual([], config3["TEST"])
def test_config_append_with_existing_default(self) -> None:
with temporary_root() as root:
self.invoke_in_root(
root,
[
"config",
"save",
"--append=OPENEDX_EXTRA_PIP_REQUIREMENTS=my-package==1.0.0",
],
catch_exceptions=False,
)
config = tutor_config.load(root)
assert isinstance(config["OPENEDX_EXTRA_PIP_REQUIREMENTS"], list)
self.assertEqual(2, len(config["OPENEDX_EXTRA_PIP_REQUIREMENTS"]))
self.assertEqual(
"my-package==1.0.0", config["OPENEDX_EXTRA_PIP_REQUIREMENTS"][1]
)
class PatchesTests(unittest.TestCase, TestCommandMixin):
def test_config_patches_list(self) -> None:

@@ -1,7 +1,7 @@
from unittest.mock import Mock, patch
from tests.helpers import PluginsTestCase, temporary_root
from tutor import images, plugins
from tutor import images, plugins, utils
from tutor.__about__ import __version__
from tutor.commands.images import ImageNotFoundError
@@ -49,7 +49,7 @@ class ImagesTests(PluginsTestCase, TestCommandMixin):
self.assertIsNone(result.exception)
self.assertEqual(0, result.exit_code)
# Note: we should update this tag whenever the mysql image is updated
image_pull.assert_called_once_with("docker.io/mysql:5.7.35")
image_pull.assert_called_once_with("docker.io/mysql:8.0.33")
def test_images_printtag_image(self) -> None:
result = self.invoke(["images", "printtag", "openedx"])
@@ -128,16 +128,29 @@ class ImagesTests(PluginsTestCase, TestCommandMixin):
"service1",
]
with temporary_root() as root:
self.invoke_in_root(root, ["config", "save"])
result = self.invoke_in_root(root, build_args)
utils.is_buildkit_enabled.cache_clear()
with patch.object(utils, "is_buildkit_enabled", return_value=False):
self.invoke_in_root(root, ["config", "save"])
result = self.invoke_in_root(root, build_args)
self.assertIsNone(result.exception)
self.assertEqual(0, result.exit_code)
image_build.assert_called()
self.assertIn("service1:1.0.0", image_build.call_args[0])
for arg in image_build.call_args[0][2:]:
# The only extra args are `--build-arg`
if arg != "--build-arg":
self.assertIn(arg, build_args)
self.assertEqual(
[
"service1:1.0.0",
"--no-cache",
"--build-arg",
"myarg=value",
"--add-host",
"host",
"--target",
"target",
"docker_args",
"--cache-from=type=registry,ref=service1:1.0.0-cache",
],
list(image_build.call_args[0][1:]),
)
def test_images_push(self) -> None:
result = self.invoke(["images", "push"])

@@ -1,4 +1,3 @@
import typing as t
import unittest
from unittest.mock import Mock, patch

@@ -8,16 +8,12 @@ class PluginActionsTests(unittest.TestCase):
def setUp(self) -> None:
self.side_effect_int = 0
def tearDown(self) -> None:
super().tearDown()
actions.clear_all(context="tests")
def run(self, result: t.Any = None) -> t.Any:
with contexts.enter("tests"):
return super().run(result=result)
def test_do(self) -> None:
action: actions.Action[int] = actions.get("test-action")
action: actions.Action[int] = actions.Action()
@action.add()
def _test_action_1(increment: int) -> None:
@@ -31,29 +27,33 @@ class PluginActionsTests(unittest.TestCase):
self.assertEqual(3, self.side_effect_int)
def test_priority(self) -> None:
@actions.add("test-action", priority=2)
action: actions.Action[[]] = actions.Action()
@action.add(priority=2)
def _test_action_1() -> None:
self.side_effect_int += 4
@actions.add("test-action", priority=1)
@action.add(priority=1)
def _test_action_2() -> None:
self.side_effect_int = self.side_effect_int // 2
# Action 2 must be performed before action 1
self.side_effect_int = 4
actions.do("test-action")
action.do()
self.assertEqual(6, self.side_effect_int)
def test_equal_priority(self) -> None:
@actions.add("test-action", priority=2)
action: actions.Action[[]] = actions.Action()
@action.add(priority=2)
def _test_action_1() -> None:
self.side_effect_int += 4
@actions.add("test-action", priority=2)
@action.add(priority=2)
def _test_action_2() -> None:
self.side_effect_int = self.side_effect_int // 2
# Action 2 must be performed after action 1
self.side_effect_int = 4
actions.do("test-action")
action.do()
self.assertEqual(4, self.side_effect_int)

@@ -7,32 +7,32 @@ from tutor.core.hooks import contexts, filters
class PluginFiltersTests(unittest.TestCase):
def tearDown(self) -> None:
super().tearDown()
filters.clear_all(context="tests")
def run(self, result: t.Any = None) -> t.Any:
with contexts.enter("tests"):
return super().run(result=result)
def test_add(self) -> None:
@filters.add("tests:count-sheeps")
filtre: filters.Filter[int, []] = filters.Filter()
@filtre.add()
def filter1(value: int) -> int:
return value + 1
value = filters.apply("tests:count-sheeps", 0)
value = filtre.apply(0)
self.assertEqual(1, value)
def test_add_items(self) -> None:
@filters.add("tests:add-sheeps")
filtre: filters.Filter[list[int], []] = filters.Filter()
@filtre.add()
def filter1(sheeps: list[int]) -> list[int]:
return sheeps + [0]
filters.add_item("tests:add-sheeps", 1)
filters.add_item("tests:add-sheeps", 2)
filters.add_items("tests:add-sheeps", [3, 4])
filtre.add_item(1)
filtre.add_item(2)
filtre.add_items([3, 4])
sheeps: list[int] = filters.apply("tests:add-sheeps", [])
sheeps: list[int] = filtre.apply([])
self.assertEqual([0, 1, 2, 3, 4], sheeps)
def test_filter_callbacks(self) -> None:
@@ -42,20 +42,20 @@ class PluginFiltersTests(unittest.TestCase):
self.assertEqual(1, callback.apply(0))
def test_filter_context(self) -> None:
filtre: filters.Filter[list[int], []] = filters.Filter()
with contexts.enter("testcontext"):
filters.add_item("test:sheeps", 1)
filters.add_item("test:sheeps", 2)
filtre.add_item(1)
filtre.add_item(2)
self.assertEqual([1, 2], filters.apply("test:sheeps", []))
self.assertEqual(
[1], filters.apply_from_context("testcontext", "test:sheeps", [])
)
self.assertEqual([1, 2], filtre.apply([]))
self.assertEqual([1], filtre.apply_from_context("testcontext", []))
def test_clear_context(self) -> None:
filtre: filters.Filter[list[int], []] = filters.Filter()
with contexts.enter("testcontext"):
filters.add_item("test:sheeps", 1)
filters.add_item("test:sheeps", 2)
filtre.add_item(1)
filtre.add_item(2)
self.assertEqual([1, 2], filters.apply("test:sheeps", []))
filters.clear("test:sheeps", context="testcontext")
self.assertEqual([2], filters.apply("test:sheeps", []))
self.assertEqual([1, 2], filtre.apply([]))
filtre.clear(context="testcontext")
self.assertEqual([2], filtre.apply([]))

tests/test_bindmount.py (new file)

@@ -0,0 +1,65 @@
from __future__ import annotations
import unittest
from tutor import bindmount
class BindmountTests(unittest.TestCase):
def test_parse_explicit(self) -> None:
self.assertEqual(
[("lms", "/path/to/edx-platform", "/openedx/edx-platform")],
bindmount.parse_explicit_mount(
"lms:/path/to/edx-platform:/openedx/edx-platform"
),
)
self.assertEqual(
[
("lms", "/path/to/edx-platform", "/openedx/edx-platform"),
("cms", "/path/to/edx-platform", "/openedx/edx-platform"),
],
bindmount.parse_explicit_mount(
"lms,cms:/path/to/edx-platform:/openedx/edx-platform"
),
)
self.assertEqual(
[
("lms", "/path/to/edx-platform", "/openedx/edx-platform"),
("cms", "/path/to/edx-platform", "/openedx/edx-platform"),
],
bindmount.parse_explicit_mount(
"lms, cms:/path/to/edx-platform:/openedx/edx-platform"
),
)
self.assertEqual(
[
("lms", "/path/to/edx-platform", "/openedx/edx-platform"),
("lms-worker", "/path/to/edx-platform", "/openedx/edx-platform"),
],
bindmount.parse_explicit_mount(
"lms,lms-worker:/path/to/edx-platform:/openedx/edx-platform"
),
)
self.assertEqual(
[("lms", "/path/to/edx-platform", "/openedx/edx-platform")],
bindmount.parse_explicit_mount(
"lms,:/path/to/edx-platform:/openedx/edx-platform"
),
)
def test_parse_implicit(self) -> None:
# Import module to make sure filter is created
# pylint: disable=import-outside-toplevel,unused-import
import tutor.commands.compose
self.assertEqual(
[
("lms", "/path/to/edx-platform", "/openedx/edx-platform"),
("cms", "/path/to/edx-platform", "/openedx/edx-platform"),
("lms-worker", "/path/to/edx-platform", "/openedx/edx-platform"),
("cms-worker", "/path/to/edx-platform", "/openedx/edx-platform"),
("lms-job", "/path/to/edx-platform", "/openedx/edx-platform"),
("cms-job", "/path/to/edx-platform", "/openedx/edx-platform"),
],
bindmount.parse_implicit_mount("/path/to/edx-platform"),
)

@@ -259,7 +259,7 @@ class CurrentVersionTests(unittest.TestCase):
) as f:
f.write(__version__)
self.assertEqual(__version__, env.current_version(root))
self.assertEqual("olive", env.get_env_release(root))
self.assertEqual("palm", env.get_env_release(root))
self.assertIsNone(env.should_upgrade_from_release(root))
self.assertTrue(env.is_up_to_date(root))

@@ -1,14 +0,0 @@
import unittest
from tutor import images
from tutor.types import Config
class ImagesTests(unittest.TestCase):
def test_get_tag(self) -> None:
config: Config = {
"DOCKER_IMAGE_OPENEDX": "registry/openedx",
"DOCKER_IMAGE_OPENEDX_DEV": "registry/openedxdev",
}
self.assertEqual("registry/openedx", images.get_tag(config, "openedx"))
self.assertEqual("registry/openedxdev", images.get_tag(config, "openedx-dev"))

@@ -92,7 +92,7 @@ class PluginsTests(PluginsTestCase):
def test_plugin_without_patches(self) -> None:
plugins_v0.DictPlugin({"name": "plugin1"})
plugins.load("plugin1")
plugins.load_all(["plugin1"])
patches = list(plugins.iter_patches("patch1"))
self.assertEqual([], patches)

@@ -41,3 +41,17 @@ class SerializeTests(unittest.TestCase):
"x=key1:\n subkey: value\nkey2:\n subkey: value"
),
)
def test_str_format(self) -> None:
self.assertEqual("true", serialize.str_format(True))
self.assertEqual("false", serialize.str_format(False))
self.assertEqual("null", serialize.str_format(None))
self.assertEqual("éü©", serialize.str_format("éü©"))
self.assertEqual("""[1, 'abcd']""", serialize.str_format([1, "abcd"]))
def test_load_str_format(self) -> None:
self.assertEqual(True, serialize.load(serialize.str_format(True)))
self.assertEqual(False, serialize.load(serialize.str_format(False)))
self.assertEqual(None, serialize.load(serialize.str_format(None)))
self.assertEqual("éü©", serialize.load(serialize.str_format("éü©")))
self.assertEqual([1, "abcd"], serialize.load(serialize.str_format([1, "abcd"])))
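The scalar formatting exercised by these tests can be sketched in plain Python. This is a local approximation for illustration only; the real ``tutor.serialize.str_format`` may be implemented differently (e.g. on top of a YAML dumper):

```python
import typing as t

def str_format(value: t.Any) -> str:
    # YAML-style scalar rendering: lowercase booleans and null,
    # strings passed through unchanged, everything else via str()
    if value is True:
        return "true"
    if value is False:
        return "false"
    if value is None:
        return "null"
    if isinstance(value, str):
        return value
    return str(value)

print(str_format(True))
print(str_format([1, "abcd"]))
```

Note that round-tripping through a YAML loader (as ``test_load_str_format`` does) works because each of these renderings is itself valid YAML.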

@@ -2,7 +2,7 @@ import os
# Increment this version number to trigger a new release. See
# docs/tutor.html#versioning for information on the versioning scheme.
__version__ = "15.3.7"
__version__ = "16.0.0"
# The version suffix will be appended to the actual version, separated by a
# dash. Use this suffix to differentiate between the actual released version and

tutor/bindmount.py (new file)

@@ -0,0 +1,76 @@
from __future__ import annotations
import os
import re
import typing as t
from functools import lru_cache
from tutor import hooks, types
def get_mounts(config: types.Config) -> list[str]:
return types.get_typed(config, "MOUNTS", list)
def iter_mounts(user_mounts: list[str], name: str) -> t.Iterable[str]:
"""
Iterate on the bind-mounts that are available to any given compose service. The list
of bind-mounts is parsed from `user_mounts` and we yield only those for service
`name`.
Calling this function multiple times makes repeated calls to the parsing functions,
but that's OK because their result is cached.
"""
for user_mount in user_mounts:
for service, host_path, container_path in parse_mount(user_mount):
if service == name:
yield f"{host_path}:{container_path}"
def parse_mount(value: str) -> list[tuple[str, str, str]]:
"""
Parser for mount arguments of the form
"service1[,service2,...]:/host/path:/container/path" (explicit) or "/host/path".
Returns a list of (service, host_path, container_path) tuples.
"""
mounts = parse_explicit_mount(value) or parse_implicit_mount(value)
return mounts
@lru_cache(maxsize=None)
def parse_explicit_mount(value: str) -> list[tuple[str, str, str]]:
"""
Argument is of the form "containers:/host/path:/container/path".
"""
# Note that this syntax does not allow us to include colon ':' characters in paths
match = re.match(
r"(?P<services>[a-zA-Z0-9-_, ]+):(?P<host_path>[^:]+):(?P<container_path>[^:]+)",
value,
)
if not match:
return []
mounts: list[tuple[str, str, str]] = []
services: list[str] = [service.strip() for service in match["services"].split(",")]
host_path = os.path.abspath(os.path.expanduser(match["host_path"]))
host_path = host_path.replace(os.path.sep, "/")
container_path = match["container_path"]
for service in services:
if service:
mounts.append((service, host_path, container_path))
return mounts
@lru_cache(maxsize=None)
def parse_implicit_mount(value: str) -> list[tuple[str, str, str]]:
"""
Argument is of the form "/path/to/host/directory"
"""
mounts: list[tuple[str, str, str]] = []
host_path = os.path.abspath(os.path.expanduser(value))
for service, container_path in hooks.Filters.COMPOSE_MOUNTS.iterate(
os.path.basename(host_path)
):
mounts.append((service, host_path, container_path))
return mounts
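The explicit-mount syntax handled by ``parse_explicit_mount`` can be illustrated with a standalone sketch. The regex is copied from the diff above; the function is re-declared locally here so the example runs without Tutor installed:

```python
from __future__ import annotations

import os
import re


def parse_explicit_mount(value: str) -> list[tuple[str, str, str]]:
    # "service1[,service2,...]:/host/path:/container/path"; colons are not
    # allowed inside paths, and empty service names are skipped.
    match = re.match(
        r"(?P<services>[a-zA-Z0-9-_, ]+):(?P<host_path>[^:]+):(?P<container_path>[^:]+)",
        value,
    )
    if not match:
        return []
    host_path = os.path.abspath(os.path.expanduser(match["host_path"]))
    host_path = host_path.replace(os.path.sep, "/")
    return [
        (service.strip(), host_path, match["container_path"])
        for service in match["services"].split(",")
        if service.strip()
    ]


# Whitespace after commas is tolerated; a bare path does not match this form.
print(parse_explicit_mount("lms, cms:/path/to/edx-platform:/openedx/edx-platform"))
print(parse_explicit_mount("/path/to/edx-platform"))
```

When the explicit form does not match, ``parse_mount`` falls back to ``parse_implicit_mount``, which asks the ``COMPOSE_MOUNTS`` filter which services want a folder of that name.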

@@ -1,61 +0,0 @@
import os
from tutor.exceptions import TutorError
from tutor.tasks import BaseComposeTaskRunner
from tutor.utils import get_user_id
def create(
runner: BaseComposeTaskRunner,
service: str,
path: str,
) -> str:
volumes_root_path = get_root_path(runner.root)
volume_name = get_name(path)
container_volumes_root_path = "/tmp/volumes"
command = """rm -rf {volumes_path}/{volume_name}
cp -r {src_path} {volumes_path}/{volume_name}
chown -R {user_id} {volumes_path}/{volume_name}""".format(
volumes_path=container_volumes_root_path,
volume_name=volume_name,
src_path=path,
user_id=get_user_id(),
)
# Create volumes root dir if it does not exist. Otherwise it is created with root owner and might not be writable
# in the container, e.g: in the dev containers.
if not os.path.exists(volumes_root_path):
os.makedirs(volumes_root_path)
runner.docker_compose(
"run",
"--rm",
"--no-deps",
"--user=0",
"--volume",
f"{volumes_root_path}:{container_volumes_root_path}",
service,
"sh",
"-e",
"-c",
command,
)
return os.path.join(volumes_root_path, volume_name)
def get_path(root: str, container_bind_path: str) -> str:
bind_basename = get_name(container_bind_path)
return os.path.join(get_root_path(root), bind_basename)
def get_name(container_bind_path: str) -> str:
# We rstrip slashes, otherwise os.path.basename returns an empty string
# We don't use basename here as it will not work on Windows
name = container_bind_path.rstrip("/").split("/")[-1]
if not name:
raise TutorError("Mounting a container root folder is not supported")
return name
def get_root_path(root: str) -> str:
return os.path.join(root, "volumes")

@@ -14,6 +14,7 @@ from tutor.commands.dev import dev
from tutor.commands.images import images_command
from tutor.commands.k8s import k8s
from tutor.commands.local import local
from tutor.commands.mounts import mounts_command
from tutor.commands.plugins import plugins_command
@ -129,7 +130,16 @@ def help_command(context: click.Context) -> None:
hooks.Filters.CLI_COMMANDS.add_items(
[images_command, config_command, local, dev, k8s, help_command, plugins_command]
[
config_command,
dev,
help_command,
images_command,
k8s,
local,
mounts_command,
plugins_command,
]
)

View File

@ -1,26 +1,26 @@
from __future__ import annotations
import os
import re
import typing as t
from copy import deepcopy
import click
from click.shell_completion import CompletionItem
from typing_extensions import TypeAlias
from tutor import bindmount
from tutor import config as tutor_config
from tutor import env as tutor_env
from tutor import fmt, hooks, serialize, utils
from tutor.commands import jobs
from tutor import fmt, hooks
from tutor import interactive as interactive_config
from tutor import utils
from tutor.commands import images, jobs
from tutor.commands.config import save as config_save_command
from tutor.commands.context import BaseTaskContext
from tutor.commands.upgrade import OPENEDX_RELEASE_NAMES
from tutor.commands.upgrade.compose import upgrade_from
from tutor.core.hooks import Filter # pylint: disable=unused-import
from tutor.exceptions import TutorError
from tutor.tasks import BaseComposeTaskRunner
from tutor.types import Config
COMPOSE_FILTER_TYPE: TypeAlias = "Filter[dict[str, t.Any], []]"
class ComposeTaskRunner(BaseComposeTaskRunner):
def __init__(self, root: str, config: Config):
@ -47,47 +47,6 @@ class ComposeTaskRunner(BaseComposeTaskRunner):
*args, "--project-name", self.project_name, *command
)
def update_docker_compose_tmp(
self,
compose_tmp_filter: COMPOSE_FILTER_TYPE,
compose_jobs_tmp_filter: COMPOSE_FILTER_TYPE,
docker_compose_tmp_path: str,
docker_compose_jobs_tmp_path: str,
) -> None:
"""
Update the contents of the docker-compose.tmp.yml and
docker-compose.jobs.tmp.yml files, which are generated at runtime.
"""
compose_base: dict[str, t.Any] = {
"version": "{{ DOCKER_COMPOSE_VERSION }}",
"services": {},
}
# 1. Apply compose_tmp filter
# 2. Render the resulting dict
# 3. Serialize to yaml
# 4. Save to disk
docker_compose_tmp: str = serialize.dumps(
tutor_env.render_unknown(
self.config, compose_tmp_filter.apply(deepcopy(compose_base))
)
)
tutor_env.write_to(
docker_compose_tmp,
docker_compose_tmp_path,
)
# Same thing but with tmp jobs
docker_compose_jobs_tmp: str = serialize.dumps(
tutor_env.render_unknown(
self.config, compose_jobs_tmp_filter.apply(deepcopy(compose_base))
)
)
tutor_env.write_to(
docker_compose_jobs_tmp,
docker_compose_jobs_tmp_path,
)
def run_task(self, service: str, command: str) -> int:
"""
Run the "{{ service }}-job" service from local/docker-compose.jobs.yml with the
@ -113,158 +72,179 @@ class ComposeTaskRunner(BaseComposeTaskRunner):
class BaseComposeContext(BaseTaskContext):
COMPOSE_TMP_FILTER: COMPOSE_FILTER_TYPE = NotImplemented
COMPOSE_JOBS_TMP_FILTER: COMPOSE_FILTER_TYPE = NotImplemented
NAME: t.Literal["local", "dev"]
def job_runner(self, config: Config) -> ComposeTaskRunner:
raise NotImplementedError
class MountParam(click.ParamType):
@click.command(help="Configure and run Open edX from scratch")
@click.option("-I", "--non-interactive", is_flag=True, help="Run non-interactively")
@click.option("-p", "--pullimages", is_flag=True, help="Update docker images")
@click.option("--skip-build", is_flag=True, help="Skip building Docker images")
@click.pass_context
def launch(
context: click.Context,
non_interactive: bool,
pullimages: bool,
skip_build: bool,
) -> None:
context_name = context.obj.NAME
run_for_prod = context_name != "dev"
utils.warn_macos_docker_memory()
# Upgrade has to run before configuration
interactive_upgrade(context, not non_interactive, run_for_prod)
interactive_configuration(context, not non_interactive, run_for_prod)
config = tutor_config.load(context.obj.root)
if not skip_build:
click.echo(fmt.title("Building Docker images"))
images_to_build = hooks.Filters.IMAGES_BUILD_REQUIRED.apply([], context_name)
if not images_to_build:
fmt.echo_info("No image to build")
context.invoke(images.build, image_names=images_to_build)
click.echo(fmt.title("Stopping any existing platform"))
context.invoke(stop)
if pullimages:
click.echo(fmt.title("Docker image updates"))
context.invoke(dc_command, command="pull")
click.echo(fmt.title("Starting the platform in detached mode"))
context.invoke(start, detach=True)
click.echo(fmt.title("Database creation and migrations"))
context.invoke(do.commands["init"])
# Print the urls of the user-facing apps
public_app_hosts = ""
for host in hooks.Filters.APP_PUBLIC_HOSTS.iterate(context_name):
public_app_host = tutor_env.render_str(
config, "{% if ENABLE_HTTPS %}https{% else %}http{% endif %}://" + host
)
public_app_hosts += f" {public_app_host}\n"
if public_app_hosts:
fmt.echo_info(
f"""The platform is now running and can be accessed at the following urls:
{public_app_hosts}"""
)
def interactive_upgrade(
context: click.Context, interactive: bool, run_for_prod: bool
) -> None:
"""
Parser for --mount arguments of the form "service1[,service2,...]:/host/path:/container/path".
Piece of code that is only used in launch.
"""
run_upgrade_from_release = tutor_env.should_upgrade_from_release(context.obj.root)
if run_upgrade_from_release is not None:
click.echo(fmt.title("Upgrading from an older release"))
if interactive:
to_release = tutor_env.get_current_open_edx_release_name()
question = f"""You are about to upgrade your Open edX platform from {run_upgrade_from_release.capitalize()} to {to_release.capitalize()}
name = "mount"
MountType = t.Tuple[str, str, str]
# Note that this syntax does not allow us to include colon ':' characters in paths
PARAM_REGEXP = (
r"(?P<services>[a-zA-Z0-9-_, ]+):(?P<host_path>[^:]+):(?P<container_path>[^:]+)"
)
It is strongly recommended to make a backup before upgrading. To do so, run:
def convert(
self,
value: str,
param: t.Optional["click.Parameter"],
ctx: t.Optional[click.Context],
) -> list["MountType"]:
mounts = self.convert_explicit_form(value) or self.convert_implicit_form(value)
return mounts
tutor local stop # or 'tutor dev stop' in development
sudo rsync -avr "$(tutor config printroot)"/ /tmp/tutor-backup/
def convert_explicit_form(self, value: str) -> list["MountParam.MountType"]:
"""
Argument is of the form "containers:/host/path:/container/path".
"""
match = re.match(self.PARAM_REGEXP, value)
if not match:
return []
In case of problem, to restore your backup you will then have to run: sudo rsync -avr /tmp/tutor-backup/ "$(tutor config printroot)"/
mounts: list["MountParam.MountType"] = []
services: list[str] = [
service.strip() for service in match["services"].split(",")
]
host_path = os.path.abspath(os.path.expanduser(match["host_path"]))
host_path = host_path.replace(os.path.sep, "/")
container_path = match["container_path"]
for service in services:
if not service:
self.fail(f"incorrect services syntax: '{match['services']}'")
mounts.append((service, host_path, container_path))
return mounts
Are you sure you want to continue?"""
click.confirm(
fmt.question(question), default=True, abort=True, prompt_suffix=" "
)
context.invoke(
upgrade,
from_release=run_upgrade_from_release,
)
def convert_implicit_form(self, value: str) -> list["MountParam.MountType"]:
"""
Argument is of the form "/host/path"
"""
mounts: list["MountParam.MountType"] = []
host_path = os.path.abspath(os.path.expanduser(value))
for service, container_path in hooks.Filters.COMPOSE_MOUNTS.iterate(
os.path.basename(host_path)
):
mounts.append((service, host_path, container_path))
if not mounts:
raise self.fail(f"no mount found for {value}")
return mounts
# Update env and configuration
    # Don't run in interactive mode, otherwise users get prompted twice.
interactive_configuration(context, False, run_for_prod)
def shell_complete(
self, ctx: click.Context, param: click.Parameter, incomplete: str
) -> list[CompletionItem]:
"""
Mount argument completion works only for the single path (implicit) form. The
reason is that colons break words in bash completion:
http://tiswww.case.edu/php/chet/bash/FAQ (E13)
Thus, we do not even attempt to auto-complete mount arguments that include
colons: such arguments will not even reach this method.
"""
return [CompletionItem(incomplete, type="file")]
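The explicit `--mount` form handled above can be illustrated with the same regular expression; the service names and paths below are made-up values:

```python
import re

PARAM_REGEXP = (
    r"(?P<services>[a-zA-Z0-9-_, ]+):(?P<host_path>[^:]+):(?P<container_path>[^:]+)"
)

# Parse "service1[,service2...]:/host/path:/container/path"
match = re.match(PARAM_REGEXP, "lms, cms:/home/data/edx-platform:/openedx/edx-platform")
services = [service.strip() for service in match["services"].split(",")]
```
Note how `[^:]+` is what prevents colons from appearing inside either path.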
# Post upgrade
if interactive:
question = f"""Your platform is being upgraded from {run_upgrade_from_release.capitalize()}.
If you run custom Docker images, you must rebuild them now by running the following command in a different shell:
tutor images build all # list your custom images here
See the documentation for more information:
https://docs.tutor.overhang.io/install.html#upgrading-to-a-new-open-edx-release
Press enter when you are ready to continue"""
click.confirm(
fmt.question(question), default=True, abort=True, prompt_suffix=" "
)
mount_option = click.option(
"-m",
"--mount",
"mounts",
help="""Bind-mount a folder from the host in the right containers. This option can take two different forms. The first one is explicit: 'service1[,service2...]:/host/path:/container/path'. The other is implicit: '/host/path'. Arguments passed in the implicit form will be parsed by plugins to define the right folders to bind-mount from the host.""",
type=MountParam(),
multiple=True,
def interactive_configuration(
context: click.Context, interactive: bool, run_for_prod: bool
) -> None:
click.echo(fmt.title("Interactive platform configuration"))
config = tutor_config.load_minimal(context.obj.root)
if interactive:
interactive_config.ask_questions(config, run_for_prod=run_for_prod)
tutor_config.save_config_file(context.obj.root, config)
config = tutor_config.load_full(context.obj.root)
tutor_env.save(context.obj.root, config)
@click.command(
short_help="Perform release-specific upgrade tasks",
help="Perform release-specific upgrade tasks. To perform a full upgrade remember to run `launch`.",
)
def mount_tmp_volumes(
all_mounts: tuple[list[MountParam.MountType], ...],
context: BaseComposeContext,
) -> None:
for mounts in all_mounts:
for service, host_path, container_path in mounts:
mount_tmp_volume(service, host_path, container_path, context)
def mount_tmp_volume(
service: str,
host_path: str,
container_path: str,
context: BaseComposeContext,
) -> None:
"""
Append user-defined bind-mounted volumes to the docker-compose.tmp file(s).
The service/host path/container path values are appended to the docker-compose
    files by means of two filters. Each dev/local environment is then responsible for
generating the files based on the output of these filters.
    Bind-mounts that are associated with "*-job" services will be added to the
docker-compose jobs file.
"""
fmt.echo_info(f"Bind-mount: {host_path} -> {container_path} in {service}")
compose_tmp_filter: COMPOSE_FILTER_TYPE = (
context.COMPOSE_JOBS_TMP_FILTER
if service.endswith("-job")
else context.COMPOSE_TMP_FILTER
@click.option(
"--from",
"from_release",
type=click.Choice(OPENEDX_RELEASE_NAMES),
)
@click.pass_context
def upgrade(context: click.Context, from_release: t.Optional[str]) -> None:
fmt.echo_alert(
"This command only performs a partial upgrade of your Open edX platform. "
"To perform a full upgrade, you should run `tutor local launch` (or `tutor dev launch` "
"in development)."
)
@compose_tmp_filter.add()
def _add_mounts_to_docker_compose_tmp(
docker_compose: dict[str, t.Any],
) -> dict[str, t.Any]:
services = docker_compose.setdefault("services", {})
services.setdefault(service, {"volumes": []})
services[service]["volumes"].append(f"{host_path}:{container_path}")
return docker_compose
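The nested filter callback above effectively performs this dict transformation; here is a minimal sketch with the hooks machinery stripped out, using a hypothetical service name and paths:

```python
from __future__ import annotations

import typing as t


def add_mount(
    docker_compose: dict[str, t.Any], service: str, host_path: str, container_path: str
) -> dict[str, t.Any]:
    # Create the service entry if needed, then append the "host:container" volume.
    services = docker_compose.setdefault("services", {})
    services.setdefault(service, {"volumes": []})
    services[service]["volumes"].append(f"{host_path}:{container_path}")
    return docker_compose


compose: dict[str, t.Any] = {"version": "3.7", "services": {}}
add_mount(compose, "lms", "/home/data/edx-platform", "/openedx/edx-platform")
```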
if from_release is None:
from_release = tutor_env.get_env_release(context.obj.root)
if from_release is None:
fmt.echo_info("Your environment is already up-to-date")
else:
upgrade_from(context, from_release)
# We update the environment to update the version
context.invoke(config_save_command)
@click.command(
short_help="Run all or a selection of services.",
help="Run all or a selection of services. Docker images will be rebuilt where necessary.",
)
@click.option("--skip-build", is_flag=True, help="Skip image building")
@click.option("--build", is_flag=True, help="Build images on start")
@click.option("-d", "--detach", is_flag=True, help="Start in daemon mode")
@mount_option
@click.argument("services", metavar="service", nargs=-1)
@click.pass_obj
def start(
context: BaseComposeContext,
skip_build: bool,
build: bool,
detach: bool,
mounts: tuple[list[MountParam.MountType]],
services: list[str],
) -> None:
command = ["up", "--remove-orphans"]
if not skip_build:
if build:
command.append("--build")
if detach:
command.append("-d")
# Start services
mount_tmp_volumes(mounts, context)
config = tutor_config.load(context.root)
context.job_runner(config).docker_compose(*command, *services)
@ -306,31 +286,18 @@ def restart(context: BaseComposeContext, services: list[str]) -> None:
else:
for service in services:
if service == "openedx":
if config["RUN_LMS"]:
command += ["lms", "lms-worker"]
if config["RUN_CMS"]:
command += ["cms", "cms-worker"]
command += ["lms", "lms-worker", "cms", "cms-worker"]
else:
command.append(service)
context.job_runner(config).docker_compose(*command)
@jobs.do_group
@mount_option
@click.pass_obj
def do(context: BaseComposeContext, mounts: tuple[list[MountParam.MountType]]) -> None:
def do() -> None:
"""
Run a custom job in the right container(s).
"""
@hooks.Actions.DO_JOB.add()
def _mount_tmp_volumes(_job_name: str, *_args: t.Any, **_kwargs: t.Any) -> None:
"""
We add this logic to an action callback because we do not want to trigger it
whenever we run `tutor local do <job> --help`.
"""
mount_tmp_volumes(mounts, context)
@click.command(
short_help="Run a command in a new container",
@ -341,18 +308,16 @@ def do(context: BaseComposeContext, mounts: tuple[list[MountParam.MountType]]) -
),
context_settings={"ignore_unknown_options": True},
)
@mount_option
@click.argument("args", nargs=-1, required=True)
@click.pass_context
def run(
context: click.Context,
mounts: tuple[list[MountParam.MountType]],
args: list[str],
) -> None:
extra_args = ["--rm"]
if not utils.is_a_tty():
extra_args.append("-T")
context.invoke(dc_command, mounts=mounts, command="run", args=[*extra_args, *args])
context.invoke(dc_command, command="run", args=[*extra_args, *args])
@click.command(
@ -449,17 +414,14 @@ def status(context: click.Context) -> None:
context_settings={"ignore_unknown_options": True},
name="dc",
)
@mount_option
@click.argument("command")
@click.argument("args", nargs=-1)
@click.pass_obj
def dc_command(
context: BaseComposeContext,
mounts: tuple[list[MountParam.MountType]],
command: str,
args: list[str],
) -> None:
mount_tmp_volumes(mounts, context)
config = tutor_config.load(context.root)
context.job_runner(config).docker_compose(command, *args)
@ -469,8 +431,8 @@ def _mount_edx_platform(
volumes: list[tuple[str, str]], name: str
) -> list[tuple[str, str]]:
"""
When mounting edx-platform with `--mount=/path/to/edx-platform`, bind-mount the host
repo in the lms/cms containers.
When mounting edx-platform with `tutor mounts add /path/to/edx-platform`,
bind-mount the host repo in the lms/cms containers.
"""
if name == "edx-platform":
path = "/openedx/edx-platform"
@ -485,7 +447,23 @@ def _mount_edx_platform(
return volumes
@hooks.Filters.APP_PUBLIC_HOSTS.add()
def _edx_platform_public_hosts(
hosts: list[str], context_name: t.Literal["local", "dev"]
) -> list[str]:
if context_name == "dev":
hosts += ["{{ LMS_HOST }}:8000", "{{ CMS_HOST }}:8001"]
else:
hosts += ["{{ LMS_HOST }}", "{{ CMS_HOST }}"]
return hosts
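The hosts filter above can be sketched as a plain function; the Jinja placeholders are rendered later against the project configuration:

```python
from __future__ import annotations


def public_hosts(context_name: str) -> list[str]:
    # In the "dev" context the LMS/CMS dev servers listen on ports 8000/8001;
    # in "local", the web proxy serves the bare hostnames.
    hosts: list[str] = []
    if context_name == "dev":
        hosts += ["{{ LMS_HOST }}:8000", "{{ CMS_HOST }}:8001"]
    else:
        hosts += ["{{ LMS_HOST }}", "{{ CMS_HOST }}"]
    return hosts
```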
hooks.Filters.ENV_TEMPLATE_VARIABLES.add_item(("iter_mounts", bindmount.iter_mounts))
def add_commands(command_group: click.Group) -> None:
command_group.add_command(launch)
command_group.add_command(upgrade)
command_group.add_command(start)
command_group.add_command(stop)
command_group.add_command(restart)

View File

@ -6,12 +6,13 @@ import typing as t
import click
import click.shell_completion
from .. import config as tutor_config
from .. import env, exceptions, fmt
from .. import interactive as interactive_config
from .. import serialize
from ..types import Config, ConfigValue
from .context import Context
from tutor import config as tutor_config
from tutor import env, exceptions, fmt
from tutor import interactive as interactive_config
from tutor import serialize
from tutor.commands.context import Context
from tutor.commands.params import ConfigLoaderParam
from tutor.types import ConfigValue
@click.group(
@ -23,7 +24,7 @@ def config_command() -> None:
pass
class ConfigKeyParamType(click.ParamType):
class ConfigKeyParamType(ConfigLoaderParam):
name = "configkey"
def shell_complete(
@ -31,26 +32,21 @@ class ConfigKeyParamType(click.ParamType):
) -> list[click.shell_completion.CompletionItem]:
return [
click.shell_completion.CompletionItem(key)
for key, _value in self._shell_complete_config_items(ctx, incomplete)
for key, _value in self._shell_complete_config_items(incomplete)
]
@staticmethod
def _shell_complete_config_items(
ctx: click.Context, incomplete: str
self, incomplete: str
) -> list[tuple[str, ConfigValue]]:
# Here we want to auto-complete the name of the config key. For that we need to
# figure out the list of enabled plugins, and for that we need the project root.
# The project root would ordinarily be stored in ctx.obj.root, but during
# auto-completion we don't have access to our custom Tutor context. So we resort
# to a dirty hack, which is to examine the grandparent context.
root = getattr(
getattr(getattr(ctx, "parent", None), "parent", None), "params", {}
).get("root", "")
config = tutor_config.load_full(root)
return [
(key, value) for key, value in config.items() if key.startswith(incomplete)
(key, value)
for key, value in self._candidate_config_items()
if key.startswith(incomplete)
]
def _candidate_config_items(self) -> t.Iterable[tuple[str, ConfigValue]]:
yield from self.config.items()
class ConfigKeyValParamType(ConfigKeyParamType):
"""
@ -76,21 +72,30 @@ class ConfigKeyValParamType(ConfigKeyParamType):
# further auto-complete later.
return [
click.shell_completion.CompletionItem(f"'{key}='")
for key, value in self._shell_complete_config_items(ctx, incomplete)
for key, value in self._shell_complete_config_items(incomplete)
]
if incomplete.endswith("="):
# raise ValueError(f"incomplete: <{incomplete}>")
# Auto-complete with '<KEY>=<VALUE>'
return [
click.shell_completion.CompletionItem(f"{key}={json.dumps(value)}")
for key, value in self._shell_complete_config_items(
ctx, incomplete[:-1]
)
for key, value in self._shell_complete_config_items(incomplete[:-1])
]
# Else, don't bother
return []
class ConfigListKeyValParamType(ConfigKeyValParamType):
"""
Same as the parent class, but for keys of type `list`.
"""
def _candidate_config_items(self) -> t.Iterable[tuple[str, ConfigValue]]:
for key, val in self.config.items():
if isinstance(val, list):
yield key, val
@click.command(help="Create and save configuration interactively")
@click.option("-i", "--interactive", is_flag=True, help="Run interactively")
@click.option(
@ -102,6 +107,24 @@ class ConfigKeyValParamType(ConfigKeyParamType):
metavar="KEY=VAL",
help="Set a configuration value (can be used multiple times)",
)
@click.option(
"-a",
"--append",
"append_vars",
type=ConfigListKeyValParamType(),
multiple=True,
metavar="KEY=VAL",
    help="Append an item to a configuration value of type list. The value will only be added if it is not already present. (can be used multiple times)",
)
@click.option(
"-A",
"--remove",
"remove_vars",
type=ConfigListKeyValParamType(),
multiple=True,
metavar="KEY=VAL",
help="Remove an item from a configuration value of type list (can be used multiple times)",
)
@click.option(
"-U",
"--unset",
@ -117,16 +140,43 @@ class ConfigKeyValParamType(ConfigKeyParamType):
def save(
context: Context,
interactive: bool,
set_vars: Config,
set_vars: list[tuple[str, t.Any]],
append_vars: list[tuple[str, t.Any]],
remove_vars: list[tuple[str, t.Any]],
unset_vars: list[str],
env_only: bool,
) -> None:
config = tutor_config.load_minimal(context.root)
config_full = tutor_config.load_full(context.root)
if interactive:
interactive_config.ask_questions(config)
if set_vars:
for key, value in dict(set_vars).items():
for key, value in set_vars:
config[key] = env.render_unknown(config, value)
if append_vars:
for key, value in append_vars:
if key not in config:
config[key] = config_full.get(key, [])
values = config[key]
if not isinstance(values, list):
raise exceptions.TutorError(
f"Could not append value to '{key}': current setting is of type '{values.__class__.__name__}', expected list."
)
if not isinstance(value, str):
raise exceptions.TutorError(
f"Could not append value to '{key}': appended value is of type '{value.__class__.__name__}', expected str."
)
if value not in values:
values.append(value)
if remove_vars:
for key, value in remove_vars:
values = config.get(key, [])
if not isinstance(values, list):
raise exceptions.TutorError(
f"Could not remove value from '{key}': current setting is of type '{values.__class__.__name__}', expected list."
)
while value in values:
values.remove(value)
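The `--append`/`--remove` handling above boils down to the following list semantics; this is a simplified sketch, and the `MOUNTS` and `PLUGINS` keys are only illustrative:

```python
from __future__ import annotations

import typing as t


def append_value(config: dict[str, t.Any], key: str, value: str) -> None:
    values = config.setdefault(key, [])
    if not isinstance(values, list):
        raise TypeError(f"'{key}' is of type '{values.__class__.__name__}', expected list")
    # Only add the value if it is not already present.
    if value not in values:
        values.append(value)


def remove_value(config: dict[str, t.Any], key: str, value: str) -> None:
    # Remove every occurrence of the value, if any.
    values = config.get(key, [])
    while value in values:
        values.remove(value)


config: dict[str, t.Any] = {"MOUNTS": ["/tmp/edx-platform"]}
append_value(config, "MOUNTS", "/tmp/edx-platform")  # duplicate: not added again
remove_value(config, "MOUNTS", "/tmp/edx-platform")
append_value(config, "PLUGINS", "indigo")  # missing key: created as a list
```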
for key in unset_vars:
config.pop(key, None)
if not env_only:
@ -149,10 +199,10 @@ def printroot(context: Context) -> None:
def printvalue(context: Context, key: str) -> None:
config = tutor_config.load(context.root)
try:
# Note that this will incorrectly print None values
fmt.echo(str(config[key]))
value = config[key]
except KeyError as e:
raise exceptions.TutorError(f"Missing configuration value: {key}") from e
fmt.echo(serialize.str_format(value))
@click.group(name="patches", help="Commands related to patches in configurations")
@ -171,5 +221,5 @@ def patches_list(context: Context) -> None:
config_command.add_command(save)
config_command.add_command(printroot)
config_command.add_command(printvalue)
config_command.add_command(patches_command)
patches_command.add_command(patches_list)
config_command.add_command(patches_command)

View File

@ -1,12 +1,11 @@
from __future__ import annotations
import typing as t
import click
from tutor import config as tutor_config
from tutor import env as tutor_env
from tutor import exceptions, fmt, hooks
from tutor import interactive as interactive_config
from tutor import utils
from tutor import hooks
from tutor.commands import compose
from tutor.types import Config, get_typed
@ -18,38 +17,22 @@ class DevTaskRunner(compose.ComposeTaskRunner):
"""
super().__init__(root, config)
self.project_name = get_typed(self.config, "DEV_PROJECT_NAME", str)
docker_compose_tmp_path = tutor_env.pathjoin(
self.root, "dev", "docker-compose.tmp.yml"
)
docker_compose_jobs_tmp_path = tutor_env.pathjoin(
self.root, "dev", "docker-compose.jobs.tmp.yml"
)
self.docker_compose_files += [
tutor_env.pathjoin(self.root, "local", "docker-compose.yml"),
tutor_env.pathjoin(self.root, "dev", "docker-compose.yml"),
docker_compose_tmp_path,
tutor_env.pathjoin(self.root, "local", "docker-compose.override.yml"),
tutor_env.pathjoin(self.root, "dev", "docker-compose.override.yml"),
]
self.docker_compose_job_files += [
tutor_env.pathjoin(self.root, "local", "docker-compose.jobs.yml"),
tutor_env.pathjoin(self.root, "dev", "docker-compose.jobs.yml"),
docker_compose_jobs_tmp_path,
tutor_env.pathjoin(self.root, "local", "docker-compose.jobs.override.yml"),
tutor_env.pathjoin(self.root, "dev", "docker-compose.jobs.override.yml"),
]
# Update docker-compose.tmp files
self.update_docker_compose_tmp(
hooks.Filters.COMPOSE_DEV_TMP,
hooks.Filters.COMPOSE_DEV_JOBS_TMP,
docker_compose_tmp_path,
docker_compose_jobs_tmp_path,
)
class DevContext(compose.BaseComposeContext):
COMPOSE_TMP_FILTER = hooks.Filters.COMPOSE_DEV_TMP
COMPOSE_JOBS_TMP_FILTER = hooks.Filters.COMPOSE_DEV_JOBS_TMP
NAME = "dev"
def job_runner(self, config: Config) -> DevTaskRunner:
return DevTaskRunner(self.root, config)
@ -61,62 +44,6 @@ def dev(context: click.Context) -> None:
context.obj = DevContext(context.obj.root)
@click.command(help="Configure and run Open edX from scratch, for development")
@click.option("-I", "--non-interactive", is_flag=True, help="Run non-interactively")
@click.option("-p", "--pullimages", is_flag=True, help="Update docker images")
@compose.mount_option
@click.pass_context
def launch(
context: click.Context,
non_interactive: bool,
pullimages: bool,
mounts: tuple[list[compose.MountParam.MountType]],
) -> None:
compose.mount_tmp_volumes(mounts, context.obj)
try:
utils.check_macos_docker_memory()
except exceptions.TutorError as e:
fmt.echo_alert(
f"""Could not verify sufficient RAM allocation in Docker:
{e}
Tutor may not work if Docker is configured with < 4 GB RAM. Please follow instructions from:
https://docs.tutor.overhang.io/install.html"""
)
click.echo(fmt.title("Interactive platform configuration"))
config = tutor_config.load_minimal(context.obj.root)
if not non_interactive:
interactive_config.ask_questions(config, run_for_prod=False)
tutor_config.save_config_file(context.obj.root, config)
config = tutor_config.load_full(context.obj.root)
tutor_env.save(context.obj.root, config)
click.echo(fmt.title("Stopping any existing platform"))
context.invoke(compose.stop)
if pullimages:
click.echo(fmt.title("Docker image updates"))
context.invoke(compose.dc_command, command="pull")
click.echo(fmt.title("Starting the platform in detached mode"))
context.invoke(compose.start, detach=True)
click.echo(fmt.title("Database creation and migrations"))
context.invoke(compose.do.commands["init"])
fmt.echo_info(
"""The Open edX platform is now running in detached mode
Your Open edX platform is ready and can be accessed at the following urls:
{http}://{lms_host}:8000
{http}://{cms_host}:8001
""".format(
http="https" if config["ENABLE_HTTPS"] else "http",
lms_host=config["LMS_HOST"],
cms_host=config["CMS_HOST"],
)
)
@hooks.Actions.COMPOSE_PROJECT_STARTED.add()
def _stop_on_local_start(root: str, config: Config, project_name: str) -> None:
"""
@ -128,5 +55,13 @@ def _stop_on_local_start(root: str, config: Config, project_name: str) -> None:
runner.docker_compose("stop")
dev.add_command(launch)
@hooks.Filters.IMAGES_BUILD_REQUIRED.add()
def _build_openedx_dev_on_launch(
image_names: list[str], context_name: t.Literal["local", "dev"]
) -> list[str]:
if context_name == "dev":
image_names.append("openedx-dev")
return image_names
compose.add_commands(dev)

View File

@ -1,38 +1,56 @@
from __future__ import annotations
import os
import typing as t
import click
from tutor import bindmount
from tutor import config as tutor_config
from tutor import env as tutor_env
from tutor import exceptions, hooks, images
from tutor import exceptions, fmt, hooks, images, utils
from tutor.commands.context import Context
from tutor.commands.params import ConfigLoaderParam
from tutor.core.hooks import Filter
from tutor.types import Config
BASE_IMAGE_NAMES = ["openedx", "permissions"]
VENDOR_IMAGES = [
"caddy",
"elasticsearch",
"mongodb",
"mysql",
"redis",
"smtp",
BASE_IMAGE_NAMES = [
("openedx", "DOCKER_IMAGE_OPENEDX"),
("permissions", "DOCKER_IMAGE_PERMISSIONS"),
]
@hooks.Filters.IMAGES_BUILD.add()
def _add_core_images_to_build(
build_images: list[tuple[str, tuple[str, ...], str, tuple[str, ...]]],
build_images: list[tuple[str, t.Union[str, tuple[str, ...]], str, tuple[str, ...]]],
config: Config,
) -> list[tuple[str, tuple[str, ...], str, tuple[str, ...]]]:
) -> list[tuple[str, t.Union[str, tuple[str, ...]], str, tuple[str, ...]]]:
"""
Add base images to the list of Docker images to build on `tutor build all`.
"""
for image in BASE_IMAGE_NAMES:
tag = images.get_tag(config, image)
build_images.append((image, ("build", image), tag, ()))
for image, tag in BASE_IMAGE_NAMES:
build_images.append(
(
image,
os.path.join("build", image),
tutor_config.get_typed(config, tag, str),
(),
)
)
# Build openedx-dev image
build_images.append(
(
"openedx-dev",
os.path.join("build", "openedx"),
tutor_config.get_typed(config, "DOCKER_IMAGE_OPENEDX_DEV", str),
(
"--target=development",
f"--build-arg=APP_USER_ID={utils.get_user_id() or 1000}",
),
)
)
return build_images
@ -43,11 +61,19 @@ def _add_images_to_pull(
"""
Add base and vendor images to the list of Docker images to pull on `tutor pull all`.
"""
for image in VENDOR_IMAGES:
vendor_images = [
("caddy", "DOCKER_IMAGE_CADDY"),
("elasticsearch", "DOCKER_IMAGE_ELASTICSEARCH"),
("mongodb", "DOCKER_IMAGE_MONGODB"),
("mysql", "DOCKER_IMAGE_MYSQL"),
("redis", "DOCKER_IMAGE_REDIS"),
("smtp", "DOCKER_IMAGE_SMTP"),
]
for image, tag_name in vendor_images:
if config.get(f"RUN_{image.upper()}", True):
remote_images.append((image, images.get_tag(config, image)))
for image in BASE_IMAGE_NAMES:
remote_images.append((image, images.get_tag(config, image)))
remote_images.append((image, tutor_config.get_typed(config, tag_name, str)))
for image, tag in BASE_IMAGE_NAMES:
remote_images.append((image, tutor_config.get_typed(config, tag, str)))
return remote_images
@ -58,24 +84,80 @@ def _add_core_images_to_push(
"""
Add base images to the list of Docker images to push on `tutor push all`.
"""
for image in BASE_IMAGE_NAMES:
remote_images.append((image, images.get_tag(config, image)))
for image, tag in BASE_IMAGE_NAMES:
remote_images.append((image, tutor_config.get_typed(config, tag, str)))
return remote_images
class ImageNameParam(ConfigLoaderParam):
"""
Convenient auto-completion of image names.
"""
def shell_complete(
self, ctx: click.Context, param: click.Parameter, incomplete: str
) -> list[click.shell_completion.CompletionItem]:
results = []
for name in self.iter_image_names():
if name.startswith(incomplete):
results.append(click.shell_completion.CompletionItem(name))
return results
def iter_image_names(self) -> t.Iterable["str"]:
raise NotImplementedError
class BuildImageNameParam(ImageNameParam):
def iter_image_names(self) -> t.Iterable["str"]:
for name, _path, _tag, _args in hooks.Filters.IMAGES_BUILD.iterate(self.config):
yield name
class PullImageNameParam(ImageNameParam):
def iter_image_names(self) -> t.Iterable["str"]:
for name, _tag in hooks.Filters.IMAGES_PULL.iterate(self.config):
yield name
class PushImageNameParam(ImageNameParam):
def iter_image_names(self) -> t.Iterable["str"]:
for name, _tag in hooks.Filters.IMAGES_PUSH.iterate(self.config):
yield name
@click.group(name="images", short_help="Manage docker images")
def images_command() -> None:
pass
@click.command(
short_help="Build docker images",
help="Build the docker images necessary for an Open edX platform.",
@click.command()
@click.argument(
"image_names",
metavar="image",
nargs=-1,
type=BuildImageNameParam(),
)
@click.argument("image_names", metavar="image", nargs=-1)
@click.option(
"--no-cache", is_flag=True, help="Do not use cache when building the image"
)
@click.option(
"--no-registry-cache",
is_flag=True,
help="Do not use registry cache when building the image",
)
@click.option(
"--cache-to-registry",
is_flag=True,
help="Push the build cache to the remote registry. You should only enable this option if you have push rights to the remote registry.",
)
@click.option(
"--output",
"docker_output",
# Export image to docker. This is necessary to make the image available to docker-compose.
# The `--load` option is a shorthand for `--output=type=docker`.
default="type=docker",
help="Same as `docker build --output=...`. This option will only be used when BuildKit is enabled.",
)
@click.option(
"-a",
"--build-arg",
@ -105,11 +187,20 @@ def build(
context: Context,
image_names: list[str],
no_cache: bool,
no_registry_cache: bool,
cache_to_registry: bool,
docker_output: str,
build_args: list[str],
add_hosts: list[str],
target: str,
docker_args: list[str],
) -> None:
"""
Build docker images
Build the docker images necessary for an Open edX platform. By default, the remote
registry cache will be used for better performance.
"""
config = tutor_config.load(context.root)
command_args = []
if no_cache:
@ -120,20 +211,77 @@ def build(
command_args += ["--add-host", add_host]
if target:
command_args += ["--target", target]
if utils.is_buildkit_enabled() and docker_output:
command_args.append(f"--output={docker_output}")
if docker_args:
command_args += docker_args
# Build context mounts
build_contexts = get_image_build_contexts(config)
for image in image_names:
for _name, path, tag, custom_args in find_images_to_build(config, image):
for name, path, tag, custom_args in find_images_to_build(config, image):
image_build_args = [*command_args, *custom_args]
# Registry cache
if not no_registry_cache:
image_build_args.append(f"--cache-from=type=registry,ref={tag}-cache")
if cache_to_registry:
image_build_args.append(
f"--cache-to=type=registry,mode=max,ref={tag}-cache"
)
# Build contexts
for host_path, stage_name in build_contexts.get(name, []):
image_build_args.append(f"--build-context={stage_name}={host_path}")
# Build
images.build(
tutor_env.pathjoin(context.root, *path),
tutor_env.pathjoin(context.root, path),
tag,
*command_args,
*custom_args,
*image_build_args,
)
def get_image_build_contexts(config: Config) -> dict[str, list[tuple[str, str]]]:
"""
Return all build contexts for all images.
A build context is a host directory that is bind-mounted at build time. This is
useful, for instance, to build a Docker image with a local git checkout of a remote repo.
Users configure bind-mounts with the `MOUNTS` config setting. Plugins can then
automatically add build contexts based on these values.
"""
build_contexts: dict[str, list[tuple[str, str]]] = {}
for user_mount in bindmount.get_mounts(config):
for image_name, stage_name in hooks.Filters.IMAGES_BUILD_MOUNTS.iterate(
user_mount
):
fmt.echo_info(
f"Adding {user_mount} to the build context '{stage_name}' of image '{image_name}'"
)
if image_name not in build_contexts:
build_contexts[image_name] = []
build_contexts[image_name].append((user_mount, stage_name))
return build_contexts
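The mapping from per-image build contexts to `docker buildx build` flags can be sketched standalone; the directory path and the `contexts` value below are hypothetical, hard-coded stand-ins for what `get_image_build_contexts()` would return from the `IMAGES_BUILD_MOUNTS` filter:

```python
# Minimal standalone sketch: turn per-image build contexts into
# `docker buildx build` flags, as the `build` command above does.
def build_context_args(build_contexts: dict, image_name: str) -> list:
    # Each (host_path, stage_name) pair becomes one --build-context flag.
    return [
        f"--build-context={stage_name}={host_path}"
        for host_path, stage_name in build_contexts.get(image_name, [])
    ]

# Hypothetical result of get_image_build_contexts() for a single mount.
contexts = {"openedx": [("/home/user/edx-platform", "edx-platform")]}
print(build_context_args(contexts, "openedx"))
# → ['--build-context=edx-platform=/home/user/edx-platform']
```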
@hooks.Filters.IMAGES_BUILD_MOUNTS.add()
def _mount_edx_platform(
volumes: list[tuple[str, str]], path: str
) -> list[tuple[str, str]]:
"""
Automatically add an edx-platform repo from the host to the build context whenever
it is added to the `MOUNTS` setting.
"""
if os.path.basename(path) == "edx-platform":
volumes.append(("openedx", "edx-platform"))
volumes.append(("openedx-dev", "edx-platform"))
return volumes
@click.command(short_help="Pull images from the Docker registry")
@click.argument("image_names", metavar="image", nargs=-1)
@click.argument("image_names", metavar="image", type=PullImageNameParam(), nargs=-1)
@click.pass_obj
def pull(context: Context, image_names: list[str]) -> None:
config = tutor_config.load_full(context.root)
@ -143,7 +291,7 @@ def pull(context: Context, image_names: list[str]) -> None:
@click.command(short_help="Push images to the Docker registry")
@click.argument("image_names", metavar="image", nargs=-1)
@click.argument("image_names", metavar="image", type=PushImageNameParam(), nargs=-1)
@click.pass_obj
def push(context: Context, image_names: list[str]) -> None:
config = tutor_config.load_full(context.root)
@ -153,7 +301,7 @@ def push(context: Context, image_names: list[str]) -> None:
@click.command(short_help="Print tag associated with a Docker image")
@click.argument("image_names", metavar="image", nargs=-1)
@click.argument("image_names", metavar="image", type=BuildImageNameParam(), nargs=-1)
@click.pass_obj
def printtag(context: Context, image_names: list[str]) -> None:
config = tutor_config.load_full(context.root)
@ -164,7 +312,7 @@ def printtag(context: Context, image_names: list[str]) -> None:
def find_images_to_build(
config: Config, image: str
) -> t.Iterator[tuple[str, tuple[str, ...], str, tuple[str, ...]]]:
) -> t.Iterator[tuple[str, str, str, tuple[str, ...]]]:
"""
Iterate over all images to build.
@ -174,10 +322,11 @@ def find_images_to_build(
"""
found = False
for name, path, tag, args in hooks.Filters.IMAGES_BUILD.iterate(config):
relative_path = path if isinstance(path, str) else os.path.join(*path)
if image in [name, "all"]:
found = True
tag = tutor_env.render_str(config, tag)
yield (name, path, tag, args)
yield (name, relative_path, tag, args)
if not found:
raise ImageNotFoundError(image)

View File

@ -4,13 +4,14 @@ Common jobs that must be added both to local, dev and k8s commands.
from __future__ import annotations
import functools
import shlex
import typing as t
import click
from typing_extensions import ParamSpec
from tutor import config as tutor_config
from tutor import env, fmt, hooks, utils
from tutor import env, fmt, hooks
from tutor.hooks import priorities
@ -36,11 +37,11 @@ def _add_core_init_tasks() -> None:
The context is important, because it allows us to select the init scripts based on
the --limit argument.
"""
with hooks.Contexts.APP("mysql").enter():
with hooks.Contexts.app("mysql").enter():
hooks.Filters.CLI_DO_INIT_TASKS.add_item(
("mysql", env.read_core_template_file("jobs", "init", "mysql.sh"))
)
with hooks.Contexts.APP("lms").enter():
with hooks.Contexts.app("lms").enter():
hooks.Filters.CLI_DO_INIT_TASKS.add_item(
(
"lms",
@ -53,7 +54,7 @@ def _add_core_init_tasks() -> None:
hooks.Filters.CLI_DO_INIT_TASKS.add_item(
("lms", env.read_core_template_file("jobs", "init", "lms.sh"))
)
with hooks.Contexts.APP("cms").enter():
with hooks.Contexts.app("cms").enter():
hooks.Filters.CLI_DO_INIT_TASKS.add_item(
("cms", env.read_core_template_file("jobs", "init", "cms.sh"))
)
@ -63,33 +64,14 @@ def _add_core_init_tasks() -> None:
@click.option("-l", "--limit", help="Limit initialisation to this service or plugin")
def initialise(limit: t.Optional[str]) -> t.Iterator[tuple[str, str]]:
fmt.echo_info("Initialising all services...")
filter_context = hooks.Contexts.APP(limit).name if limit else None
filter_context = hooks.Contexts.app(limit).name if limit else None
# Deprecated pre-init tasks
for service, path in hooks.Filters.COMMANDS_PRE_INIT.iterate_from_context(
filter_context
):
fmt.echo_alert(
f"Running deprecated pre-init task: {'/'.join(path)}. Init tasks should no longer be added to the COMMANDS_PRE_INIT filter. Plugin developers should use the CLI_DO_INIT_TASKS filter instead, with a high priority."
)
yield service, env.read_template_file(*path)
# Init tasks
for service, task in hooks.Filters.CLI_DO_INIT_TASKS.iterate_from_context(
filter_context
):
fmt.echo_info(f"Running init task in {service}")
yield service, task
# Deprecated init tasks
for service, path in hooks.Filters.COMMANDS_INIT.iterate_from_context(
filter_context
):
fmt.echo_alert(
f"Running deprecated init task: {'/'.join(path)}. Init tasks should no longer be added to the COMMANDS_INIT filter. Plugin developers should use the CLI_DO_INIT_TASKS filter instead."
)
yield service, env.read_template_file(*path)
fmt.echo_info("All services initialised.")
@ -146,19 +128,26 @@ u.save()"
show_default=True,
help="Git repository that contains the course to be imported",
)
@click.option(
"-d",
"--repo-dir",
default="",
show_default=True,
help="Git relative subdirectory to import data from",
)
@click.option(
"-v",
"--version",
help="Git branch, tag or sha1 identifier. If unspecified, will default to the value of the OPENEDX_COMMON_VERSION setting.",
)
def importdemocourse(
repo: str, version: t.Optional[str]
repo: str, repo_dir: str, version: t.Optional[str]
) -> t.Iterable[tuple[str, str]]:
version = version or "{{ OPENEDX_COMMON_VERSION }}"
template = f"""
# Import demo course
git clone {repo} --branch {version} --depth 1 /tmp/course
python ./manage.py cms import ../data /tmp/course
python ./manage.py cms import ../data /tmp/course/{repo_dir}
# Re-index courses
./manage.py cms reindex_course --all --setup"""
@ -253,7 +242,7 @@ def sqlshell(args: list[str]) -> t.Iterable[tuple[str, str]]:
"""
command = "mysql --user={{ MYSQL_ROOT_USERNAME }} --password={{ MYSQL_ROOT_PASSWORD }} --host={{ MYSQL_HOST }} --port={{ MYSQL_PORT }}"
if args:
command += " " + utils._shlex_join(*args) # pylint: disable=protected-access
command += " " + shlex.join(args)
yield ("lms", command)
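The switch from the private `utils._shlex_join` helper to the standard library's `shlex.join` (available since Python 3.8) preserves shell-safe quoting; for instance:

```python
import shlex

# shlex.join quotes each argument so the resulting string is safe to
# pass to a shell, which is exactly what the sqlshell command needs.
args = ["-e", "select 1; -- a comment", "mydb"]
command = "mysql --user=root" + " " + shlex.join(args)
print(command)
# → mysql --user=root -e 'select 1; -- a comment' mydb
```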

View File

@ -1,18 +1,10 @@
from __future__ import annotations
import typing as t
import click
from tutor import config as tutor_config
from tutor import env as tutor_env
from tutor import exceptions, fmt, hooks
from tutor import interactive as interactive_config
from tutor import utils
from tutor import hooks
from tutor.commands import compose
from tutor.commands.config import save as config_save_command
from tutor.commands.upgrade import OPENEDX_RELEASE_NAMES
from tutor.commands.upgrade.local import upgrade_from
from tutor.types import Config, get_typed
@ -23,37 +15,20 @@ class LocalTaskRunner(compose.ComposeTaskRunner):
"""
super().__init__(root, config)
self.project_name = get_typed(self.config, "LOCAL_PROJECT_NAME", str)
docker_compose_tmp_path = tutor_env.pathjoin(
self.root, "local", "docker-compose.tmp.yml"
)
docker_compose_jobs_tmp_path = tutor_env.pathjoin(
self.root, "local", "docker-compose.jobs.tmp.yml"
)
self.docker_compose_files += [
tutor_env.pathjoin(self.root, "local", "docker-compose.yml"),
tutor_env.pathjoin(self.root, "local", "docker-compose.prod.yml"),
docker_compose_tmp_path,
tutor_env.pathjoin(self.root, "local", "docker-compose.override.yml"),
]
self.docker_compose_job_files += [
tutor_env.pathjoin(self.root, "local", "docker-compose.jobs.yml"),
docker_compose_jobs_tmp_path,
tutor_env.pathjoin(self.root, "local", "docker-compose.jobs.override.yml"),
]
# Update docker-compose.tmp files
self.update_docker_compose_tmp(
hooks.Filters.COMPOSE_LOCAL_TMP,
hooks.Filters.COMPOSE_LOCAL_JOBS_TMP,
docker_compose_tmp_path,
docker_compose_jobs_tmp_path,
)
# pylint: disable=too-few-public-methods
class LocalContext(compose.BaseComposeContext):
COMPOSE_TMP_FILTER = hooks.Filters.COMPOSE_LOCAL_TMP
COMPOSE_JOBS_TMP_FILTER = hooks.Filters.COMPOSE_LOCAL_JOBS_TMP
NAME = "local"
def job_runner(self, config: Config) -> LocalTaskRunner:
return LocalTaskRunner(self.root, config)
@ -65,128 +40,6 @@ def local(context: click.Context) -> None:
context.obj = LocalContext(context.obj.root)
@click.command(help="Configure and run Open edX from scratch")
@compose.mount_option
@click.option("-I", "--non-interactive", is_flag=True, help="Run non-interactively")
@click.option("-p", "--pullimages", is_flag=True, help="Update docker images")
@click.pass_context
def launch(
context: click.Context,
mounts: tuple[list[compose.MountParam.MountType]],
non_interactive: bool,
pullimages: bool,
) -> None:
compose.mount_tmp_volumes(mounts, context.obj)
try:
utils.check_macos_docker_memory()
except exceptions.TutorError as e:
fmt.echo_alert(
f"""Could not verify sufficient RAM allocation in Docker:
{e}
Tutor may not work if Docker is configured with < 4 GB RAM. Please follow instructions from:
https://docs.tutor.overhang.io/install.html"""
)
run_upgrade_from_release = tutor_env.should_upgrade_from_release(context.obj.root)
if run_upgrade_from_release is not None:
click.echo(fmt.title("Upgrading from an older release"))
if not non_interactive:
to_release = tutor_env.get_current_open_edx_release_name()
question = f"""You are about to upgrade your Open edX platform from {run_upgrade_from_release.capitalize()} to {to_release.capitalize()}
It is strongly recommended to make a backup before upgrading. To do so, run:
tutor local stop
sudo rsync -avr "$(tutor config printroot)"/ /tmp/tutor-backup/
In case of problem, you can later restore your backup by running: sudo rsync -avr /tmp/tutor-backup/ "$(tutor config printroot)"/
Are you sure you want to continue?"""
click.confirm(
fmt.question(question), default=True, abort=True, prompt_suffix=" "
)
context.invoke(
upgrade,
from_release=run_upgrade_from_release,
)
click.echo(fmt.title("Interactive platform configuration"))
config = tutor_config.load_minimal(context.obj.root)
if not non_interactive:
interactive_config.ask_questions(config)
tutor_config.save_config_file(context.obj.root, config)
config = tutor_config.load_full(context.obj.root)
tutor_env.save(context.obj.root, config)
if run_upgrade_from_release and not non_interactive:
question = f"""Your platform is being upgraded from {run_upgrade_from_release.capitalize()}.
If you run custom Docker images, you must rebuild them now by running the following command in a different shell:
tutor images build all # list your custom images here
See the documentation for more information:
https://docs.tutor.overhang.io/install.html#upgrading-to-a-new-open-edx-release
Press enter when you are ready to continue"""
click.confirm(
fmt.question(question), default=True, abort=True, prompt_suffix=" "
)
click.echo(fmt.title("Stopping any existing platform"))
context.invoke(compose.stop)
if pullimages:
click.echo(fmt.title("Docker image updates"))
context.invoke(compose.dc_command, command="pull")
click.echo(fmt.title("Starting the platform in detached mode"))
context.invoke(compose.start, detach=True)
click.echo(fmt.title("Database creation and migrations"))
context.invoke(compose.do.commands["init"])
config = tutor_config.load(context.obj.root)
fmt.echo_info(
"""The Open edX platform is now running in detached mode
Your Open edX platform is ready and can be accessed at the following urls:
{http}://{lms_host}
{http}://{cms_host}
""".format(
http="https" if config["ENABLE_HTTPS"] else "http",
lms_host=config["LMS_HOST"],
cms_host=config["CMS_HOST"],
)
)
@click.command(
short_help="Perform release-specific upgrade tasks",
help="Perform release-specific upgrade tasks. To perform a full upgrade remember to run `launch`.",
)
@click.option(
"--from",
"from_release",
type=click.Choice(OPENEDX_RELEASE_NAMES),
)
@click.pass_context
def upgrade(context: click.Context, from_release: t.Optional[str]) -> None:
fmt.echo_alert(
"This command only performs a partial upgrade of your Open edX platform. "
"To perform a full upgrade, you should run `tutor local launch`."
)
if from_release is None:
from_release = tutor_env.get_env_release(context.obj.root)
if from_release is None:
fmt.echo_info("Your environment is already up-to-date")
else:
upgrade_from(context, from_release)
# We update the environment to update the version
context.invoke(config_save_command)
@hooks.Actions.COMPOSE_PROJECT_STARTED.add()
def _stop_on_dev_start(root: str, config: Config, project_name: str) -> None:
"""
@ -198,6 +51,4 @@ def _stop_on_dev_start(root: str, config: Config, project_name: str) -> None:
runner.docker_compose("stop")
local.add_command(launch)
local.add_command(upgrade)
compose.add_commands(local)

138 tutor/commands/mounts.py Normal file
View File

@ -0,0 +1,138 @@
from __future__ import annotations
import os
import click
import yaml
from tutor import bindmount
from tutor import config as tutor_config
from tutor import exceptions, fmt, hooks
from tutor.commands.config import save as config_save
from tutor.commands.context import Context
from tutor.commands.params import ConfigLoaderParam
class MountParamType(ConfigLoaderParam):
name = "mount"
def shell_complete(
self, ctx: click.Context, param: click.Parameter, incomplete: str
) -> list[click.shell_completion.CompletionItem]:
mounts = bindmount.get_mounts(self.config)
return [
click.shell_completion.CompletionItem(mount)
for mount in mounts
if mount.startswith(incomplete)
]
@click.group(name="mounts")
def mounts_command() -> None:
"""
Manage host bind-mounts
Bind-mounted folders are used in image building, in development (`dev` commands)
and in `local` deployments.
"""
@click.command(name="list")
@click.pass_obj
def mounts_list(context: Context) -> None:
"""
List bind-mounted folders
Entries will be fetched from the `MOUNTS` project setting.
"""
config = tutor_config.load(context.root)
mounts = []
for mount_name in bindmount.get_mounts(config):
build_mounts = [
{"image": image_name, "context": stage_name}
for image_name, stage_name in hooks.Filters.IMAGES_BUILD_MOUNTS.iterate(
mount_name
)
]
compose_mounts = [
{
"service": service,
"container_path": container_path,
}
for service, _host_path, container_path in bindmount.parse_mount(mount_name)
]
mounts.append(
{
"name": mount_name,
"build_mounts": build_mounts,
"compose_mounts": compose_mounts,
}
)
fmt.echo(yaml.dump(mounts, default_flow_style=False, sort_keys=False))
@click.command(name="add")
@click.argument("mounts", metavar="mount", type=click.Path(), nargs=-1)
@click.pass_context
def mounts_add(context: click.Context, mounts: list[str]) -> None:
"""
Add a bind-mounted folder
The bind-mounted folder will be added to the project configuration, in the ``MOUNTS``
setting.
Values passed to this command can take one of two forms. The first is explicit::
tutor mounts add myservice:/host/path:/container/path
The second is implicit::
tutor mounts add /host/path
With the explicit form, the value means "bind-mount the host folder /host/path to
/container/path in the 'myservice' container at run time".
With the implicit form, plugins are in charge of automatically detecting in which
containers and locations the /host/path folder should be bind-mounted. In this case,
folders can be bind-mounted at build-time -- which cannot be achieved with the
explicit form.
"""
new_mounts = []
for mount in mounts:
if not bindmount.parse_explicit_mount(mount):
# Path is implicit: check that this path is valid
# (we don't try to validate explicit mounts)
mount = os.path.abspath(os.path.expanduser(mount))
if not os.path.exists(mount):
raise exceptions.TutorError(f"Path {mount} does not exist on the host")
new_mounts.append(mount)
fmt.echo_info(f"Adding bind-mount: {mount}")
context.invoke(config_save, append_vars=[("MOUNTS", mount) for mount in new_mounts])
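The explicit/implicit distinction can be illustrated with a hypothetical re-implementation of the parser (the real `bindmount.parse_explicit_mount` may differ in its details):

```python
import re

# Explicit form: "service1[,service2]:/host/path:/container/path".
# Anything that does not match is treated as an implicit bare host path.
def parse_explicit_mount(value: str):
    match = re.match(r"^([a-zA-Z0-9_,-]+):([^:]+):([^:]+)$", value)
    if not match:
        return None
    services, host_path, container_path = match.groups()
    return services.split(","), host_path, container_path

print(parse_explicit_mount("lms,cms:/host/path:/container/path"))
# → (['lms', 'cms'], '/host/path', '/container/path')
print(parse_explicit_mount("/host/path"))  # implicit form
# → None
```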
@click.command(name="remove")
@click.argument("mounts", metavar="mount", type=MountParamType(), nargs=-1)
@click.pass_context
def mounts_remove(context: click.Context, mounts: list[str]) -> None:
"""
Remove a bind-mounted folder
The bind-mounted folder will be removed from the ``MOUNTS`` project setting.
"""
removed_mounts = []
for mount in mounts:
if not bindmount.parse_explicit_mount(mount):
# Path is implicit: expand it
mount = os.path.abspath(os.path.expanduser(mount))
removed_mounts.append(mount)
fmt.echo_info(f"Removing bind-mount: {mount}")
context.invoke(
config_save, remove_vars=[("MOUNTS", mount) for mount in removed_mounts]
)
mounts_command.add_command(mounts_list)
mounts_command.add_command(mounts_add)
mounts_command.add_command(mounts_remove)

29 tutor/commands/params.py Normal file
View File

@ -0,0 +1,29 @@
import typing as t
import click
from tutor import config as tutor_config
from tutor import hooks
from tutor.types import Config
class ConfigLoaderParam(click.ParamType):
"""
Convenient click.ParamType subclass that automatically loads the user configuration during shell auto-completion.
"""
def __init__(self) -> None:
self.root: t.Optional[str] = None
self._config: t.Optional[Config] = None
@hooks.Actions.PROJECT_ROOT_READY.add()
def _on_root_ready(root: str) -> None:
self.root = root
@property
def config(self) -> Config:
if self.root is None:
return {}
if self._config is None:
self._config = tutor_config.load_full(self.root)
return self._config
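The pattern at work here, loading an expensive resource at most once and only on first access, can be sketched in isolation (`LazyConfig` and the sample value are made up for illustration):

```python
import typing as t

# Standalone sketch of the lazy-loading pattern used by ConfigLoaderParam:
# the loader runs at most once, and only when the config is first needed.
class LazyConfig:
    def __init__(self, loader: t.Callable[[], dict]) -> None:
        self._loader = loader
        self._config: t.Optional[dict] = None

    @property
    def config(self) -> dict:
        if self._config is None:
            self._config = self._loader()
        return self._config

calls = []
lazy = LazyConfig(lambda: calls.append(1) or {"LMS_HOST": "www.myopenedx.com"})
print(lazy.config["LMS_HOST"], lazy.config["LMS_HOST"], len(calls))
# → www.myopenedx.com www.myopenedx.com 1
```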

View File

@ -1,2 +1,10 @@
# Note: don't forget to change this when we upgrade from olive
OPENEDX_RELEASE_NAMES = ["ironwood", "juniper", "koa", "lilac", "maple", "nutmeg"]
# Note: don't forget to change this when we upgrade from palm
OPENEDX_RELEASE_NAMES = [
"ironwood",
"juniper",
"koa",
"lilac",
"maple",
"nutmeg",
"olive",
]

View File

@ -39,3 +39,11 @@ def upgrade_from_nutmeg(context: click.Context, config: Config) -> None:
context.obj.job_runner(config).run_task(
"lms", "./manage.py lms compute_grades -v1 --all_courses"
)
PALM_RENAME_ORA2_FOLDER_COMMAND = """
if stat '/openedx/data/ora2/SET-ME-PLEASE (ex. bucket-name)' 2> /dev/null; then
echo "Renaming ora2 folder..."
mv '/openedx/data/ora2/SET-ME-PLEASE (ex. bucket-name)' /openedx/data/ora2/openedxuploads
fi
"""

View File

@ -39,6 +39,10 @@ def upgrade_from(context: click.Context, from_release: str) -> None:
common_upgrade.upgrade_from_nutmeg(context, config)
running_release = "olive"
if running_release == "olive":
upgrade_from_olive(context, config)
running_release = "palm"
def upgrade_from_ironwood(context: click.Context, config: Config) -> None:
click.echo(fmt.title("Upgrading from Ironwood"))
@ -47,18 +51,8 @@ def upgrade_from_ironwood(context: click.Context, config: Config) -> None:
click.echo(fmt.title("Stopping any existing platform"))
context.invoke(compose.stop)
if not config["RUN_MONGODB"]:
fmt.echo_info(
"You are not running MongoDB (RUN_MONGODB=false). It is your "
"responsibility to upgrade your MongoDb instance to v3.6. There is "
"nothing left to do to upgrade from Ironwood to Juniper."
)
return
upgrade_mongodb(context, config, "3.4", "3.4")
context.invoke(compose.stop)
upgrade_mongodb(context, config, "3.6", "3.6")
context.invoke(compose.stop)
def upgrade_from_juniper(context: click.Context, config: Config) -> None:
@ -146,12 +140,31 @@ def upgrade_from_maple(context: click.Context, config: Config) -> None:
)
def upgrade_from_olive(context: click.Context, config: Config) -> None:
# Note that we need to exec because the ora2 folder is not bind-mounted in the job
# services.
context.invoke(compose.start, detach=True, services=["lms"])
context.invoke(
compose.execute,
args=["lms", "sh", "-e", "-c", common_upgrade.PALM_RENAME_ORA2_FOLDER_COMMAND],
)
upgrade_mongodb(context, config, "4.2.17", "4.2")
upgrade_mongodb(context, config, "4.4.22", "4.4")
def upgrade_mongodb(
context: click.Context,
config: Config,
to_docker_version: str,
to_compatibility_version: str,
) -> None:
if not config["RUN_MONGODB"]:
fmt.echo_info(
f"You are not running MongoDB (RUN_MONGODB=false). It is your "
f"responsibility to upgrade your MongoDb instance to {to_docker_version}."
)
return
click.echo(fmt.title(f"Upgrading MongoDb to v{to_docker_version}"))
# Note that the DOCKER_IMAGE_MONGODB value is never saved, because we only save the
# environment, not the configuration.

View File

@ -38,32 +38,14 @@ def upgrade_from(context: click.Context, from_release: str) -> None:
common_upgrade.upgrade_from_nutmeg(context, config)
running_release = "olive"
if running_release == "olive":
upgrade_from_olive(context.obj, config)
running_release = "palm"
def upgrade_from_ironwood(config: Config) -> None:
if not config["RUN_MONGODB"]:
fmt.echo_info(
"You are not running MongoDB (RUN_MONGODB=false). It is your "
"responsibility to upgrade your MongoDb instance to v3.6. There is "
"nothing left to do to upgrade from Ironwood."
)
return
message = """Automatic release upgrade is unsupported in Kubernetes. To upgrade from Ironwood, you should upgrade
your MongoDb cluster from v3.2 to v3.6. You should run something similar to:
# Upgrade from v3.2 to v3.4
tutor k8s stop
tutor config save --set DOCKER_IMAGE_MONGODB=mongo:3.4.24
tutor k8s start
tutor k8s exec mongodb mongo --eval 'db.adminCommand({ setFeatureCompatibilityVersion: "3.4" })'
# Upgrade from v3.4 to v3.6
tutor k8s stop
tutor config save --set DOCKER_IMAGE_MONGODB=mongo:3.6.18
tutor k8s start
tutor k8s exec mongodb mongo --eval 'db.adminCommand({ setFeatureCompatibilityVersion: "3.6" })'
tutor config save --unset DOCKER_IMAGE_MONGODB"""
fmt.echo_info(message)
upgrade_mongodb(config, "3.4.24", "3.4")
upgrade_mongodb(config, "3.6.18", "3.6")
def upgrade_from_juniper(config: Config) -> None:
@ -87,23 +69,7 @@ your MySQL database from v5.6 to v5.7. You should run something similar to:
def upgrade_from_koa(config: Config) -> None:
if not config["RUN_MONGODB"]:
fmt.echo_info(
"You are not running MongoDB (RUN_MONGODB=false). It is your "
"responsibility to upgrade your MongoDb instance to v4.0. There is "
"nothing left to do to upgrade to Lilac from Koa."
)
return
message = """Automatic release upgrade is unsupported in Kubernetes. To upgrade from Koa to Lilac, you should upgrade
your MongoDb cluster from v3.6 to v4.0. You should run something similar to:
tutor k8s stop
tutor config save --set DOCKER_IMAGE_MONGODB=mongo:4.0.25
tutor k8s start
tutor k8s exec mongodb mongo --eval 'db.adminCommand({ setFeatureCompatibilityVersion: "4.0" })'
tutor config save --unset DOCKER_IMAGE_MONGODB
"""
fmt.echo_info(message)
upgrade_mongodb(config, "4.0.25", "4.0")
def upgrade_from_lilac(config: Config) -> None:
@ -173,3 +139,42 @@ def upgrade_from_maple(context: Context, config: Config) -> None:
k8s.kubectl_exec(
config, "cms", ["sh", "-e", "-c", "./manage.py cms simulate_publish"]
)
def upgrade_from_olive(context: Context, config: Config) -> None:
# Note that we need to exec because the ora2 folder is not bind-mounted in the job
# services.
k8s.kubectl_apply(
context.root,
"--selector",
"app.kubernetes.io/name=lms",
)
k8s.wait_for_deployment_ready(config, "lms")
k8s.kubectl_exec(
config,
"lms",
["sh", "-e", "-c", common_upgrade.PALM_RENAME_ORA2_FOLDER_COMMAND],
)
upgrade_mongodb(config, "4.2.17", "4.2")
upgrade_mongodb(config, "4.4.22", "4.4")
def upgrade_mongodb(
config: Config, to_docker_version: str, to_compatibility_version: str
) -> None:
if not config["RUN_MONGODB"]:
fmt.echo_info(
"You are not running MongoDB (RUN_MONGODB=false). It is your "
f"responsibility to upgrade your MongoDb instance to {to_docker_version}."
)
return
message = f"""Automatic release upgrade is unsupported in Kubernetes. You should manually upgrade
your MongoDb cluster to {to_docker_version} by running something similar to:
tutor k8s stop
tutor config save --set DOCKER_IMAGE_MONGODB=mongo:{to_docker_version}
tutor k8s start
tutor k8s exec mongodb mongo --eval 'db.adminCommand({{ setFeatureCompatibilityVersion: "{to_compatibility_version}" }})'
tutor config save --unset DOCKER_IMAGE_MONGODB
"""
fmt.echo_info(message)

View File

@ -304,7 +304,7 @@ def _remove_plugin_config_overrides_on_unload(
# Find the configuration entries that were overridden by the plugin and
# remove them from the current config
for key, _value in hooks.Filters.CONFIG_OVERRIDES.iterate_from_context(
hooks.Contexts.APP(plugin).name
hooks.Contexts.app(plugin).name
):
value = config.pop(key, None)
value = env.render_unknown(config, value)

View File

@ -1,15 +1,13 @@
import typing as t
from .actions import Action, ActionTemplate
from .actions import clear_all as _clear_all_actions
from .contexts import Context, ContextTemplate
from .filters import Filter, FilterTemplate
from .filters import clear_all as _clear_all_filters
from .actions import Action
from .contexts import Context
from .filters import Filter
def clear_all(context: t.Optional[str] = None) -> None:
"""
Clear both actions and filters.
"""
_clear_all_actions(context=context)
_clear_all_filters(context=context)
Action.clear_all(context=context)
Filter.clear_all(context=context)

View File

@ -5,6 +5,7 @@ __license__ = "Apache 2.0"
import sys
import typing as t
from weakref import WeakSet
from typing_extensions import ParamSpec
@ -44,32 +45,25 @@ class Action(t.Generic[T]):
This is the typical action lifecycle:
1. Create an action with method :py:meth:`get`.
2. Add callbacks with method :py:meth:`add`.
3. Call the action callbacks with method :py:meth:`do`.
1. Create an action with ``Action()``.
2. Add callbacks with :py:meth:`add`.
3. Call the action callbacks with :py:meth:`do`.
The ``P`` type parameter of the Action class corresponds to the expected signature of
The ``T`` type parameter of the Action class corresponds to the expected signature of
the action callbacks. For instance, ``Action[[str, int]]`` means that the action
callbacks are expected to take two arguments: one string and one integer.
This strong typing makes it easier for plugin developers to quickly check whether they are adding and calling action callbacks correctly.
This strong typing makes it easier for plugin developers to quickly check whether
they are adding and calling action callbacks correctly.
"""
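The create/add/do lifecycle can be sketched standalone with a toy `MiniAction` class, without the priority and context machinery of the real implementation:

```python
from __future__ import annotations

import typing as t

# Toy version of the lifecycle: create an action, register callbacks with
# a decorator, then trigger them all with do().
class MiniAction:
    def __init__(self) -> None:
        self.callbacks: list[t.Callable[..., None]] = []

    def add(self) -> t.Callable:
        def decorator(func: t.Callable[..., None]) -> t.Callable[..., None]:
            self.callbacks.append(func)
            return func
        return decorator

    def do(self, *args: t.Any, **kwargs: t.Any) -> None:
        for callback in self.callbacks:
            callback(*args, **kwargs)

action = MiniAction()
received = []

@action.add()
def on_ready(name: str) -> None:
    received.append(name)

action.do("demo")
print(received)
# → ['demo']
```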
INDEX: dict[str, "Action[t.Any]"] = {}
# Keep a weak reference to all created filters. This allows us to clear them when
# necessary.
INSTANCES: WeakSet[Action[t.Any]] = WeakSet()
def __init__(self, name: str) -> None:
self.name = name
def __init__(self) -> None:
self.callbacks: list[ActionCallback[T]] = []
def __repr__(self) -> str:
return f"{self.__class__.__name__}('{self.name}')"
@classmethod
def get(cls, name: str) -> "Action[t.Any]":
"""
Get an existing action with the given name from the index, or create one.
"""
return cls.INDEX.setdefault(name, cls(name))
self.INSTANCES.add(self)
def add(
self, priority: t.Optional[int] = None
@ -144,7 +138,7 @@ class Action(t.Generic[T]):
)
except:
sys.stderr.write(
f"Error applying action '{self.name}': func={callback.func} contexts={callback.contexts}'\n"
f"Error applying action: func={callback.func} contexts={callback.contexts}'\n"
)
raise
@ -168,98 +162,10 @@ class Action(t.Generic[T]):
if not callback.is_in_context(context)
]
class ActionTemplate(t.Generic[T]):
"""
Action templates are for actions for which the name needs to be formatted
before the action can be applied.
Action templates can generate different :py:class:`Action` objects for which the
name matches a certain template.
Templated actions must be formatted with ``(*args)`` before being applied. For example::
action_template = ActionTemplate("namespace:{0}")
# Return the action named "namespace:name"
my_action = action_template("name")
@my_action.add()
def my_callback():
...
my_action.do()
"""
def __init__(self, name: str):
self.template = name
def __repr__(self) -> str:
return f"{self.__class__.__name__}('{self.template}')"
def __call__(self, *args: t.Any, **kwargs: t.Any) -> Action[T]:
name = self.template.format(*args, **kwargs)
action: Action[T] = Action.get(name)
return action
# Syntactic sugar
get = Action.get
def get_template(name: str) -> ActionTemplate[t.Any]:
"""
Create an action with a template name.
"""
return ActionTemplate(name)
def add(
name: str, priority: t.Optional[int] = None
) -> t.Callable[[ActionCallbackFunc[T]], ActionCallbackFunc[T]]:
"""
Decorator to add a callback action associated to a name.
"""
return get(name).add(priority=priority)
def do(
name: str,
*args: T.args,
**kwargs: T.kwargs,
) -> None:
"""
Run action callbacks associated to a name/context.
"""
action: Action[T] = Action.get(name)
action.do(*args, **kwargs)
def do_from_context(
context: str,
name: str,
*args: T.args,
**kwargs: T.kwargs,
) -> None:
"""
Same as :py:func:`do` but only run the callbacks that were created in a given context.
"""
action: Action[T] = Action.get(name)
action.do_from_context(context, *args, **kwargs)
def clear_all(context: t.Optional[str] = None) -> None:
"""
Clear any previously defined filter with the given context.
This will call :py:func:`clear` with all action names.
"""
for name in Action.INDEX:
clear(name, context=context)
def clear(name: str, context: t.Optional[str] = None) -> None:
"""
Clear any previously defined action with the given name and context.
"""
Action.get(name).clear(context=context)
@classmethod
def clear_all(cls, context: t.Optional[str] = None) -> None:
"""
Clear any previously defined action with the given context.
"""
for action in cls.INSTANCES:
action.clear(context)

View File

@ -43,22 +43,6 @@ class Context:
Context.CURRENT.pop()
class ContextTemplate:
"""
Context templates are for filters for which the name needs to be formatted
before the filter can be applied.
"""
def __init__(self, name: str):
self.template = name
def __repr__(self) -> str:
return f"{self.__class__.__name__}('{self.template}')"
def __call__(self, *args: t.Any, **kwargs: t.Any) -> Context:
return Context(self.template.format(*args, **kwargs))
class Contextualized:
"""
This is a simple class to store the current context in hooks.

View File

@ -5,6 +5,7 @@ __license__ = "Apache 2.0"
import sys
import typing as t
from weakref import WeakSet
from typing_extensions import Concatenate, ParamSpec
@ -43,16 +44,16 @@ class Filter(t.Generic[T1, T2]):
This is the typical filter lifecycle:
1. Create an action with method :py:meth:`get`.
2. Add callbacks with method :py:meth:`add`.
1. Create a filter with ``Filter()``.
2. Add callbacks with :py:meth:`add`.
3. Call the filter callbacks with method :py:meth:`apply`.
The result of each callback is passed as the first argument to the next one. Thus,
the type of the first argument must match the callback return type.
The `T` and `P` type parameters of the Filter class correspond to the expected
signature of the filter callbacks. `T` is the type of the first argument (and thus
the return value type as well) and `P` is the signature of the other arguments.
The ``T1`` and ``T2`` type parameters of the Filter class correspond to the expected
signature of the filter callbacks. ``T1`` is the type of the first argument (and thus
the return value type as well) and ``T2`` is the signature of the other arguments.
For instance, `Filter[str, [int]]` means that the filter callbacks are expected to
take two arguments: one string and one integer. Each callback must then return a
@@ -62,21 +63,13 @@ class Filter(t.Generic[T1, T2]):
they are adding and calling filter callbacks correctly.
"""
INDEX: dict[str, "Filter[t.Any, t.Any]"] = {}
# Keep a weak reference to all created filters. This allows us to clear them when
# necessary.
INSTANCES: WeakSet[Filter[t.Any, t.Any]] = WeakSet()
def __init__(self, name: str) -> None:
self.name = name
def __init__(self) -> None:
self.callbacks: list[FilterCallback[T1, T2]] = []
def __repr__(self) -> str:
return f"{self.__class__.__name__}('{self.name}')"
@classmethod
def get(cls, name: str) -> "Filter[t.Any, t.Any]":
"""
Get an existing action with the given name from the index, or create one.
"""
return cls.INDEX.setdefault(name, cls(name))
self.INSTANCES.add(self)
def add(
self, priority: t.Optional[int] = None
@@ -156,7 +149,7 @@ class Filter(t.Generic[T1, T2]):
)
except:
sys.stderr.write(
f"Error applying filter '{self.name}': func={callback.func} contexts={callback.contexts}'\n"
f"Error applying filter: func={callback.func} contexts={callback.contexts}'\n"
)
raise
return value
@@ -171,6 +164,14 @@ class Filter(t.Generic[T1, T2]):
if not callback.is_in_context(context)
]
@classmethod
def clear_all(cls, context: t.Optional[str] = None) -> None:
"""
Clear any previously defined filter with the given context.
"""
for filtre in cls.INSTANCES:
filtre.clear(context)
# The methods below are specific to filters which take lists as first arguments
def add_item(
self: "Filter[list[L], T2]", item: L, priority: t.Optional[int] = None
@@ -205,8 +206,8 @@ class Filter(t.Generic[T1, T2]):
``add_item`` multiple times on the same filter, then you probably want to use a
single call to ``add_items`` instead.
:param name: filter name.
:param list[object] items: items that will be appended to the resulting list.
:param int priority: optional priority.
Usage::
@@ -225,15 +226,10 @@ class Filter(t.Generic[T1, T2]):
my_filter.add_item("item2")
"""
# Unfortunately we have to type-ignore this line. If not, mypy complains with:
#
# Argument 1 has incompatible type "Callable[[Arg(List[E], 'values'), **T2], List[E]]"; expected "Callable[[List[E], **T2], List[E]]"
# This is likely because "callback" has named arguments: "values". Consider marking them positional-only
#
# But we are unable to mark arguments positional-only (by adding / after values arg) in Python 3.7.
# Get rid of this statement after Python 3.7 EOL.
@self.add(priority=priority) # type: ignore
def callback(values: list[L], *_args: T2.args, **_kwargs: T2.kwargs) -> list[L]:
@self.add(priority=priority)
def callback(
values: list[L], /, *_args: T2.args, **_kwargs: T2.kwargs
) -> list[L]:
return values + items
def iterate(
@@ -266,114 +262,3 @@ class Filter(t.Generic[T1, T2]):
Same as :py:func:`Filter.iterate` but apply only callbacks from a given context.
"""
yield from self.apply_from_context(context, [], *args, **kwargs)
class FilterTemplate(t.Generic[T1, T2]):
"""
Filter templates are for filters for which the name needs to be formatted
before the filter can be applied.
Similar to :py:class:`tutor.core.hooks.ActionTemplate`, filter templates are used to generate
:py:class:`Filter` objects for which the name matches a certain template.
Templated filters must be formatted with ``(*args)`` before being applied. For example::
filter_template = FilterTemplate("namespace:{0}")
named_filter = filter_template("name")
@named_filter.add()
def my_callback(x: int) -> int:
...
named_filter.apply(42)
"""
def __init__(self, name: str):
self.template = name
def __repr__(self) -> str:
return f"{self.__class__.__name__}('{self.template}')"
def __call__(self, *args: t.Any, **kwargs: t.Any) -> Filter[T1, T2]:
return get(self.template.format(*args, **kwargs))
# Syntactic sugar
get = Filter.get
def get_template(name: str) -> FilterTemplate[t.Any, t.Any]:
"""
Create a filter with a template name.
"""
return FilterTemplate(name)
def add(
name: str, priority: t.Optional[int] = None
) -> t.Callable[[FilterCallbackFunc[T1, T2]], FilterCallbackFunc[T1, T2]]:
"""
Decorator for functions that will be applied to a single named filter.
"""
return Filter.get(name).add(priority=priority)
def add_item(name: str, item: T1, priority: t.Optional[int] = None) -> None:
"""
Convenience function to add a single item to a filter that returns a list of items.
"""
get(name).add_item(item, priority=priority)
def add_items(name: str, items: list[T1], priority: t.Optional[int] = None) -> None:
"""
Convenience decorator to add multiple item to a filter that returns a list of items.
"""
get(name).add_items(items, priority=priority)
def iterate(name: str, *args: t.Any, **kwargs: t.Any) -> t.Iterator[T1]:
"""
Convenient function to iterate over the results of a filter result list.
"""
yield from iterate_from_context(None, name, *args, **kwargs)
def iterate_from_context(
context: t.Optional[str], name: str, *args: t.Any, **kwargs: t.Any
) -> t.Iterator[T1]:
yield from Filter.get(name).iterate_from_context(context, *args, **kwargs)
def apply(name: str, value: T1, *args: t.Any, **kwargs: t.Any) -> T1:
"""
Apply all declared filters to a single value, passing along the additional arguments.
"""
return apply_from_context(None, name, value, *args, **kwargs)
def apply_from_context(
context: t.Optional[str], name: str, value: T1, *args: T2.args, **kwargs: T2.kwargs
) -> T1:
"""
Same as :py:func:`apply` but only run the callbacks that were created in a given context.
"""
filtre: Filter[T1, T2] = Filter.get(name)
return filtre.apply_from_context(context, value, *args, **kwargs)
def clear_all(context: t.Optional[str] = None) -> None:
"""
Clear any previously defined filter with the given context.
"""
for name in Filter.INDEX:
clear(name, context=context)
def clear(name: str, context: t.Optional[str] = None) -> None:
"""
Clear any previously defined filter with the given name and context.
"""
filtre = Filter.INDEX.get(name)
if filtre:
filtre.clear(context=context)
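The chaining semantics documented above (each callback receives the previous callback's return value, and list filters start from an empty list) can be sketched with a minimal, hypothetical filter class. This is not the real ``Filter`` implementation; it only mirrors the behaviour:

```python
class MiniFilter:
    """Hypothetical mini-filter: apply() threads a value through callbacks."""

    def __init__(self):
        self.callbacks = []

    def add(self):
        # Decorator that registers a callback on this filter.
        def decorator(func):
            self.callbacks.append(func)
            return func

        return decorator

    def apply(self, value, *args, **kwargs):
        # Each callback's return value becomes the next callback's input.
        for callback in self.callbacks:
            value = callback(value, *args, **kwargs)
        return value

    def add_item(self, item):
        # Convenience for list-valued filters: append a single element.
        @self.add()
        def callback(values, /, *_args, **_kwargs):
            return values + [item]

    def iterate(self, *args, **kwargs):
        # Iteration starts from an empty list, like Filter.iterate.
        yield from self.apply([], *args, **kwargs)


my_filter = MiniFilter()
my_filter.add_item("item1")
my_filter.add_item("item2")
result = list(my_filter.iterate())  # ["item1", "item2"]
```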

View File

@@ -54,6 +54,7 @@ def _prepare_environment() -> None:
("HOST_USER_ID", utils.get_user_id()),
("TUTOR_APP", __app__.replace("-", "_")),
("TUTOR_VERSION", __version__),
("is_buildkit_enabled", utils.is_buildkit_enabled),
],
)
@@ -161,7 +162,7 @@ class Renderer:
try:
patches.append(self.render_str(patch))
except exceptions.TutorError:
fmt.echo_error(f"Error rendering patch '{name}': {patch}")
fmt.echo_error(f"Error rendering patch '{name}':\n{patch}")
raise
rendered = separator.join(patches)
if rendered:
@@ -169,7 +170,10 @@ class Renderer:
return rendered
def render_str(self, text: str) -> str:
template = self.environment.from_string(text)
try:
template = self.environment.from_string(text)
except jinja2.exceptions.TemplateSyntaxError as e:
raise exceptions.TutorError(f"Template syntax error: {e.args[0]}")
return self.__render(template)
def render_template(self, template_name: str) -> t.Union[str, bytes]:
@@ -450,6 +454,7 @@ def get_release(version: str) -> str:
"13": "maple",
"14": "nutmeg",
"15": "olive",
"16": "palm",
}[version.split(".", maxsplit=1)[0]]
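The mapping above resolves the Open edX release name from the major component of a Tutor version string. A standalone sketch of the same lookup (the module-level constant name is illustrative):

```python
# Resolve the Open edX release name by keying on the major version number.
RELEASES = {
    "13": "maple",
    "14": "nutmeg",
    "15": "olive",
    "16": "palm",
}


def get_release(version: str) -> str:
    # "16.0.0" -> "16" -> "palm"
    return RELEASES[version.split(".", maxsplit=1)[0]]
```

For instance, ``get_release("16.0.0")`` returns ``"palm"``.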
@@ -521,7 +526,7 @@ def _delete_plugin_templates(plugin: str, root: str, _config: Config) -> None:
Delete plugin env files on unload.
"""
targets = hooks.Filters.ENV_TEMPLATE_TARGETS.iterate_from_context(
hooks.Contexts.APP(plugin).name
hooks.Contexts.app(plugin).name
)
for src, dst in targets:
path = pathjoin(root, dst.replace("/", os.sep), src.replace("/", os.sep))

View File

@@ -7,20 +7,11 @@ from __future__ import annotations
# The Tutor plugin system is licensed under the terms of the Apache 2.0 license.
__license__ = "Apache 2.0"
from typing import Any, Callable, Iterable
from typing import Any, Callable, Iterable, Literal, Union
import click
from tutor.core.hooks import (
Action,
ActionTemplate,
Context,
ContextTemplate,
Filter,
FilterTemplate,
actions,
filters,
)
from tutor.core.hooks import Action, Context, Filter
from tutor.types import Config
__all__ = ["Actions", "Filters", "Contexts"]
@@ -64,9 +55,7 @@ class Actions:
#: :parameter str root: project root.
#: :parameter dict config: project configuration.
#: :parameter str name: docker-compose project name.
COMPOSE_PROJECT_STARTED: Action[[str, Config, str]] = actions.get(
"compose:project:started"
)
COMPOSE_PROJECT_STARTED: Action[[str, Config, str]] = Action()
#: Called whenever the core project is ready to run. This action is called as soon
#: as possible. This is the right time to discover plugins, for instance. In
@@ -81,14 +70,14 @@ class Actions:
#: developers probably don't have to implement this action themselves.
#:
#: This action does not have any parameter.
CORE_READY: Action[[]] = actions.get("core:ready")
CORE_READY: Action[[]] = Action()
#: Called just before triggering the job tasks of any ``... do <job>`` command.
#:
#: :parameter str job: job name.
#: :parameter args: job positional arguments.
#: :parameter kwargs: job named arguments.
DO_JOB: Action[[str, Any]] = actions.get("do:job")
DO_JOB: Action[[str, Any]] = Action()
#: Triggered when a single plugin needs to be loaded. Only plugins that have previously been
#: discovered can be loaded (see :py:data:`CORE_READY`).
@@ -100,14 +89,14 @@ class Actions:
#: Most plugin developers will not have to implement this action themselves, unless
#: they want to perform a specific action at the moment the plugin is enabled.
#:
#: This action does not have any parameter.
PLUGIN_LOADED: ActionTemplate[[]] = actions.get_template("plugins:loaded:{0}")
#: :parameter str plugin: plugin name.
PLUGIN_LOADED: Action[[str]] = Action()
#: Triggered after all plugins have been loaded. At this point the list of loaded
#: plugins may be obtained from the :py:data:`Filters.PLUGINS_LOADED` filter.
#:
#: This action does not have any parameter.
PLUGINS_LOADED: Action[[]] = actions.get("plugins:loaded")
PLUGINS_LOADED: Action[[]] = Action()
#: Triggered when a single plugin is unloaded. Only plugins that have previously been
#: loaded can be unloaded (see :py:data:`PLUGIN_LOADED`).
@@ -120,12 +109,12 @@ class Actions:
#: :parameter str plugin: plugin name.
#: :parameter str root: absolute path to the project root.
#: :parameter config: full project configuration
PLUGIN_UNLOADED: Action[str, str, Config] = actions.get("plugins:unloaded")
PLUGIN_UNLOADED: Action[str, str, Config] = Action()
#: Called as soon as we have access to the Tutor project root.
#:
#: :parameter str root: absolute path to the project root.
PROJECT_ROOT_READY: Action[str] = actions.get("project:root:ready")
PROJECT_ROOT_READY: Action[str] = Action()
class Filters:
@@ -174,11 +163,20 @@ class Filters:
:py:class:`tutor.core.hooks.Filter` API.
"""
#: Hostnames of user-facing applications.
#:
#: So far this filter is only used to inform the user of application urls after they have run ``launch``.
#:
#: :parameter list[str] hostnames: items from this list are templates that will be
#: rendered by the environment.
#: :parameter str context_name: either "local" or "dev", depending on the calling context.
APP_PUBLIC_HOSTS: Filter[list[str], [Literal["local", "dev"]]] = Filter()
#: List of command line interface (CLI) commands.
#:
#: :parameter list commands: commands are instances of ``click.Command``. They will
#: all be added as subcommands of the main ``tutor`` command.
CLI_COMMANDS: Filter[list[click.Command], []] = filters.get("cli:commands")
CLI_COMMANDS: Filter[list[click.Command], []] = Filter()
#: List of `do ...` commands.
#:
@@ -188,7 +186,7 @@ class Filters:
#: in the "service" container, both in local, dev and k8s mode.
CLI_DO_COMMANDS: Filter[
list[Callable[[Any], Iterable[tuple[str, str]]]], []
] = filters.get("cli:commands:do")
] = Filter()
#: List of initialization tasks (scripts) to be run in the `init` job. This job
#: includes all database migrations, setting up, etc. To run some tasks before or
@@ -197,63 +195,22 @@ class Filters:
#: :parameter list[tuple[str, str]] tasks: list of ``(service, task)`` tuples. Each
#: task is essentially a bash script to be run in the "service" container. Scripts
#: may contain Jinja markup, similar to templates.
CLI_DO_INIT_TASKS: Filter[list[tuple[str, str]], []] = filters.get(
"cli:commands:do:init"
)
#: DEPRECATED use :py:data:`CLI_DO_INIT_TASKS` instead.
#:
#: List of commands to be executed during initialization. These commands typically
#: include database migrations, setting feature flags, etc.
#:
#: :parameter list[tuple[str, tuple[str, ...]]] tasks: list of ``(service, path)`` tasks.
#:
#: - ``service`` is the name of the container in which the task will be executed.
#: - ``path`` is a tuple that corresponds to a template relative path.
#: Example: ``("myplugin", "hooks", "myservice", "pre-init")`` (see :py:data:`IMAGES_BUILD`).
#: The command to execute will be read from that template, after it is rendered.
COMMANDS_INIT: Filter[list[tuple[str, tuple[str, ...]]], []] = filters.get(
"commands:init"
)
#: DEPRECATED use :py:data:`CLI_DO_INIT_TASKS` instead with a lower priority score.
#:
#: List of commands to be executed prior to initialization. These commands are run even
#: before the mysql databases are created and the migrations are applied.
#:
#: :parameter list[tuple[str, tuple[str, ...]]] tasks: list of ``(service, path)``
#: tasks. (see :py:data:`COMMANDS_INIT`).
COMMANDS_PRE_INIT: Filter[list[tuple[str, tuple[str, ...]]], []] = filters.get(
"commands:pre-init"
)
#: Same as :py:data:`COMPOSE_LOCAL_JOBS_TMP` but for the development environment.
COMPOSE_DEV_JOBS_TMP: Filter[Config, []] = filters.get("compose:dev-jobs:tmp")
#: Same as :py:data:`COMPOSE_LOCAL_TMP` but for the development environment.
COMPOSE_DEV_TMP: Filter[Config, []] = filters.get("compose:dev:tmp")
#: Same as :py:data:`COMPOSE_LOCAL_TMP` but for jobs
COMPOSE_LOCAL_JOBS_TMP: Filter[Config, []] = filters.get("compose:local-jobs:tmp")
#: Contents of the (local|dev)/docker-compose.tmp.yml files that will be generated at
#: runtime. This is used for instance to bind-mount folders from the host (see
#: :py:data:`COMPOSE_MOUNTS`)
#:
#: :parameter dict[str, ...] docker_compose_tmp: values which will be serialized to local/docker-compose.tmp.yml.
#: Keys and values will be rendered before saving, such that you may include ``{{ ... }}`` statements.
COMPOSE_LOCAL_TMP: Filter[Config, []] = filters.get("compose:local:tmp")
CLI_DO_INIT_TASKS: Filter[list[tuple[str, str]], []] = Filter()
#: List of folders to bind-mount in docker-compose containers, either in ``tutor local`` or ``tutor dev``.
#:
#: Many ``tutor local`` and ``tutor dev`` commands support ``--mounts`` options
#: that allow plugins to define custom behaviour at runtime. For instance
#: ``--mount=/path/to/edx-platform`` would cause this host folder to be
#: bind-mounted in different containers (lms, lms-worker, cms, cms-worker) at the
#: This filter is for processing values of the ``MOUNTS`` setting such as::
#:
#: tutor mounts add /path/to/edx-platform
#:
#: In this example, this host folder would be bind-mounted in different containers
#: (lms, lms-worker, cms, cms-worker, lms-job, cms-job) at the
#: /openedx/edx-platform location. Plugin developers may implement this filter to
#: define custom behaviour when mounting folders that relate to their plugins. For
#: instance, the ecommerce plugin may process the ``--mount=/path/to/ecommerce``
#: option.
#: instance, the ecommerce plugin may process the ``/path/to/ecommerce`` value.
#:
#: To also bind-mount these folder at build time, implement also the
#: :py:data:`IMAGES_BUILD_MOUNTS` filter.
#:
#: :parameter list[tuple[str, str]] mounts: each item is a ``(service, path)``
#: tuple, where ``service`` is the name of the docker-compose service and ``path`` is
@@ -262,8 +219,8 @@ class Filters:
#: the ``path`` because it will fail on Windows.
#: :parameter str name: basename of the host-mounted folder. In the example above,
#: this is "edx-platform". When implementing this filter you should check this name to
#: conditionnally add mounts.
COMPOSE_MOUNTS: Filter[list[tuple[str, str]], [str]] = filters.get("compose:mounts")
#: conditionally add mounts.
COMPOSE_MOUNTS: Filter[list[tuple[str, str]], [str]] = Filter()
#: Declare new default configuration settings that don't necessarily have to be saved in the user
#: ``config.yml`` file. Default settings may be overridden with ``tutor config save --set=...``, in which
@@ -271,44 +228,34 @@ class Filters:
#:
#: :parameter list[tuple[str, ...]] items: list of (name, value) new settings. All
#: new entries must be prefixed with the plugin name in all-caps.
CONFIG_DEFAULTS: Filter[list[tuple[str, Any]], []] = filters.get("config:defaults")
CONFIG_DEFAULTS: Filter[list[tuple[str, Any]], []] = Filter()
#: Modify existing settings, either from Tutor core or from other plugins. Beware not to override any
#: important setting, such as passwords! Overridden setting values will be printed to stdout when the plugin
#: is disabled, such that users have a chance to back them up.
#:
#: :parameter list[tuple[str, ...]] items: list of (name, value) settings.
CONFIG_OVERRIDES: Filter[list[tuple[str, Any]], []] = filters.get(
"config:overrides"
)
CONFIG_OVERRIDES: Filter[list[tuple[str, Any]], []] = Filter()
#: Declare unique configuration settings that must be saved in the user ``config.yml`` file. This is where
#: you should declare passwords and randomly-generated values that are different from one environment to the next.
#:
#: :parameter list[tuple[str, ...]] items: list of (name, value) new settings. All
#: names must be prefixed with the plugin name in all-caps.
CONFIG_UNIQUE: Filter[list[tuple[str, Any]], []] = filters.get("config:unique")
CONFIG_UNIQUE: Filter[list[tuple[str, Any]], []] = Filter()
#: Use this filter to modify the ``docker build`` command. For instance, to replace
#: the ``build`` subcommand by ``buildx build``.
#:
#: :parameter list[str] command: the full build command, including options and
#: arguments. Note that these arguments do not include the leading ``docker`` command.
DOCKER_BUILD_COMMAND: Filter[list[str], []] = filters.get("docker:build:command")
DOCKER_BUILD_COMMAND: Filter[list[str], []] = Filter()
#: List of patches that should be inserted in a given location of the templates. The
#: filter name must be formatted with the patch name.
#: This filter is not so convenient and plugin developers will probably
#: prefer :py:data:`ENV_PATCHES`.
#:
#: :parameter list[str] patches: each item is the unrendered patch content.
ENV_PATCH: FilterTemplate[list[str], []] = filters.get_template("env:patches:{0}")
#: List of patches that should be inserted in a given location of the templates. This is very similar to :py:data:`ENV_PATCH`, except that the patch is added as a ``(name, content)`` tuple.
#: List of patches that should be inserted in a given location of the templates.
#:
#: :parameter list[tuple[str, str]] patches: pairs of (name, content) tuples. Use this
#: filter to modify the Tutor templates.
ENV_PATCHES: Filter[list[tuple[str, str]], []] = filters.get("env:patches")
ENV_PATCHES: Filter[list[tuple[str, str]], []] = Filter()
#: List of template path patterns to be ignored when rendering templates to the project root. By default, we ignore:
#:
@@ -319,13 +266,13 @@ class Filters:
#: Ignored patterns are overridden by include patterns; see :py:data:`ENV_PATTERNS_INCLUDE`.
#:
#: :parameter list[str] patterns: list of regular expression patterns. E.g: ``r"(.*/)?ignored_file_name(/.*)?"``.
ENV_PATTERNS_IGNORE: Filter[list[str], []] = filters.get("env:patterns:ignore")
ENV_PATTERNS_IGNORE: Filter[list[str], []] = Filter()
#: List of template path patterns to be included when rendering templates to the project root.
#: Patterns from this list will take priority over the patterns from :py:data:`ENV_PATTERNS_IGNORE`.
#:
#: :parameter list[str] patterns: list of regular expression patterns. See :py:data:`ENV_PATTERNS_IGNORE`.
ENV_PATTERNS_INCLUDE: Filter[list[str], []] = filters.get("env:patterns:include")
ENV_PATTERNS_INCLUDE: Filter[list[str], []] = Filter()
#: List of `Jinja2 filters <https://jinja.palletsprojects.com/en/latest/templates/#filters>`__ that will be
#: available in templates. Jinja2 filters are basically functions that can be used
@@ -356,16 +303,14 @@ class Filters:
#:
#: :parameter filters: list of (name, function) tuples. The function signature
#: should correspond to its usage in templates.
ENV_TEMPLATE_FILTERS: Filter[
list[tuple[str, Callable[..., Any]]], []
] = filters.get("env:templates:filters")
ENV_TEMPLATE_FILTERS: Filter[list[tuple[str, Callable[..., Any]]], []] = Filter()
#: List of all template root folders.
#:
#: :parameter list[str] templates_root: absolute paths to folders which contain templates.
#: The templates in these folders will then be accessible by the environment
#: renderer using paths that are relative to their template root.
ENV_TEMPLATE_ROOTS: Filter[list[str], []] = filters.get("env:templates:roots")
ENV_TEMPLATE_ROOTS: Filter[list[str], []] = Filter()
#: List of template source/destination targets.
#:
@@ -374,33 +319,64 @@ class Filters:
#: is a path relative to the environment root. For instance: adding ``("c/d",
#: "a/b")`` to the filter will cause all files from "c/d" to be rendered to the ``a/b/c/d``
#: subfolder.
ENV_TEMPLATE_TARGETS: Filter[list[tuple[str, str]], []] = filters.get(
"env:templates:targets"
)
ENV_TEMPLATE_TARGETS: Filter[list[tuple[str, str]], []] = Filter()
#: List of extra variables to be included in all templates.
#:
#: Out of the box, this filter will include all configuration settings, but also the following:
#:
#: - `HOST_USER_ID`: the numerical ID of the user on the host.
#: - `TUTOR_APP`: the app name ("tutor" by default), used to determine the dev/local project names.
#: - `TUTOR_VERSION`: the current version of Tutor.
#: - `is_buildkit_enabled`: a boolean function that indicates whether BuildKit is available on the host.
#: - `iter_values_named`: a function to iterate on variables that start or end with a given string.
#: - `iter_mounts`: a function that yields compose-compatible bind-mounts for any given service.
#: - `patch`: a function to incorporate extra content into a template.
#:
#: :parameter filters: list of (name, value) tuples.
ENV_TEMPLATE_VARIABLES: Filter[list[tuple[str, Any]], []] = filters.get(
"env:templates:variables"
)
ENV_TEMPLATE_VARIABLES: Filter[list[tuple[str, Any]], []] = Filter()
#: List of images to be built when we run ``tutor images build ...``.
#:
#: :parameter list[tuple[str, tuple[str, ...], str, tuple[str, ...]]] tasks: list of ``(name, path, tag, args)`` tuples.
#:
#: - ``name`` is the name of the image, as in ``tutor images build myimage``.
#: - ``path`` is the relative path to the folder that contains the Dockerfile.
#: - ``path`` is the relative path to the folder that contains the Dockerfile. This can be either a string or a tuple of strings.
#: For instance ``("myplugin", "build", "myservice")`` indicates that the template will be read from
#: ``myplugin/build/myservice/Dockerfile``
#: ``myplugin/build/myservice/Dockerfile``. This argument value would be equivalent to "myplugin/build/myservice".
#: - ``tag`` is the Docker tag that will be applied to the image. It will be
#: rendered at runtime with the user configuration. Thus, the image tag could
#: be ``"{{ DOCKER_REGISTRY }}/myimage:{{ TUTOR_VERSION }}"``.
#: - ``args`` is a list of arguments that will be passed to ``docker build ...``.
#: :parameter Config config: user configuration.
IMAGES_BUILD: Filter[
list[tuple[str, tuple[str, ...], str, tuple[str, ...]]], [Config]
] = filters.get("images:build")
list[tuple[str, Union[str, tuple[str, ...]], str, tuple[str, ...]]], [Config]
] = Filter()
#: List of image names which must be built prior to launching the platform. These
#: images will be built on launch, in "dev" and "local" mode (but not in Kubernetes).
#:
#: :parameter list[str] names: list of image names.
#: :parameter str context_name: either "local" or "dev", depending on the calling context.
IMAGES_BUILD_REQUIRED: Filter[list[str], [Literal["local", "dev"]]] = Filter()
#: List of host directories to be automatically bind-mounted in Docker images at
#: build time. For instance, this is useful to build Docker images using a custom
#: repository on the host.
#:
#: This filter works similarly to the :py:data:`COMPOSE_MOUNTS` filter, with a few differences.
#:
#: :parameter list[tuple[str, str]] mounts: each item is a pair of ``(name, value)``
#: used to generate a build context at build time. See the corresponding `Docker
#: documentation <https://docs.docker.com/engine/reference/commandline/buildx_build/#build-context>`__.
#: The following option will be added to the ``docker buildx build`` command:
#: ``--build-context={name}={value}``. If the Dockerfile contains a "name" stage, then
#: that stage will be replaced by the corresponding directory on the host.
#: :parameter str name: full path to the host-mounted folder. As opposed to
#: :py:data:`COMPOSE_MOUNTS`, this is not just the basename, but the full path. When
#: implementing this filter you should check this path (for instance: with
#: ``os.path.basename(path)``) to conditionally add mounts.
IMAGES_BUILD_MOUNTS: Filter[list[tuple[str, str]], [str]] = Filter()
#: List of images to be pulled when we run ``tutor images pull ...``.
#:
@@ -409,11 +385,11 @@ class Filters:
#: - ``name`` is the name of the image, as in ``tutor images pull myimage``.
#: - ``tag`` is the Docker tag that will be applied to the image. (see :py:data:`IMAGES_BUILD`).
#: :parameter Config config: user configuration.
IMAGES_PULL: Filter[list[tuple[str, str]], [Config]] = filters.get("images:pull")
IMAGES_PULL: Filter[list[tuple[str, str]], [Config]] = Filter()
#: List of images to be pushed when we run ``tutor images push ...``.
#: Parameters are the same as for :py:data:`IMAGES_PULL`.
IMAGES_PUSH: Filter[list[tuple[str, str]], [Config]] = filters.get("images:push")
IMAGES_PUSH: Filter[list[tuple[str, str]], [Config]] = Filter()
#: List of plugin indexes that are loaded when we run `tutor plugins update`. By
#: default, the plugin indexes are stored in the user configuration. This filter makes
@@ -421,13 +397,13 @@ class Filters:
#:
#: :parameter list[str] indexes: list of index URLs. Remember that entries further
#: in the list have priority.
PLUGIN_INDEXES: Filter[list[str], []] = filters.get("plugins:indexes:entries")
PLUGIN_INDEXES: Filter[list[str], []] = Filter()
#: Filter to modify the url of a plugin index url. This is convenient to alias
#: plugin indexes with a simple name, such as "main" or "contrib".
#:
#: :parameter str url: value passed to the `index add/remove` commands.
PLUGIN_INDEX_URL: Filter[str, []] = filters.get("plugins:indexes:url")
PLUGIN_INDEX_URL: Filter[str, []] = Filter()
#: When installing an entry from a plugin index, the plugin data from the index will
#: go through this filter before it is passed along to `pip install`. Thus, this is a
@@ -436,17 +412,13 @@ class Filters:
#:
#: :parameter dict[str, str] plugin: the dict entry from the plugin index. It
#: includes an additional "index" key which contains the plugin index URL.
PLUGIN_INDEX_ENTRY_TO_INSTALL: Filter[dict[str, str], []] = filters.get(
"plugins:indexes:entries:install"
)
PLUGIN_INDEX_ENTRY_TO_INSTALL: Filter[dict[str, str], []] = Filter()
#: Information about each installed plugin, including its version.
#: Keep this information to a single line for easier parsing by 3rd-party scripts.
#:
#: :param list[tuple[str, str]] versions: each pair is a ``(plugin, info)`` tuple.
PLUGINS_INFO: Filter[list[tuple[str, str]], []] = filters.get(
"plugins:installed:versions"
)
PLUGINS_INFO: Filter[list[tuple[str, str]], []] = Filter()
#: List of installed plugins. In order to be added to this list, a plugin must first
#: be discovered (see :py:data:`Actions.CORE_READY`).
@@ -454,13 +426,13 @@ class Filters:
#: :param list[str] plugins: plugin developers probably don't have to implement this
#: filter themselves, but they can apply it to check for the presence of other
#: plugins.
PLUGINS_INSTALLED: Filter[list[str], []] = filters.get("plugins:installed")
PLUGINS_INSTALLED: Filter[list[str], []] = Filter()
#: List of loaded plugins.
#:
#: :param list[str] plugins: plugin developers probably don't have to modify this
#: filter themselves, but they can apply it to check whether other plugins are enabled.
PLUGINS_LOADED: Filter[list[str], []] = filters.get("plugins:loaded")
PLUGINS_LOADED: Filter[list[str], []] = Filter()
class Contexts:
@@ -480,10 +452,16 @@ class Contexts:
hooks.Filters.MY_FILTER.apply_from_context(hooks.Contexts.SOME_CONTEXT.name)
"""
#: We enter this context whenever we create hooks for a specific application or
#: plugin. For instance, plugin "myplugin" will be enabled within the "app:myplugin"
#: context.
APP = ContextTemplate("app:{0}")
#: Dictionary of name/contexts. Each value is a context that we enter whenever we
#: create hooks for a specific application or plugin. For instance, plugin
#: "myplugin" will be enabled within the "app:myplugin" context.
APP: dict[str, Context] = {}
@classmethod
def app(cls, name: str) -> Context:
if name not in cls.APP:
cls.APP[name] = Context(f"app:{name}")
return cls.APP[name]
#: Plugins will be installed and enabled within this context.
PLUGINS = Context("plugins")
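The ``Contexts.app`` classmethod above memoizes one ``Context`` per plugin name, so repeated calls return the same object. A minimal sketch with a hypothetical ``Context`` stand-in:

```python
from __future__ import annotations


class Context:
    """Hypothetical stand-in: only stores its name."""

    def __init__(self, name: str) -> None:
        self.name = name


class Contexts:
    # Dictionary of name/contexts, populated on demand.
    APP: dict[str, Context] = {}

    @classmethod
    def app(cls, name: str) -> Context:
        # Memoized: the same name always yields the same Context object.
        if name not in cls.APP:
            cls.APP[name] = Context(f"app:{name}")
        return cls.APP[name]


first = Contexts.app("myplugin")
second = Contexts.app("myplugin")  # same object, not a new Context
```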

View File

@@ -1,17 +1,12 @@
from tutor import fmt, hooks, utils
from tutor.types import Config, get_typed
def get_tag(config: Config, name: str) -> str:
key = "DOCKER_IMAGE_" + name.upper().replace("-", "_")
return get_typed(config, key, str)
def build(path: str, tag: str, *args: str) -> None:
fmt.echo_info(f"Building image {tag}")
command = hooks.Filters.DOCKER_BUILD_COMMAND.apply(
["build", "-t", tag, *args, path]
)
build_command = ["build", f"--tag={tag}", *args, path]
if utils.is_buildkit_enabled():
build_command.insert(0, "buildx")
command = hooks.Filters.DOCKER_BUILD_COMMAND.apply(build_command)
utils.docker(*command)
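The command construction above can be sketched as a pure function. The helper name and the ``buildkit_enabled`` flag are illustrative, standing in for ``utils.is_buildkit_enabled()``:

```python
from __future__ import annotations


def make_build_command(
    path: str, tag: str, *args: str, buildkit_enabled: bool = False
) -> list[str]:
    # Assemble the docker CLI argument list (without the leading "docker").
    command = ["build", f"--tag={tag}", *args, path]
    if buildkit_enabled:
        # Prepend "buildx" so the final command is "docker buildx build ...".
        command.insert(0, "buildx")
    return command


cmd = make_build_command(".", "myimage:latest", "--no-cache", buildkit_enabled=True)
# cmd == ["buildx", "build", "--tag=myimage:latest", "--no-cache", "."]
```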

View File

@@ -12,19 +12,24 @@ from tutor.types import Config, get_typed
# Import modules to trigger hook creation
from . import v0, v1
# Cache of plugin patches, for efficiency
ENV_PATCHES_DICT: dict[str, list[str]] = {}
@hooks.Actions.PLUGINS_LOADED.add()
def _convert_plugin_patches() -> None:
"""
Some patches are added as (name, content) tuples with the ENV_PATCHES
filter. We convert these patches to add them to ENV_PATCH. This makes it
filter. We convert these patches to add them to ENV_PATCHES_DICT. This makes it
easier for end users to declare patches, and it's more performant.
This action is run after plugins have been loaded.
"""
ENV_PATCHES_DICT.clear()
patches: t.Iterable[tuple[str, str]] = hooks.Filters.ENV_PATCHES.iterate()
for name, content in patches:
hooks.Filters.ENV_PATCH(name).add_item(content)
ENV_PATCHES_DICT.setdefault(name, [])
ENV_PATCHES_DICT[name].append(content)
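The conversion above groups ``(name, content)`` pairs into lists keyed by patch name. A standalone sketch with made-up patch names and contents:

```python
# Group (name, content) patch pairs into a dict of lists, preserving the
# order in which contents were declared for each name.
patches = [
    ("openedx-common-settings", "FEATURE_A = True"),
    ("caddyfile", "respond /health 200"),
    ("openedx-common-settings", "FEATURE_B = False"),
]

env_patches_dict: dict = {}
for name, content in patches:
    env_patches_dict.setdefault(name, []).append(content)
```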
def is_installed(name: str) -> bool:
@@ -89,8 +94,8 @@ def load(name: str) -> None:
if not is_installed(name):
raise exceptions.TutorError(f"plugin '{name}' is not installed.")
with hooks.Contexts.PLUGINS.enter():
with hooks.Contexts.APP(name).enter():
hooks.Actions.PLUGIN_LOADED(name).do()
with hooks.Contexts.app(name).enter():
hooks.Actions.PLUGIN_LOADED.do(name)
hooks.Filters.PLUGINS_LOADED.add_item(name)
@@ -109,14 +114,14 @@ def iter_patches(name: str) -> t.Iterator[str]:
"""
Yields: patch (str)
"""
yield from hooks.Filters.ENV_PATCH(name).iterate()
yield from ENV_PATCHES_DICT.get(name, [])
def unload(plugin: str) -> None:
"""
Remove all filters and actions associated to a given plugin.
"""
hooks.clear_all(context=hooks.Contexts.APP(plugin).name)
hooks.clear_all(context=hooks.Contexts.app(plugin).name)
@hooks.Actions.PLUGIN_UNLOADED.add(priority=50)

View File

@@ -60,7 +60,10 @@ class BasePlugin:
hooks.Filters.PLUGINS_INFO.add_item((self.name, self._version() or ""))
# Create actions and filters on load
hooks.Actions.PLUGIN_LOADED(self.name).add()(self.__load)
@hooks.Actions.PLUGIN_LOADED.add()
def _load_plugin(name: str) -> None:
if name == self.name:
self.__load()
def __load(self) -> None:
"""

View File

@@ -43,16 +43,17 @@ def discover_module(path: str) -> None:
hooks.Filters.PLUGINS_INFO.add_item((name, path))
# Import module on enable
load_plugin_action = hooks.Actions.PLUGIN_LOADED(name)
@load_plugin_action.add()
def load() -> None:
# https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly
spec = importlib.util.spec_from_file_location("tutor.plugin.v1.{name}", path)
if spec is None or spec.loader is None:
raise ValueError("Plugin could not be found: {path}")
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
@hooks.Actions.PLUGIN_LOADED.add()
def load(plugin_name: str) -> None:
if name == plugin_name:
# https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly
spec = importlib.util.spec_from_file_location(
f"tutor.plugin.v1.{name}", path
)
if spec is None or spec.loader is None:
raise ValueError(f"Plugin could not be found: {path}")
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
def discover_package(entrypoint: pkg_resources.EntryPoint) -> None:
@@ -70,8 +71,7 @@ def discover_package(entrypoint: pkg_resources.EntryPoint) -> None:
hooks.Filters.PLUGINS_INFO.add_item((name, entrypoint.dist.version))
# Import module on enable
load_plugin_action = hooks.Actions.PLUGIN_LOADED(name)
@load_plugin_action.add()
def load() -> None:
entrypoint.load()
@hooks.Actions.PLUGIN_LOADED.add()
def load(plugin_name: str) -> None:
if name == plugin_name:
entrypoint.load()
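Both v1 loaders above follow the same refactor: instead of one action object per plugin, a single shared `PLUGIN_LOADED` action fires with the plugin name, and each callback filters on the name it cares about. A standalone, hypothetical sketch of that pattern:

```python
# Hypothetical sketch: one shared action, per-plugin filtering in each callback
_plugin_loaded_callbacks: list = []

def on_plugin_loaded(func):
    # Register a callback on the single shared action
    _plugin_loaded_callbacks.append(func)
    return func

def do_plugin_loaded(name):
    # Fire the action with the plugin name; every callback sees it
    for func in _plugin_loaded_callbacks:
        func(name)

loaded = []

@on_plugin_loaded
def load(plugin_name):
    # Each callback ignores every plugin but its own
    if plugin_name == "myplugin":
        loaded.append(plugin_name)

do_plugin_loaded("otherplugin")  # filtered out
do_plugin_loaded("myplugin")
assert loaded == ["myplugin"]
```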

View File

@@ -18,19 +18,36 @@ def load_all(stream: str) -> t.Iterator[t.Any]:
def dump_all(documents: t.Sequence[t.Any], fileobj: TextIOWrapper) -> None:
yaml.safe_dump_all(documents, stream=fileobj, default_flow_style=False)
yaml.safe_dump_all(
documents, stream=fileobj, default_flow_style=False, allow_unicode=True
)
def dump(content: t.Any, fileobj: TextIOWrapper) -> None:
yaml.dump(content, stream=fileobj, default_flow_style=False)
yaml.dump(content, stream=fileobj, default_flow_style=False, allow_unicode=True)
def dumps(content: t.Any) -> str:
result = yaml.dump(content, default_flow_style=False)
result = yaml.dump(content, default_flow_style=False, allow_unicode=True)
assert isinstance(result, str)
return result
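The `allow_unicode=True` flag added above keeps non-ASCII characters readable in the dumped YAML instead of escaping them. The standard library's `json` module exposes the same trade-off through `ensure_ascii`, used here as a stdlib stand-in to illustrate the behavior:

```python
import json

# Default behaviour escapes non-ASCII, like PyYAML without allow_unicode
assert json.dumps({"name": "café"}) == '{"name": "caf\\u00e9"}'
# ensure_ascii=False keeps characters as-is, like allow_unicode=True
assert json.dumps({"name": "café"}, ensure_ascii=False) == '{"name": "café"}'
```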
def str_format(content: t.Any) -> str:
"""
Convert a value to str.
This is almost like json, but more convenient for printing to the standard output.
"""
if content is True:
return "true"
if content is False:
return "false"
if content is None:
return "null"
return str(content)
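A quick check of the mapping implemented by `str_format` above: `True`/`False`/`None` render as YAML/JSON literals, everything else falls through to `str`:

```python
def str_format(content):
    # Mirror of the function above, for a standalone check
    if content is True:
        return "true"
    if content is False:
        return "false"
    if content is None:
        return "null"
    return str(content)

assert [str_format(v) for v in (True, False, None, 8000, "lms")] == [
    "true", "false", "null", "8000", "lms"
]
```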
def parse(v: t.Union[str, t.IO[str]]) -> t.Any:
"""
Parse a yaml-formatted string.

View File

@@ -118,8 +118,10 @@ GRADES_DOWNLOAD = {
},
}
# ORA2
ORA2_FILEUPLOAD_BACKEND = "filesystem"
ORA2_FILEUPLOAD_ROOT = "/openedx/data/ora2"
FILE_UPLOAD_STORAGE_BUCKET_NAME = "openedxuploads"
ORA2_FILEUPLOAD_CACHE_NAME = "ora2-storage"
# Change syslog-based loggers which don't work inside docker containers
@@ -135,18 +137,21 @@ LOGGING["handlers"]["tracking"] = {
"formatter": "standard",
}
LOGGING["loggers"]["tracking"]["handlers"] = ["console", "local", "tracking"]
# Silence some loggers (note: we must attempt to get rid of these when upgrading from one release to the next)
LOGGING["loggers"]["blockstore.apps.bundles.storage"] = {"handlers": ["console"], "level": "WARNING"}
# These warnings are visible in simple commands and init tasks
import warnings
from django.utils.deprecation import RemovedInDjango40Warning, RemovedInDjango41Warning
warnings.filterwarnings("ignore", category=RemovedInDjango40Warning)
warnings.filterwarnings("ignore", category=RemovedInDjango41Warning)
warnings.filterwarnings("ignore", category=DeprecationWarning, module="lms.djangoapps.course_wiki.plugins.markdownedx.wiki_plugin")
warnings.filterwarnings("ignore", category=DeprecationWarning, module="wiki.plugins.links.wiki_plugin")
warnings.filterwarnings("ignore", category=DeprecationWarning, module="boto.plugin")
warnings.filterwarnings("ignore", category=DeprecationWarning, module="botocore.vendored.requests.packages.urllib3._collections")
warnings.filterwarnings("ignore", category=DeprecationWarning, module="storages.backends.s3boto")
warnings.filterwarnings("ignore", category=DeprecationWarning, module="openedx.core.types.admin")
warnings.filterwarnings("ignore", category=DeprecationWarning, module="pkg_resources")
warnings.filterwarnings("ignore", category=DeprecationWarning, module="fs")
warnings.filterwarnings("ignore", category=DeprecationWarning, module="fs.opener")
SILENCED_SYSTEM_CHECKS = ["2_0.W001", "fields.W903"]
# Email

View File

@@ -4,7 +4,7 @@
LOGIN_REDIRECT_WHITELIST = ["{{ CMS_HOST }}"]
# Better layout of honor code/tos links during registration
REGISTRATION_EXTRA_FIELDS["terms_of_service"] = "required"
REGISTRATION_EXTRA_FIELDS["terms_of_service"] = "hidden"
REGISTRATION_EXTRA_FIELDS["honor_code"] = "hidden"
# Fix media files paths
@@ -36,11 +36,6 @@ CACHES["staticfiles"] = {
"BACKEND": "django.core.cache.backends.locmem.LocMemCache",
"LOCATION": "staticfiles_lms",
}
CACHES["ora2-storage"] = {
"KEY_PREFIX": "ora2-storage",
"BACKEND": "django_redis.cache.RedisCache",
"LOCATION": "redis://{% if REDIS_USERNAME and REDIS_PASSWORD %}{{ REDIS_USERNAME }}:{{ REDIS_PASSWORD }}{% endif %}@{{ REDIS_HOST }}:{{ REDIS_PORT }}/{{ OPENEDX_CACHE_REDIS_DB }}",
}
# Create folders if necessary
for folder in [DATA_DIR, LOG_DIR, MEDIA_ROOT, STATIC_ROOT_BASE, ORA2_FILEUPLOAD_ROOT]:

View File

@@ -0,0 +1,8 @@
#! /bin/sh
setowner $OPENEDX_USER_ID /mounts/lms /mounts/cms /mounts/openedx
{% if RUN_ELASTICSEARCH %}setowner 1000 /mounts/elasticsearch{% endif %}
{% if RUN_MONGODB %}setowner 999 /mounts/mongodb{% endif %}
{% if RUN_MYSQL %}setowner 999 /mounts/mysql{% endif %}
{% if RUN_REDIS %}setowner 1000 /mounts/redis{% endif %}
{{ patch("local-docker-compose-permissions-command") }}

View File

@@ -1,9 +1,12 @@
{% if is_buildkit_enabled() %}# syntax=docker/dockerfile:1.4{% endif %}
###### Minimal image with base system requirements for most stages
FROM docker.io/ubuntu:20.04 as minimal
LABEL maintainer="Overhang.io <contact@overhang.io>"
ENV DEBIAN_FRONTEND=noninteractive
RUN apt update && \
RUN {% if is_buildkit_enabled() %}--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked{% endif %} \
apt update && \
apt install -y build-essential curl git language-pack-en
ENV LC_ALL en_US.UTF-8
{{ patch("openedx-dockerfile-minimal") }}
@@ -11,14 +14,23 @@ ENV LC_ALL en_US.UTF-8
###### Install python with pyenv in /opt/pyenv and create virtualenv in /openedx/venv
FROM minimal as python
# https://github.com/pyenv/pyenv/wiki/Common-build-problems#prerequisites
RUN apt update && \
RUN {% if is_buildkit_enabled() %}--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked {% endif %}apt update && \
apt install -y libssl-dev zlib1g-dev libbz2-dev \
libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev \
xz-utils tk-dev libffi-dev liblzma-dev python-openssl git
ARG PYTHON_VERSION=3.8.12
# Install pyenv
# https://www.python.org/downloads/
# https://github.com/pyenv/pyenv/releases
ARG PYTHON_VERSION=3.8.15
ENV PYENV_ROOT /opt/pyenv
RUN git clone https://github.com/pyenv/pyenv $PYENV_ROOT --branch v2.2.2 --depth 1
RUN git clone https://github.com/pyenv/pyenv $PYENV_ROOT --branch v2.3.17 --depth 1
# Install Python
RUN $PYENV_ROOT/bin/pyenv install $PYTHON_VERSION
# Create virtualenv
RUN $PYENV_ROOT/versions/$PYTHON_VERSION/bin/python -m venv /openedx/venv
###### Checkout edx-platform code
@@ -40,9 +52,15 @@ RUN git config --global user.email "tutor@overhang.io" \
# Patch edx-platform
{%- endif %}
{# Example: RUN curl -fsSL https://github.com/openedx/edx-platform/commit/<GITSHA1> | git am #}
{# Example: RUN curl -fsSL https://github.com/openedx/edx-platform/commit/<GITSHA1>.patch | git am #}
{{ patch("openedx-dockerfile-post-git-checkout") }}
##### Empty layer with just the repo at the root.
# This is useful when overriding the build context with a host repo:
# docker build --build-context edx-platform=/path/to/edx-platform
FROM scratch as edx-platform
COPY --from=code /openedx/edx-platform /
###### Download extra locales to /openedx/locale/contrib/locale
FROM minimal as locales
ARG OPENEDX_I18N_VERSION={{ OPENEDX_COMMON_VERSION }}
@@ -57,36 +75,39 @@ RUN cd /tmp \
FROM python as python-requirements
ENV PATH /openedx/venv/bin:${PATH}
ENV VIRTUAL_ENV /openedx/venv/
ENV XDG_CACHE_HOME /openedx/.cache
RUN apt update && apt install -y software-properties-common libmysqlclient-dev libxmlsec1-dev libgeos-dev
RUN {% if is_buildkit_enabled() %}--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked {% endif %}apt update \
&& apt install -y software-properties-common libmysqlclient-dev libxmlsec1-dev libgeos-dev
# Install the right version of pip/setuptools
# https://pypi.org/project/setuptools/
# https://pypi.org/project/pip/
# https://pypi.org/project/wheel/
RUN pip install setuptools==65.5.1 pip==22.3.1 wheel==0.38.4
RUN {% if is_buildkit_enabled() %}--mount=type=cache,target=/openedx/.cache/pip,sharing=shared {% endif %}pip install \
# https://pypi.org/project/setuptools/
# https://pypi.org/project/pip/
# https://pypi.org/project/wheel/
setuptools==67.6.1 pip==23.0.1 wheel==0.40.0
# Install base requirements
COPY --from=code /openedx/edx-platform/requirements/edx/base.txt /tmp/base.txt
RUN pip install -r /tmp/base.txt
RUN {% if is_buildkit_enabled() %}--mount=type=bind,from=edx-platform,source=/requirements/edx/base.txt,target=/openedx/edx-platform/requirements/edx/base.txt \
--mount=type=cache,target=/openedx/.cache/pip,sharing=shared {% endif %}pip install -r /openedx/edx-platform/requirements/edx/base.txt
# Install django-redis for using redis as a django cache
# https://pypi.org/project/django-redis/
RUN pip install django-redis==5.2.0
# Install uwsgi
# https://pypi.org/project/uWSGI/
RUN pip install uwsgi==2.0.21
# Install extra requirements
RUN {% if is_buildkit_enabled() %}--mount=type=cache,target=/openedx/.cache/pip,sharing=shared {% endif %}pip install \
# Use redis as a django cache https://pypi.org/project/django-redis/
django-redis==5.2.0 \
# uwsgi server https://pypi.org/project/uWSGI/
uwsgi==2.0.21
{{ patch("openedx-dockerfile-post-python-requirements") }}
# Install private requirements: this is useful for installing custom xblocks.
COPY ./requirements/ /openedx/requirements
RUN cd /openedx/requirements/ \
RUN {% if is_buildkit_enabled() %}--mount=type=cache,target=/openedx/.cache/pip,sharing=shared {% endif %}cd /openedx/requirements/ \
&& touch ./private.txt \
&& pip install -r ./private.txt
{% for extra_requirements in OPENEDX_EXTRA_PIP_REQUIREMENTS %}RUN pip install '{{ extra_requirements }}'
{% for extra_requirements in OPENEDX_EXTRA_PIP_REQUIREMENTS %}RUN {% if is_buildkit_enabled() %}--mount=type=cache,target=/openedx/.cache/pip,sharing=shared {% endif %}pip install '{{ extra_requirements }}'
{% endfor %}
###### Install nodejs with nodeenv in /openedx/nodeenv
@@ -95,23 +116,24 @@ ENV PATH /openedx/nodeenv/bin:/openedx/venv/bin:${PATH}
# Install nodeenv with the version provided by edx-platform
# https://github.com/openedx/edx-platform/blob/master/requirements/edx/base.txt
# https://github.com/pyenv/pyenv/releases
RUN pip install nodeenv==1.7.0
RUN nodeenv /openedx/nodeenv --node=16.14.0 --prebuilt
# Install nodejs requirements
ARG NPM_REGISTRY={{ NPM_REGISTRY }}
COPY --from=code /openedx/edx-platform/package.json /openedx/edx-platform/package.json
COPY --from=code /openedx/edx-platform/package-lock.json /openedx/edx-platform/package-lock.json
WORKDIR /openedx/edx-platform
RUN npm clean-install --verbose --registry=$NPM_REGISTRY
RUN {% if is_buildkit_enabled() %}--mount=type=bind,from=edx-platform,source=/package.json,target=/openedx/edx-platform/package.json \
--mount=type=bind,from=edx-platform,source=/package-lock.json,target=/openedx/edx-platform/package-lock.json \
--mount=type=cache,target=/root/.npm,sharing=shared {% endif %}npm clean-install --no-audit --registry=$NPM_REGISTRY
###### Production image with system and python requirements
FROM minimal as production
# Install system requirements
RUN apt update && \
apt install -y gettext gfortran graphviz graphviz-dev libffi-dev libfreetype6-dev libgeos-dev libjpeg8-dev liblapack-dev libmysqlclient-dev libpng-dev libsqlite3-dev libxmlsec1-dev lynx mysql-client ntp pkg-config rdfind && \
rm -rf /var/lib/apt/lists/*
RUN {% if is_buildkit_enabled() %}--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked {% endif %}apt update \
&& apt install -y gettext gfortran graphviz graphviz-dev libffi-dev libfreetype6-dev libgeos-dev libjpeg8-dev liblapack-dev libmysqlclient-dev libpng-dev libsqlite3-dev libxmlsec1-dev lynx mysql-client ntp pkg-config rdfind
# From then on, run as unprivileged "app" user
# Note that this must always be different from root (APP_USER_ID=0)
@@ -121,14 +143,17 @@ RUN useradd --home-dir /openedx --create-home --shell /bin/bash --uid ${APP_USER
USER ${APP_USER_ID}
# https://hub.docker.com/r/powerman/dockerize/tags
COPY --from=docker.io/powerman/dockerize:0.19.0 /usr/local/bin/dockerize /usr/local/bin/dockerize
COPY --chown=app:app --from=code /openedx/edx-platform /openedx/edx-platform
COPY {% if is_buildkit_enabled() %}--link {% endif %}--from=docker.io/powerman/dockerize:0.19.0 /usr/local/bin/dockerize /usr/local/bin/dockerize
COPY --chown=app:app --from=edx-platform / /openedx/edx-platform
COPY --chown=app:app --from=locales /openedx/locale /openedx/locale
COPY --chown=app:app --from=python /opt/pyenv /opt/pyenv
COPY --chown=app:app --from=python-requirements /openedx/venv /openedx/venv
COPY --chown=app:app --from=python-requirements /openedx/requirements /openedx/requirements
COPY --chown=app:app --from=nodejs-requirements /openedx/nodeenv /openedx/nodeenv
COPY --chown=app:app --from=nodejs-requirements /openedx/edx-platform/node_modules /openedx/edx-platform/node_modules
COPY --chown=app:app --from=nodejs-requirements /openedx/edx-platform/node_modules /openedx/node_modules
# Symlink node_modules such that we can bind-mount the edx-platform repository
RUN ln -s /openedx/node_modules /openedx/edx-platform/node_modules
ENV PATH /openedx/venv/bin:./node_modules/.bin:/openedx/nodeenv/bin:${PATH}
ENV VIRTUAL_ENV /openedx/venv/
@@ -212,16 +237,16 @@ FROM production as development
# Install useful system requirements (as root)
USER root
RUN apt update && \
apt install -y vim iputils-ping dnsutils telnet \
&& rm -rf /var/lib/apt/lists/*
RUN {% if is_buildkit_enabled() %}--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked {% endif %}apt update && \
apt install -y vim iputils-ping dnsutils telnet
USER app
# Install dev python requirements
RUN pip install -r requirements/edx/development.txt
RUN {% if is_buildkit_enabled() %}--mount=type=cache,target=/openedx/.cache/pip,sharing=shared {% endif %}pip install -r requirements/edx/development.txt
# https://pypi.org/project/ipdb/
# https://pypi.org/project/ipython
RUN pip install ipdb==0.13.9 ipython==8.7.0
RUN {% if is_buildkit_enabled() %}--mount=type=cache,target=/openedx/.cache/pip,sharing=shared {% endif %}pip install ipdb==0.13.13 ipython==8.12.0
# Add ipdb as default PYTHONBREAKPOINT
ENV PYTHONBREAKPOINT=ipdb.set_trace

View File

@@ -1 +1 @@
EDX_PLATFORM_REVISION: olive
EDX_PLATFORM_REVISION: palm

View File

@@ -12,13 +12,13 @@ DOCKER_COMPOSE_VERSION: "3.7"
DOCKER_REGISTRY: "docker.io/"
DOCKER_IMAGE_OPENEDX: "{{ DOCKER_REGISTRY }}overhangio/openedx:{{ TUTOR_VERSION }}"
DOCKER_IMAGE_OPENEDX_DEV: "openedx-dev:{{ TUTOR_VERSION }}"
DOCKER_IMAGE_CADDY: "docker.io/caddy:2.6.3"
DOCKER_IMAGE_ELASTICSEARCH: "docker.io/elasticsearch:7.10.1"
DOCKER_IMAGE_MONGODB: "docker.io/mongo:4.2.24"
DOCKER_IMAGE_MYSQL: "docker.io/mysql:5.7.35"
DOCKER_IMAGE_CADDY: "docker.io/caddy:2.6.4"
DOCKER_IMAGE_ELASTICSEARCH: "docker.io/elasticsearch:7.17.9"
DOCKER_IMAGE_MONGODB: "docker.io/mongo:4.4.22"
DOCKER_IMAGE_MYSQL: "docker.io/mysql:8.0.33"
DOCKER_IMAGE_PERMISSIONS: "{{ DOCKER_REGISTRY }}overhangio/openedx-permissions:{{ TUTOR_VERSION }}"
DOCKER_IMAGE_REDIS: "docker.io/redis:6.2.6"
DOCKER_IMAGE_SMTP: "docker.io/devture/exim-relay:4.95-r0-2"
DOCKER_IMAGE_REDIS: "docker.io/redis:7.0.11"
DOCKER_IMAGE_SMTP: "docker.io/devture/exim-relay:4.96-r1-0"
EDX_PLATFORM_REPOSITORY: "https://github.com/openedx/edx-platform.git"
EDX_PLATFORM_VERSION: "{{ OPENEDX_COMMON_VERSION }}"
ELASTICSEARCH_HOST: "elasticsearch"
@@ -43,6 +43,7 @@ MONGODB_USERNAME: ""
MONGODB_PASSWORD: ""
MONGODB_REPLICA_SET: ""
MONGODB_USE_SSL: false
MOUNTS: []
OPENEDX_AWS_ACCESS_KEY: ""
OPENEDX_AWS_SECRET_ACCESS_KEY: ""
OPENEDX_CACHE_REDIS_DB: 1
@@ -53,7 +54,7 @@ OPENEDX_MYSQL_DATABASE: "openedx"
OPENEDX_MYSQL_USERNAME: "openedx"
OPENEDX_COMMON_VERSION: "master"
OPENEDX_EXTRA_PIP_REQUIREMENTS:
- "openedx-scorm-xblock>=15.0.0,<16.0.0"
- "openedx-scorm-xblock>=16.0.0,<17.0.0"
MYSQL_HOST: "mysql"
MYSQL_PORT: 3306
MYSQL_ROOT_USERNAME: "root"
@@ -64,9 +65,7 @@ REDIS_HOST: "redis"
REDIS_PORT: 6379
REDIS_USERNAME: ""
REDIS_PASSWORD: ""
RUN_CMS: true
RUN_ELASTICSEARCH: true
RUN_LMS: true
RUN_MONGODB: true
RUN_MYSQL: true
RUN_REDIS: true

View File

@@ -3,12 +3,6 @@ version: "{{ DOCKER_COMPOSE_VERSION }}"
x-openedx-service:
&openedx-service
image: {{ DOCKER_IMAGE_OPENEDX_DEV }}
build:
context: ../build/openedx/
target: development
args:
# Note that we never build the openedx-dev image with root user ID, as it would simply fail.
APP_USER_ID: "{{ HOST_USER_ID or 1000 }}"
stdin_open: true
tty: true
volumes:
@@ -22,11 +16,9 @@ x-openedx-service:
- ../build/openedx/requirements:/openedx/requirements
services:
lms-permissions:
command: ["{{ HOST_USER_ID }}", "/openedx/data", "/openedx/media"]
cms-permissions:
command: ["{{ HOST_USER_ID }}", "/openedx/data", "/openedx/media"]
permissions:
environment:
OPENEDX_USER_ID: "{{ HOST_USER_ID }}"
lms:
<<: *openedx-service

View File

@@ -57,7 +57,6 @@ spec:
persistentVolumeClaim:
claimName: caddy
{%- endif %}
{% if RUN_CMS %}
---
apiVersion: apps/v1
kind: Deployment
@@ -167,8 +166,6 @@ spec:
- name: config
configMap:
name: openedx-config
{% endif %}
{% if RUN_LMS %}
---
apiVersion: apps/v1
kind: Deployment
@@ -278,7 +275,6 @@ spec:
- name: config
configMap:
name: openedx-config
{% endif %}
{% if RUN_ELASTICSEARCH %}
---
apiVersion: apps/v1

View File

@@ -34,7 +34,6 @@ spec:
selector:
app.kubernetes.io/name: caddy
{% endif %}
{% if RUN_CMS %}
---
apiVersion: v1
kind: Service
@@ -49,8 +48,6 @@ spec:
protocol: TCP
selector:
app.kubernetes.io/name: cms
{% endif %}
{% if RUN_LMS %}
---
apiVersion: v1
kind: Service
@@ -65,7 +62,6 @@ spec:
protocol: TCP
selector:
app.kubernetes.io/name: lms
{% endif %}
{% if RUN_ELASTICSEARCH %}
---
apiVersion: v1

View File

@@ -27,6 +27,9 @@ services:
- ../apps/openedx/settings/lms:/openedx/edx-platform/lms/envs/tutor:ro
- ../apps/openedx/settings/cms:/openedx/edx-platform/cms/envs/tutor:ro
- ../apps/openedx/config:/openedx/config:ro
{%- for mount in iter_mounts(MOUNTS, "lms-job") %}
- {{ mount }}
{%- endfor %}
depends_on: {{ [("mysql", RUN_MYSQL), ("mongodb", RUN_MONGODB)]|list_if }}
cms-job:
@@ -38,6 +41,9 @@ services:
- ../apps/openedx/settings/lms:/openedx/edx-platform/lms/envs/tutor:ro
- ../apps/openedx/settings/cms:/openedx/edx-platform/cms/envs/tutor:ro
- ../apps/openedx/config:/openedx/config:ro
{%- for mount in iter_mounts(MOUNTS, "cms-job") %}
- {{ mount }}
{%- endfor %}
depends_on: {{ [("mysql", RUN_MYSQL), ("mongodb", RUN_MONGODB), ("elasticsearch", RUN_ELASTICSEARCH), ("redis", RUN_REDIS)]|list_if }}
{{ patch("local-docker-compose-jobs-services")|indent(4) }}

View File

@@ -1,48 +1,55 @@
version: "{{ DOCKER_COMPOSE_VERSION }}"
services:
# Set bind-mounted folder ownership
permissions:
image: {{ DOCKER_IMAGE_PERMISSIONS }}
restart: on-failure
entrypoint: []
command: ["sh", "/usr/local/bin/setowners.sh"]
environment:
OPENEDX_USER_ID: "1000"
volumes:
# Command script
- ../apps/permissions/setowners.sh:/usr/local/bin/setowners.sh:ro
# Bind-mounted volumes to set ownership
- ../../data/lms:/mounts/lms
- ../../data/cms:/mounts/cms
- ../../data/openedx-media:/mounts/openedx
{% if RUN_MONGODB %}- ../../data/mongodb:/mounts/mongodb{% endif %}
{% if RUN_MYSQL %}- ../../data/mysql:/mounts/mysql{% endif %}
{% if RUN_ELASTICSEARCH %}- ../../data/elasticsearch:/mounts/elasticsearch{% endif %}
{% if RUN_REDIS %}- ../../data/redis:/mounts/redis{% endif %}
{{ patch("local-docker-compose-permissions-volumes")|indent(6) }}
############# External services
{% if RUN_MONGODB %}
{% if RUN_MONGODB -%}
mongodb:
image: {{ DOCKER_IMAGE_MONGODB }}
# Use WiredTiger in all environments, just like at edx.org
command: mongod --nojournal --storageEngine wiredTiger
restart: unless-stopped
user: "999:999"
privileged: false
volumes:
- ../../data/mongodb:/data/db
depends_on:
- mongodb-permissions
mongodb-permissions:
image: {{ DOCKER_IMAGE_PERMISSIONS }}
command: ["999", "/data/db"]
restart: on-failure
volumes:
- ../../data/mongodb:/data/db
{% endif %}
- permissions
{%- endif %}
{% if RUN_MYSQL %}
{% if RUN_MYSQL -%}
mysql:
image: {{ DOCKER_IMAGE_MYSQL }}
command: mysqld --character-set-server=utf8 --collation-server=utf8_general_ci
restart: unless-stopped
user: "999:999"
privileged: false
volumes:
- ../../data/mysql:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: "{{ MYSQL_ROOT_PASSWORD }}"
mysql-permissions:
image: {{ DOCKER_IMAGE_PERMISSIONS }}
command: ["999", "/var/lib/mysql"]
restart: on-failure
volumes:
- ../../data/mysql:/var/lib/mysql
{% endif %}
{%- endif %}
{% if RUN_ELASTICSEARCH %}
{% if RUN_ELASTICSEARCH -%}
elasticsearch:
image: {{ DOCKER_IMAGE_ELASTICSEARCH }}
environment:
@@ -59,16 +66,10 @@ services:
volumes:
- ../../data/elasticsearch:/usr/share/elasticsearch/data
depends_on:
- elasticsearch-permissions
elasticsearch-permissions:
image: {{ DOCKER_IMAGE_PERMISSIONS }}
command: ["1000", "/usr/share/elasticsearch/data"]
restart: on-failure
volumes:
- ../../data/elasticsearch:/usr/share/elasticsearch/data
{% endif %}
- permissions
{%- endif %}
{% if RUN_REDIS %}
{% if RUN_REDIS -%}
redis:
image: {{ DOCKER_IMAGE_REDIS }}
working_dir: /openedx/redis/data
@@ -79,27 +80,20 @@ services:
command: redis-server /openedx/redis/config/redis.conf
restart: unless-stopped
depends_on:
- redis-permissions
redis-permissions:
image: {{ DOCKER_IMAGE_PERMISSIONS }}
command: ["1000", "/openedx/redis/data"]
restart: on-failure
volumes:
- ../../data/redis:/openedx/redis/data
{% endif %}
- permissions
{%- endif %}
{% if RUN_SMTP %}
{% if RUN_SMTP -%}
smtp:
image: {{ DOCKER_IMAGE_SMTP }}
restart: unless-stopped
user: "100:101"
environment:
HOSTNAME: "{{ LMS_HOST }}"
{% endif %}
{%- endif %}
############# LMS and CMS
{% if RUN_LMS %}
lms:
image: {{ DOCKER_IMAGE_OPENEDX }}
environment:
@@ -114,24 +108,18 @@ services:
- ../apps/openedx/uwsgi.ini:/openedx/edx-platform/uwsgi.ini:ro
- ../../data/lms:/openedx/data
- ../../data/openedx-media:/openedx/media
{%- for mount in iter_mounts(MOUNTS, "lms") %}
- {{ mount }}
{%- endfor %}
depends_on:
- lms-permissions
- permissions
{% if RUN_MYSQL %}- mysql{% endif %}
{% if RUN_ELASTICSEARCH %}- elasticsearch{% endif %}
{% if RUN_MONGODB %}- mongodb{% endif %}
{% if RUN_REDIS %}- redis{% endif %}
{% if RUN_SMTP %}- smtp{% endif %}
{{ patch("local-docker-compose-lms-dependencies")|indent(6) }}
lms-permissions:
image: {{ DOCKER_IMAGE_PERMISSIONS }}
command: ["1000", "/openedx/data", "/openedx/media"]
restart: on-failure
volumes:
- ../../data/lms:/openedx/data
- ../../data/openedx-media:/openedx/media
{% endif %}
{% if RUN_CMS %}
cms:
image: {{ DOCKER_IMAGE_OPENEDX }}
environment:
@@ -146,27 +134,21 @@ services:
- ../apps/openedx/uwsgi.ini:/openedx/edx-platform/uwsgi.ini:ro
- ../../data/cms:/openedx/data
- ../../data/openedx-media:/openedx/media
{%- for mount in iter_mounts(MOUNTS, "cms") %}
- {{ mount }}
{%- endfor %}
depends_on:
- cms-permissions
- permissions
- lms
{% if RUN_MYSQL %}- mysql{% endif %}
{% if RUN_ELASTICSEARCH %}- elasticsearch{% endif %}
{% if RUN_MONGODB %}- mongodb{% endif %}
{% if RUN_REDIS %}- redis{% endif %}
{% if RUN_SMTP %}- smtp{% endif %}
{% if RUN_LMS %}- lms{% endif %}
{{ patch("local-docker-compose-cms-dependencies")|indent(6) }}
cms-permissions:
image: {{ DOCKER_IMAGE_PERMISSIONS }}
command: ["1000", "/openedx/data", "/openedx/media"]
restart: on-failure
volumes:
- ../../data/cms:/openedx/data
- ../../data/openedx-media:/openedx/media
{% endif %}
############# LMS and CMS workers
{% if RUN_LMS %}
lms-worker:
image: {{ DOCKER_IMAGE_OPENEDX }}
environment:
@@ -180,11 +162,12 @@ services:
- ../apps/openedx/config:/openedx/config:ro
- ../../data/lms:/openedx/data
- ../../data/openedx-media:/openedx/media
{%- for mount in iter_mounts(MOUNTS, "lms-worker") %}
- {{ mount }}
{%- endfor %}
depends_on:
- lms
{% endif %}
{% if RUN_CMS %}
cms-worker:
image: {{ DOCKER_IMAGE_OPENEDX }}
environment:
@@ -198,8 +181,10 @@ services:
- ../apps/openedx/config:/openedx/config:ro
- ../../data/cms:/openedx/data
- ../../data/openedx-media:/openedx/media
{%- for mount in iter_mounts(MOUNTS, "cms-worker") %}
- {{ mount }}
{%- endfor %}
depends_on:
- cms
{% endif %}
{{ patch("local-docker-compose-services")|indent(2) }}

View File

@@ -174,29 +174,26 @@ def docker(*command: str) -> int:
@lru_cache(maxsize=None)
def _docker_compose_command() -> Tuple[str, ...]:
def is_buildkit_enabled() -> bool:
"""
A helper function to determine which program to call when running docker compose
A helper function to determine whether we can run `docker buildx` with BuildKit.
"""
if os.environ.get("TUTOR_USE_COMPOSE_SUBCOMMAND") is not None:
return ("docker", "compose")
if shutil.which("docker-compose") is not None:
return ("docker-compose",)
if shutil.which("docker") is not None:
if (
subprocess.run(
["docker", "compose"], capture_output=True, check=False
).returncode
== 0
):
return ("docker", "compose")
raise exceptions.TutorError(
"docker-compose is not installed. Please follow instructions from https://docs.docker.com/compose/install/"
)
# First, we respect the DOCKER_BUILDKIT environment variable
enabled_by_env = {
"1": True,
"0": False,
}.get(os.environ.get("DOCKER_BUILDKIT", ""))
if enabled_by_env is not None:
return enabled_by_env
try:
subprocess.run(["docker", "buildx", "version"], capture_output=True, check=True)
return True
except subprocess.CalledProcessError:
return False
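The precedence in `is_buildkit_enabled` above — an explicit `DOCKER_BUILDKIT=1/0` always wins, otherwise probe for the buildx plugin — can be isolated like this (the probe is stubbed out as a boolean here, since actually running `docker buildx version` is environment-dependent):

```python
# Hypothetical sketch of the env-var-override-then-probe precedence above
def buildkit_enabled(environ, probe_ok):
    enabled = {"1": True, "0": False}.get(environ.get("DOCKER_BUILDKIT", ""))
    if enabled is not None:
        return enabled  # explicit user choice always wins
    return probe_ok  # otherwise fall back to probing `docker buildx version`

assert buildkit_enabled({"DOCKER_BUILDKIT": "0"}, probe_ok=True) is False
assert buildkit_enabled({"DOCKER_BUILDKIT": "1"}, probe_ok=False) is True
assert buildkit_enabled({}, probe_ok=True) is True
```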
def docker_compose(*command: str) -> int:
return execute(*_docker_compose_command(), *command)
return execute("docker", "compose", *command)
def kubectl(*command: str) -> int:
@@ -216,7 +213,7 @@ def is_a_tty() -> bool:
def execute(*command: str) -> int:
click.echo(fmt.command(_shlex_join(*command)))
click.echo(fmt.command(shlex.join(command)))
return execute_silent(*command)
@@ -239,21 +236,8 @@ def execute_silent(*command: str) -> int:
return result
def _shlex_join(*split_command: str) -> str:
"""
Return a shell-escaped string from *split_command.
TODO: REMOVE THIS FUNCTION AFTER 2023-06-27.
This function is a shim for the ``shlex.join`` standard library function,
which becomes available in Python 3.8 The end-of-life date for Python 3.7
is in Jan 2023 (https://endoflife.date/python). After that point, it
would be good to delete this function and just use Py3.8's ``shlex.join``.
"""
return " ".join(shlex.quote(arg) for arg in split_command)
def check_output(*command: str) -> bytes:
literal_command = _shlex_join(*command)
literal_command = shlex.join(command)
click.echo(fmt.command(literal_command))
try:
return subprocess.check_output(command)
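With Python 3.7 support dropped, the `_shlex_join` shim above gives way to the standard `shlex.join` (available since Python 3.8), which shell-quotes each argument before joining:

```python
import shlex

cmd = ["docker", "compose", "-f", "my project/docker-compose.yml", "config"]
# Arguments containing shell-special characters (here, a space) are quoted
assert shlex.join(cmd) == "docker compose -f 'my project/docker-compose.yml' config"
```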
@@ -261,6 +245,20 @@ def check_output(*command: str) -> bytes:
raise exceptions.TutorError(f"Command failed: {literal_command}") from e
def warn_macos_docker_memory() -> None:
try:
check_macos_docker_memory()
except exceptions.TutorError as e:
fmt.echo_alert(
f"""Could not verify sufficient RAM allocation in Docker:
{e}
Tutor may not work if Docker is configured with < 4 GB RAM. Please follow instructions from:
https://docs.tutor.overhang.io/install.html"""
)
def check_macos_docker_memory() -> None:
"""
Try to check that the RAM allocated to the Docker VM on macOS is at least 4 GB.