feat: pluggable `local/dev/k8s do <job>` commands

We introduce a new filter to implement custom commands in arbitrary containers.
This makes it easy to write convenient ad-hoc commands that users can then
run either on Kubernetes or locally, using a documented CLI.

Pluggable jobs are declared as Click commands and are responsible for
parsing their own arguments. See the new CLI_DO_COMMANDS filter.
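To illustrate the model this commit introduces, here is a minimal sketch of a pluggable job. It uses plain-Python stand-ins (a list in place of the `CLI_DO_COMMANDS` filter, a print-based runner in place of the `do` callback) so that it runs without tutor or click installed; the `greet` command, service name, and script body are illustrative, not part of the commit.

```python
import typing as t

# Stand-in for hooks.Filters.CLI_DO_COMMANDS: a plain registry of job callbacks.
CLI_DO_COMMANDS: t.List[t.Callable[..., t.Iterable[t.Tuple[str, str]]]] = []

def greet(name: str) -> t.Iterable[t.Tuple[str, str]]:
    # A job runs nothing itself: it parses its own arguments and yields
    # (service, script) tuples; the framework then executes each script in
    # the matching "<service>-job" container.
    yield ("lms", f'echo "hello {name}"')

CLI_DO_COMMANDS.append(greet)

def run_job(
    job: t.Callable[..., t.Iterable[t.Tuple[str, str]]], **kwargs: t.Any
) -> t.List[str]:
    # Stand-in for the callback that consumes and executes the yielded scripts.
    return [f"{service}: {script}" for service, script in job(**kwargs)]

print(run_job(greet, name="tutor"))
```

Because jobs only yield scripts, the same declaration can be executed by the local docker-compose runner or the Kubernetes runner.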

Close https://github.com/overhangio/2u-tutor-adoption/issues/75
Régis Behmo 2022-10-19 17:46:31 +02:00 committed by Régis Behmo
parent b6dc65cc64
commit 16e6131f96
26 changed files with 493 additions and 294 deletions


@ -8,6 +8,9 @@ will be backported to the master branch at every major release.
When backporting changes to master, we should keep only the entries that correspond to user-facing changes.
-->
- 💥[Feature] Add an extensible `local/dev/k8s do ...` command to trigger custom job commands. These commands are used to run a series of bash scripts in designated containers. Any plugin can add custom jobs thanks to the `CLI_DO_COMMANDS` filter. This causes the following breaking changes:
- The "init", "createuser", "settheme", "importdemocourse" commands were all migrated to this new interface. For instance, `tutor local init` was replaced by `tutor local do init`.
- Plugin developers are encouraged to replace calls to the `COMMANDS_INIT` and `COMMANDS_PRE_INIT` filters with the `CLI_DO_INIT_TASKS` filter.
- [Feature] Implement hook filter priorities, which work like action priorities. (by @regisb)
- 💥[Improvement] Remove the `local/dev bindmount` commands, which have been marked as deprecated for some time. The `--mount` option should be used instead.
- 💥[Bugfix] Fix local installation requirements. Plugins that implemented the "openedx-dockerfile-post-python-requirements" patch and that needed access to the edx-platform repo will no longer work. Instead, these plugins should implement the "openedx-dockerfile-pre-assets" patch. This scenario should be very rare, though. (by @regisb)
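The migration mentioned in the changelog can be sketched as follows. This is a hedged before/after illustration using stdlib stand-ins (a fake template store instead of the plugin's `templates/` directory, plain lists instead of the filters) so it runs without tutor installed; the `myplugin`/`myservice` names and the script body are illustrative.

```python
import typing as t

# Stand-in for the plugin's templates/ directory.
TEMPLATES = {("myplugin", "tasks", "init.sh"): 'echo "initialising myplugin"'}

def read_template_file(*path: str) -> str:
    # Stand-in for tutor.env.read_template_file.
    return TEMPLATES[path]

# Before: the COMMANDS_INIT filter stored (service, template path) tuples,
# and the runner resolved the path to a script at execution time.
COMMANDS_INIT: t.List[t.Tuple[str, t.Tuple[str, ...]]] = [
    ("myservice", ("myplugin", "tasks", "init.sh")),
]

# After: the CLI_DO_INIT_TASKS filter stores (service, script string) tuples,
# so plugins read their templates up front and register the script itself.
CLI_DO_INIT_TASKS: t.List[t.Tuple[str, str]] = [
    (service, read_template_file(*path)) for service, path in COMMANDS_INIT
]

print(CLI_DO_INIT_TASKS)
```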


@ -80,7 +80,7 @@ Service initialisation
::
tutor local init
tutor local do init
This command should be run just once. It will initialise all applications in a running platform. In particular, this will create the required database tables and apply database migrations for all applications.
@ -120,7 +120,7 @@ Creating a new user with staff and admin rights
You will most certainly need to create a user to administer the platform. Just run::
tutor local createuser --staff --superuser yourusername user@email.com
tutor local do createuser --staff --superuser yourusername user@email.com
You will be asked to set the user password interactively.
@ -131,7 +131,7 @@ Importing the demo course
After a fresh installation, your platform will not have a single course. To import the `Open edX demo course <https://github.com/openedx/edx-demo-course>`_, run::
tutor local importdemocourse
tutor local do importdemocourse
.. _settheme:
@ -140,7 +140,7 @@ Setting a new theme
The default Open edX theme is rather bland, so Tutor makes it easy to switch to a different theme::
tutor local settheme mytheme
tutor local do settheme mytheme
Out of the box, only the default "open-edx" theme is available. We also developed `Indigo, a beautiful, customizable theme <https://github.com/overhangio/indigo>`__ which is easy to install with Tutor.


@ -48,9 +48,9 @@ Finish setup and start Tutor
From this point on, use Tutor as normal. For example, start Open edX and run migrations with::
tutor local start -d
tutor local init
tutor local do init
Or for a development environment::
tutor dev start -d
tutor dev init
tutor dev do init


@ -218,7 +218,7 @@ You can now run the "myservice" container which will execute the ``CMD`` stateme
Declaring initialisation tasks
------------------------------
Services often need to run specific tasks before they can be started. For instance, the LMS and the CMS need to apply database migrations. These commands are written in shell scripts that are executed whenever we run ``launch``. We call these scripts "init tasks". To add a new local init task, we must first add the corresponding service to the ``docker-compose-jobs.yml`` file by implementing the :patch:`local-docker-compose-jobs-services` patch::
Services often need to run specific tasks before they can be started. For instance, the LMS and the CMS need to apply database migrations. These commands are written in shell scripts that are executed whenever we run ``launch``. We call these scripts "init tasks". To add a new local initialisation task, we must first add the corresponding service to the ``docker-compose-jobs.yml`` file by implementing the :patch:`local-docker-compose-jobs-services` patch::
hooks.Filters.ENV_PATCHES.add_item(
(
@ -234,24 +234,22 @@ The patch above defined the "myservice-job" container which will run our initial
$ tutor config save
Next, we create the folder which will contain our init task script::
Next, we create an initialisation task by adding an item to the :py:data:`tutor.hooks.Filters.CLI_DO_INIT_TASKS` filter::
$ mkdir "$(tutor plugins printroot)/templates/myplugin/tasks"
Edit ``$(tutor plugins printroot)/templates/myplugin/tasks/init.sh``::
hooks.Filters.CLI_DO_INIT_TASKS.add_item(
(
"myservice",
"""
echo "++++++ initialising my plugin..."
echo "++++++ done!"
Add our init task script to the :py:data:`tutor.hooks.Filters.COMMANDS_INIT` filter::
hooks.Filters.COMMANDS_INIT.add_item(
("myservice", ("myplugin", "tasks", "init.sh")),
"""
)
)
Run this initialisation task with::
$ tutor local init --limit=myplugin
$ tutor local do init --limit=myplugin
...
Running init task: myplugin/tasks/init.sh
...
@ -354,8 +352,14 @@ Eventually, our plugin is composed of the following files, all stored within the
)
hooks.Filters.IMAGES_PUSH.add_item(("myservice", "myservice:latest"))
hooks.Filters.IMAGES_PULL.add_item(("myservice", "myservice:latest"))
hooks.Filters.COMMANDS_INIT.add_item(
("myservice", ("myplugin", "tasks", "init.sh")),
hooks.Filters.CLI_DO_INIT_TASKS.add_item(
(
"myservice",
"""
echo "++++++ initialising my plugin..."
echo "++++++ done!"
"""
)
)
``templates/myplugin/build/myservice/Dockerfile``


@ -48,7 +48,7 @@ Then, run a local webserver::
The LMS can then be accessed at http://local.overhang.io:8000. You will then have to :ref:`enable that theme <settheme>`::
tutor dev settheme mythemename
tutor dev do settheme mythemename
Watch the themes folders for changes (in a different terminal)::


@ -17,7 +17,9 @@ class TestCommandMixin:
return TestCommandMixin.invoke_in_root(root, args)
@staticmethod
def invoke_in_root(root: str, args: t.List[str]) -> click.testing.Result:
def invoke_in_root(
root: str, args: t.List[str], catch_exceptions: bool = True
) -> click.testing.Result:
"""
Use this method for commands that all need to run in the same root:
@ -32,4 +34,6 @@ class TestCommandMixin:
"TUTOR_IGNORE_DICT_PLUGINS": "1",
}
)
return runner.invoke(cli, args, obj=TestContext(root))
return runner.invoke(
cli, args, obj=TestContext(root), catch_exceptions=catch_exceptions
)


@ -1,7 +1,7 @@
import os
import unittest
from tests.helpers import TestContext, TestJobRunner, temporary_root
from tests.helpers import TestContext, TestTaskRunner, temporary_root
from tutor import config as tutor_config
@ -15,4 +15,4 @@ class TestContextTests(unittest.TestCase):
self.assertFalse(
os.path.exists(os.path.join(context.root, tutor_config.CONFIG_FILENAME))
)
self.assertTrue(isinstance(runner, TestJobRunner))
self.assertTrue(isinstance(runner, TestTaskRunner))


@ -1,71 +1,74 @@
import re
import unittest
from io import StringIO
from unittest.mock import patch
from tests.helpers import TestContext, temporary_root
from tutor import config as tutor_config
from tests.helpers import PluginsTestCase, temporary_root
from tutor.commands import jobs
from .base import TestCommandMixin
class JobsTests(unittest.TestCase):
@patch("sys.stdout", new_callable=StringIO)
def test_initialise(self, mock_stdout: StringIO) -> None:
class JobsTests(PluginsTestCase, TestCommandMixin):
def test_initialise(self) -> None:
with temporary_root() as root:
context = TestContext(root)
config = tutor_config.load_full(root)
runner = context.job_runner(config)
jobs.initialise(runner)
output = mock_stdout.getvalue().strip()
self.assertTrue(output.startswith("Initialising all services..."))
self.assertTrue(output.endswith("All services initialised."))
self.invoke_in_root(root, ["config", "save"])
result = self.invoke_in_root(root, ["local", "do", "init"])
self.assertIsNone(result.exception)
self.assertEqual(0, result.exit_code)
self.assertIn("All services initialised.", result.output)
def test_create_user_command_without_staff(self) -> None:
command = jobs.create_user_template("superuser", False, "username", "email", "p4ssw0rd")
def test_create_user_template_without_staff(self) -> None:
command = jobs.create_user_template(
"superuser", False, "username", "email", "p4ssw0rd"
)
self.assertNotIn("--staff", command)
self.assertIn("set_password", command)
def test_create_user_command_with_staff(self) -> None:
command = jobs.create_user_template("superuser", True, "username", "email", "p4ssw0rd")
def test_create_user_template_with_staff(self) -> None:
command = jobs.create_user_template(
"superuser", True, "username", "email", "p4ssw0rd"
)
self.assertIn("--staff", command)
@patch("sys.stdout", new_callable=StringIO)
def test_import_demo_course(self, mock_stdout: StringIO) -> None:
def test_import_demo_course(self) -> None:
with temporary_root() as root:
context = TestContext(root)
config = tutor_config.load_full(root)
runner = context.job_runner(config)
runner.run_job_from_str("cms", jobs.import_demo_course_template())
self.invoke_in_root(root, ["config", "save"])
with patch("tutor.utils.docker_compose") as mock_docker_compose:
result = self.invoke_in_root(root, ["local", "do", "importdemocourse"])
dc_args, _dc_kwargs = mock_docker_compose.call_args
self.assertIsNone(result.exception)
self.assertEqual(0, result.exit_code)
self.assertIn("cms-job", dc_args)
self.assertTrue(
dc_args[-1]
.strip()
.startswith('echo "Loading settings $DJANGO_SETTINGS_MODULE"')
)
output = mock_stdout.getvalue()
service = re.search(r"Service: (\w*)", output)
commands = re.search(r"(-----)([\S\s]+)(-----)", output)
assert service is not None
assert commands is not None
self.assertEqual(service.group(1), "cms")
def test_set_theme(self) -> None:
with temporary_root() as root:
self.invoke_in_root(root, ["config", "save"])
with patch("tutor.utils.docker_compose") as mock_docker_compose:
result = self.invoke_in_root(
root,
[
"local",
"do",
"settheme",
"--domain",
"domain1",
"--domain",
"domain2",
"beautiful",
],
)
dc_args, _dc_kwargs = mock_docker_compose.call_args
self.assertIsNone(result.exception)
self.assertEqual(0, result.exit_code)
self.assertIn("lms-job", dc_args)
self.assertTrue(
commands.group(2)
.strip()
.startswith('echo "Loading settings $DJANGO_SETTINGS_MODULE"')
)
@patch("sys.stdout", new_callable=StringIO)
def test_set_theme(self, mock_stdout: StringIO) -> None:
with temporary_root() as root:
context = TestContext(root)
config = tutor_config.load_full(root)
runner = context.job_runner(config)
command = jobs.set_theme_template("sample_theme", ["domain1", "domain2"])
runner.run_job_from_str("lms", command)
output = mock_stdout.getvalue()
service = re.search(r"Service: (\w*)", output)
commands = re.search(r"(-----)([\S\s]+)(-----)", output)
assert service is not None
assert commands is not None
self.assertEqual(service.group(1), "lms")
self.assertTrue(
commands.group(2)
dc_args[-1]
.strip()
.startswith('echo "Loading settings $DJANGO_SETTINGS_MODULE"')
)
self.assertIn("assign_theme('beautiful', 'domain1')", dc_args[-1])
self.assertIn("assign_theme('beautiful', 'domain2')", dc_args[-1])


@ -5,12 +5,12 @@ import unittest
import unittest.result
from tutor import hooks
from tutor.commands.context import BaseJobContext
from tutor.jobs import BaseJobRunner
from tutor.commands.context import BaseTaskContext
from tutor.tasks import BaseTaskRunner
from tutor.types import Config
class TestJobRunner(BaseJobRunner):
class TestTaskRunner(BaseTaskRunner):
"""
Mock task runner for unit testing.
@ -18,7 +18,7 @@ class TestJobRunner(BaseJobRunner):
separated by dashes.
"""
def run_job(self, service: str, command: str) -> int:
def run_task(self, service: str, command: str) -> int:
print(os.linesep.join([f"Service: {service}", "-----", command, "----- "]))
return 0
@ -36,13 +36,13 @@ def temporary_root() -> "tempfile.TemporaryDirectory[str]":
return tempfile.TemporaryDirectory(prefix="tutor-test-root-")
class TestContext(BaseJobContext):
class TestContext(BaseTaskContext):
"""
Click context that will use only test task runners.
"""
def job_runner(self, config: Config) -> TestJobRunner:
return TestJobRunner(self.root, config)
def job_runner(self, config: Config) -> TestTaskRunner:
return TestTaskRunner(self.root, config)
class PluginsTestCase(unittest.TestCase):


@ -83,7 +83,7 @@ class EnvTests(PluginsTestCase):
tutor_config.render_full(config)
config["MYSQL_ROOT_PASSWORD"] = "testpassword"
rendered = env.render_file(config, "hooks", "mysql", "init")
rendered = env.render_file(config, "jobs", "init", "mysql.sh")
self.assertIn("testpassword", rendered)
@patch.object(tutor_config.fmt, "echo")


@ -167,10 +167,17 @@ class PluginsTests(PluginsTestCase):
def test_init_tasks(self) -> None:
plugins_v0.DictPlugin({"name": "plugin1", "hooks": {"init": ["myclient"]}})
plugins.load("plugin1")
with patch.object(
plugins_v0.env, "read_template_file", return_value="echo hello"
) as mock_read_template:
plugins.load("plugin1")
mock_read_template.assert_called_once_with(
"plugin1", "hooks", "myclient", "init"
)
self.assertIn(
("myclient", ("plugin1", "hooks", "myclient", "init")),
list(hooks.Filters.COMMANDS_INIT.iterate()),
("myclient", "echo hello"),
list(hooks.Filters.CLI_DO_INIT_TASKS.iterate()),
)
def test_plugins_are_updated_on_config_change(self) -> None:


@ -1,12 +1,12 @@
import os
from .exceptions import TutorError
from .jobs import BaseComposeJobRunner
from .utils import get_user_id
from tutor.exceptions import TutorError
from tutor.tasks import BaseComposeTaskRunner
from tutor.utils import get_user_id
def create(
runner: BaseComposeJobRunner,
runner: BaseComposeTaskRunner,
service: str,
path: str,
) -> str:


@ -9,13 +9,13 @@ from tutor import config as tutor_config
from tutor import env as tutor_env
from tutor import fmt, hooks, serialize, utils
from tutor.commands import jobs
from tutor.commands.context import BaseJobContext
from tutor.commands.context import BaseTaskContext
from tutor.exceptions import TutorError
from tutor.jobs import BaseComposeJobRunner
from tutor.tasks import BaseComposeTaskRunner
from tutor.types import Config
class ComposeJobRunner(BaseComposeJobRunner):
class ComposeTaskRunner(BaseComposeTaskRunner):
def __init__(self, root: str, config: Config):
super().__init__(root, config)
self.project_name = ""
@ -81,7 +81,7 @@ class ComposeJobRunner(BaseComposeJobRunner):
docker_compose_jobs_tmp_path,
)
def run_job(self, service: str, command: str) -> int:
def run_task(self, service: str, command: str) -> int:
"""
Run the "{{ service }}-job" service from local/docker-compose.jobs.yml with the
specified command.
@ -105,11 +105,11 @@ class ComposeJobRunner(BaseComposeJobRunner):
)
class BaseComposeContext(BaseJobContext):
class BaseComposeContext(BaseTaskContext):
COMPOSE_TMP_FILTER: hooks.filters.Filter = NotImplemented
COMPOSE_JOBS_TMP_FILTER: hooks.filters.Filter = NotImplemented
def job_runner(self, config: Config) -> ComposeJobRunner:
def job_runner(self, config: Config) -> ComposeTaskRunner:
raise NotImplementedError
@ -297,19 +297,22 @@ def restart(context: BaseComposeContext, services: t.List[str]) -> None:
context.job_runner(config).docker_compose(*command)
@click.command(help="Initialise all applications")
@click.option("-l", "--limit", help="Limit initialisation to this service or plugin")
@jobs.do_group
@mount_option
@click.pass_obj
def init(
context: BaseComposeContext,
limit: str,
mounts: t.Tuple[t.List[MountParam.MountType]],
def do(
context: BaseComposeContext, mounts: t.Tuple[t.List[MountParam.MountType]]
) -> None:
mount_tmp_volumes(mounts, context)
config = tutor_config.load(context.root)
runner = context.job_runner(config)
jobs.initialise(runner, limit_to=limit)
"""
Run a custom job in the right container(s).
"""
@hooks.Actions.DO_JOB.add()
def _mount_tmp_volumes(_job_name: str, *_args: t.Any, **_kwargs: t.Any) -> None:
"""
We add this logic to an action callback because we do not want to trigger it
whenever we run `tutor local do <job> --help`.
"""
mount_tmp_volumes(mounts, context)
@click.command(
@ -470,11 +473,14 @@ def add_commands(command_group: click.Group) -> None:
command_group.add_command(stop)
command_group.add_command(restart)
command_group.add_command(reboot)
command_group.add_command(init)
command_group.add_command(dc_command)
command_group.add_command(run)
command_group.add_command(copyfrom)
command_group.add_command(execute)
command_group.add_command(logs)
command_group.add_command(status)
jobs.add_commands(command_group)
@hooks.Actions.PLUGINS_LOADED.add()
def _add_do_commands() -> None:
jobs.add_job_commands(do)
command_group.add_command(do)


@ -1,5 +1,5 @@
from ..jobs import BaseJobRunner
from ..types import Config
from tutor.tasks import BaseTaskRunner
from tutor.types import Config
class Context:
@ -16,14 +16,14 @@ class Context:
self.root = root
class BaseJobContext(Context):
class BaseTaskContext(Context):
"""
Specialized context that subcommands may use.
For instance `dev`, `local` and `k8s` define custom runners to run jobs.
"""
def job_runner(self, config: Config) -> BaseJobRunner:
def job_runner(self, config: Config) -> BaseTaskRunner:
"""
Return a runner capable of running docker-compose/kubectl commands.
"""


@ -11,7 +11,7 @@ from tutor.commands import compose
from tutor.types import Config, get_typed
class DevJobRunner(compose.ComposeJobRunner):
class DevTaskRunner(compose.ComposeTaskRunner):
def __init__(self, root: str, config: Config):
"""
Load docker-compose files from dev/ and local/
@ -51,8 +51,8 @@ class DevContext(compose.BaseComposeContext):
COMPOSE_TMP_FILTER = hooks.Filters.COMPOSE_DEV_TMP
COMPOSE_JOBS_TMP_FILTER = hooks.Filters.COMPOSE_DEV_JOBS_TMP
def job_runner(self, config: Config) -> DevJobRunner:
return DevJobRunner(self.root, config)
def job_runner(self, config: Config) -> DevTaskRunner:
return DevTaskRunner(self.root, config)
@click.group(help="Run Open edX locally with development settings")
@ -105,7 +105,7 @@ Tutor may not work if Docker is configured with < 4 GB RAM. Please follow instru
context.invoke(compose.start, detach=True)
click.echo(fmt.title("Database creation and migrations"))
context.invoke(compose.init)
context.invoke(compose.do.commands["init"])
fmt.echo_info(
"""The Open edX platform is now running in detached mode
@ -148,7 +148,7 @@ def _stop_on_local_start(root: str, config: Config, project_name: str) -> None:
Stop the dev platform as soon as a platform with a different project name is
started.
"""
runner = DevJobRunner(root, config)
runner = DevTaskRunner(root, config)
if project_name != runner.project_name:
runner.docker_compose("stop")


@ -1,19 +1,38 @@
"""
Common jobs that must be added to the local, dev and k8s commands.
"""
import functools
import typing as t
import click
from typing_extensions import ParamSpec
from tutor import config as tutor_config
from tutor import fmt, hooks, jobs
from tutor import env, fmt, hooks
from .context import BaseJobContext
BASE_OPENEDX_COMMAND = """
echo "Loading settings $DJANGO_SETTINGS_MODULE"
"""
class DoGroup(click.Group):
"""
A Click group that prints subcommands under 'Jobs' instead of 'Commands' when we run
`.. do --help`. Hackish but it works.
"""
def get_help(self, ctx: click.Context) -> str:
return super().get_help(ctx).replace("Commands:\n", "Jobs:\n")
# A convenient easy-to-use decorator for creating `do` commands.
do_group = click.group(cls=DoGroup, subcommand_metavar="JOB [ARGS]...")
def add_job_commands(do_command_group: click.Group) -> None:
"""
This is meant to be called with the `local/dev/k8s do` group commands, to add the
different `do` subcommands.
"""
subcommands: t.Iterator[click.Command] = hooks.Filters.CLI_DO_COMMANDS.iterate()
for subcommand in subcommands:
do_command_group.add_command(subcommand)
@hooks.Actions.CORE_READY.add()
@ -25,32 +44,52 @@ def _add_core_init_tasks() -> None:
the --limit argument.
"""
with hooks.Contexts.APP("mysql").enter():
hooks.Filters.COMMANDS_INIT.add_item(("mysql", ("hooks", "mysql", "init")))
hooks.Filters.CLI_DO_INIT_TASKS.add_item(
("mysql", env.read_template_file("jobs", "init", "mysql.sh"))
)
with hooks.Contexts.APP("lms").enter():
hooks.Filters.COMMANDS_INIT.add_item(("lms", ("hooks", "lms", "init")))
hooks.Filters.CLI_DO_INIT_TASKS.add_item(
("lms", env.read_template_file("jobs", "init", "lms.sh"))
)
with hooks.Contexts.APP("cms").enter():
hooks.Filters.COMMANDS_INIT.add_item(("cms", ("hooks", "cms", "init")))
hooks.Filters.CLI_DO_INIT_TASKS.add_item(
("cms", env.read_template_file("jobs", "init", "cms.sh"))
)
def initialise(runner: jobs.BaseJobRunner, limit_to: t.Optional[str] = None) -> None:
@click.command("init", help="Initialise all applications")
@click.option("-l", "--limit", help="Limit initialisation to this service or plugin")
def initialise(limit: t.Optional[str]) -> t.Iterator[t.Tuple[str, str]]:
fmt.echo_info("Initialising all services...")
filter_context = hooks.Contexts.APP(limit_to).name if limit_to else None
filter_context = hooks.Contexts.APP(limit).name if limit else None
# Pre-init tasks
iter_pre_init_tasks: t.Iterator[
# Deprecated pre-init tasks
depr_iter_pre_init_tasks: t.Iterator[
t.Tuple[str, t.Iterable[str]]
] = hooks.Filters.COMMANDS_PRE_INIT.iterate(context=filter_context)
for service, path in iter_pre_init_tasks:
fmt.echo_info(f"Running pre-init task: {'/'.join(path)}")
runner.run_job_from_template(service, *path)
for service, path in depr_iter_pre_init_tasks:
fmt.echo_alert(
f"Running deprecated pre-init task: {'/'.join(path)}. Init tasks should no longer be added to the COMMANDS_PRE_INIT filter. Plugin developers should use the CLI_DO_INIT_TASKS filter instead, with a high priority."
)
yield service, env.read_template_file(*path)
# Init tasks
iter_init_tasks: t.Iterator[
t.Tuple[str, str]
] = hooks.Filters.CLI_DO_INIT_TASKS.iterate(context=filter_context)
for service, task in iter_init_tasks:
fmt.echo_info(f"Running init task in {service}")
yield service, task
# Deprecated init tasks
depr_iter_init_tasks: t.Iterator[
t.Tuple[str, t.Iterable[str]]
] = hooks.Filters.COMMANDS_INIT.iterate(context=filter_context)
for service, path in iter_init_tasks:
fmt.echo_info(f"Running init task: {'/'.join(path)}")
runner.run_job_from_template(service, *path)
for service, path in depr_iter_init_tasks:
fmt.echo_alert(
f"Running deprecated init task: {'/'.join(path)}. Init tasks should no longer be added to the COMMANDS_INIT filter. Plugin developers should use the CLI_DO_INIT_TASKS filter instead."
)
yield service, env.read_template_file(*path)
fmt.echo_info("All services initialised.")
@ -67,29 +106,52 @@ def initialise(runner: jobs.BaseJobRunner, limit_to: t.Optional[str] = None) ->
)
@click.argument("name")
@click.argument("email")
@click.pass_obj
def createuser(
context: BaseJobContext,
superuser: str,
staff: bool,
password: str,
name: str,
email: str,
) -> None:
run_job(
context, "lms", create_user_template(superuser, staff, name, email, password)
)
) -> t.Iterable[t.Tuple[str, str]]:
"""
Create an Open edX user
Password can be passed as an option or will be set interactively.
"""
yield ("lms", create_user_template(superuser, staff, name, email, password))
def create_user_template(
superuser: str, staff: bool, username: str, email: str, password: str
) -> str:
opts = ""
if superuser:
opts += " --superuser"
if staff:
opts += " --staff"
return f"""
./manage.py lms manage_user {opts} {username} {email}
./manage.py lms shell -c "
from django.contrib.auth import get_user_model
u = get_user_model().objects.get(username='{username}')
u.set_password('{password}')
u.save()"
"""
@click.command(help="Import the demo course")
@click.pass_obj
def importdemocourse(context: BaseJobContext) -> None:
run_job(context, "cms", import_demo_course_template())
def importdemocourse() -> t.Iterable[t.Tuple[str, str]]:
template = """
# Import demo course
git clone https://github.com/openedx/edx-demo-course --branch {{ OPENEDX_COMMON_VERSION }} --depth 1 ../edx-demo-course
python ./manage.py cms import ../data ../edx-demo-course
# Re-index courses
./manage.py cms reindex_course --all --setup"""
yield ("cms", template)
@click.command(
help="Assign a theme to the LMS and the CMS. To reset to the default theme, use 'default' as the theme name."
)
@click.command()
@click.option(
"-d",
"--domain",
@ -101,49 +163,13 @@ def importdemocourse(context: BaseJobContext) -> None:
),
)
@click.argument("theme_name")
@click.pass_obj
def settheme(context: BaseJobContext, domains: t.List[str], theme_name: str) -> None:
run_job(context, "lms", set_theme_template(theme_name, domains))
def settheme(domains: t.List[str], theme_name: str) -> t.Iterable[t.Tuple[str, str]]:
"""
Assign a theme to the LMS and the CMS.
def run_job(context: BaseJobContext, service: str, command: str) -> None:
config = tutor_config.load(context.root)
runner = context.job_runner(config)
runner.run_job_from_str(service, command)
def create_user_template(
superuser: str, staff: bool, username: str, email: str, password: str
) -> str:
opts = ""
if superuser:
opts += " --superuser"
if staff:
opts += " --staff"
return (
BASE_OPENEDX_COMMAND
+ f"""
./manage.py lms manage_user {opts} {username} {email}
./manage.py lms shell -c "
from django.contrib.auth import get_user_model
u = get_user_model().objects.get(username='{username}')
u.set_password('{password}')
u.save()"
"""
)
def import_demo_course_template() -> str:
return (
BASE_OPENEDX_COMMAND
+ """
# Import demo course
git clone https://github.com/openedx/edx-demo-course --branch {{ OPENEDX_COMMON_VERSION }} --depth 1 ../edx-demo-course
python ./manage.py cms import ../data ../edx-demo-course
# Re-index courses
./manage.py cms reindex_course --all --setup"""
)
To reset to the default theme, use 'default' as the theme name.
"""
yield ("lms", set_theme_template(theme_name, domains))
def set_theme_template(theme_name: str, domain_names: t.List[str]) -> str:
@ -179,9 +205,76 @@ def assign_theme(name, domain):
]
for domain_name in domain_names:
python_command += f"assign_theme('{theme_name}', '{domain_name}')\n"
return BASE_OPENEDX_COMMAND + f'./manage.py lms shell -c "{python_command}"'
return f'./manage.py lms shell -c "{python_command}"'
def add_commands(command_group: click.Group) -> None:
for job_command in [createuser, importdemocourse, settheme]:
command_group.add_command(job_command)
hooks.Filters.CLI_DO_COMMANDS.add_items(
[
createuser,
importdemocourse,
initialise,
settheme,
]
)
def do_callback(service_commands: t.Iterable[t.Tuple[str, str]]) -> None:
"""
This function must be added as a callback to all `do` subcommands.
`do` subcommands don't actually run any task. They just yield tuples of (service
name, unrendered script string). This function is responsible for actually running
the scripts. It does the following:
- Prefix the script with a base command
- Render the script string
- Run a job in the right container
In order to be added as a callback to the do subcommands, the
`_patch_do_commands_callbacks` must be called.
"""
context = click.get_current_context().obj
config = tutor_config.load(context.root)
runner = context.job_runner(config)
base_openedx_command = """
echo "Loading settings $DJANGO_SETTINGS_MODULE"
"""
for service, command in service_commands:
runner.run_task_from_str(service, base_openedx_command + command)
@hooks.Actions.PLUGINS_LOADED.add()
def _patch_do_commands_callbacks() -> None:
"""
After plugins have been loaded, patch `do` subcommands such that their output is
forwarded to `do_callback`.
"""
subcommands: t.Iterator[click.Command] = hooks.Filters.CLI_DO_COMMANDS.iterate()
for subcommand in subcommands:
# Modify the subcommand callback such that job results are processed by do_callback
if subcommand.callback is None:
raise ValueError("Cannot patch None callback")
if subcommand.name is None:
raise ValueError("Defined job with None name")
subcommand.callback = _patch_callback(subcommand.name, subcommand.callback)
P = ParamSpec("P")
def _patch_callback(
job_name: str,
func: t.Callable[P, t.Iterable[t.Tuple[str, str]]]
) -> t.Callable[P, None]:
"""
Modify a subcommand callback function such that its results are processed by `do_callback`.
"""
def new_callback(*args: P.args, **kwargs: P.kwargs) -> None:
hooks.Actions.DO_JOB.do(job_name, *args, context=None, **kwargs)
do_callback(func(*args, **kwargs))
# Make the new callback behave like the old one
functools.update_wrapper(new_callback, func)
return new_callback


@ -6,14 +6,14 @@ import click
from tutor import config as tutor_config
from tutor import env as tutor_env
from tutor import exceptions, fmt
from tutor import exceptions, fmt, hooks
from tutor import interactive as interactive_config
from tutor import serialize, utils
from tutor.commands import jobs
from tutor.commands.config import save as config_save_command
from tutor.commands.context import BaseJobContext
from tutor.commands.context import BaseTaskContext
from tutor.commands.upgrade.k8s import upgrade_from
from tutor.jobs import BaseJobRunner
from tutor.tasks import BaseTaskRunner
from tutor.types import Config, get_typed
@ -22,7 +22,8 @@ class K8sClients:
def __init__(self) -> None:
# Loading the kubernetes module here to avoid import overhead
from kubernetes import client, config # pylint: disable=import-outside-toplevel
# pylint: disable=import-outside-toplevel
from kubernetes import client, config
config.load_kube_config()
self._batch_api = None
@ -48,33 +49,20 @@ class K8sClients:
return self._core_api
class K8sJobRunner(BaseJobRunner):
def load_job(self, name: str) -> Any:
all_jobs = self.render("k8s", "jobs.yml")
for job in serialize.load_all(all_jobs):
job_name = job["metadata"]["name"]
if not isinstance(job_name, str):
raise exceptions.TutorError(
f"Invalid job name: '{job_name}'. Expected str."
)
if job_name == name:
return job
raise exceptions.TutorError(f"Could not find job '{name}'")
class K8sTaskRunner(BaseTaskRunner):
"""
Run tasks (bash commands) in Kubernetes-managed services.
def active_job_names(self) -> List[str]:
"""
Return a list of active job names
Docs:
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#list-job-v1-batch
"""
api = K8sClients.instance().batch_api
return [
job.metadata.name
for job in api.list_namespaced_job(k8s_namespace(self.config)).items
if job.status.active
]
Note: a single Tutor "task" corresponds to a Kubernetes "job":
https://kubernetes.io/docs/concepts/workloads/controllers/job/
A Tutor "job" is composed of multiple Tutor tasks run in different services.
def run_job(self, service: str, command: str) -> int:
In Kubernetes, each task that is expected to run in a "myservice" container will
trigger the "myservice-job" Kubernetes job. This job definition must be present in
the "k8s/jobs.yml" template.
"""
def run_task(self, service: str, command: str) -> int:
job_name = f"{service}-job"
job = self.load_job(job_name)
# Create a unique job name to make it deduplicate jobs and make it easier to
@ -150,10 +138,41 @@ class K8sJobRunner(BaseJobRunner):
sleep(5)
return 0
def load_job(self, name: str) -> Any:
"""
Find a given job definition in the rendered k8s/jobs.yml template.
"""
all_jobs = self.render("k8s", "jobs.yml")
for job in serialize.load_all(all_jobs):
job_name = job["metadata"]["name"]
if not isinstance(job_name, str):
raise exceptions.TutorError(
f"Invalid job name: '{job_name}'. Expected str."
)
if job_name == name:
return job
raise exceptions.TutorError(f"Could not find job '{name}'")
    def active_job_names(self) -> List[str]:
        """
        Return a list of active job names.

        This is necessary to make sure that we don't run the same job multiple
        times concurrently.

        Docs:
        https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#list-job-v1-batch
        """
        api = K8sClients.instance().batch_api
        return [
            job.metadata.name
            for job in api.list_namespaced_job(k8s_namespace(self.config)).items
            if job.status.active
        ]
class K8sContext(BaseTaskContext):
def job_runner(self, config: Config) -> K8sTaskRunner:
return K8sTaskRunner(self.root, config)
@click.group(help="Run Open edX on Kubernetes")
@ -341,17 +360,33 @@ def delete(context: K8sContext, yes: bool) -> None:
)
@jobs.do_group
@click.pass_obj
def do(context: K8sContext) -> None:
"""
Run a custom job in the right container(s).
We make sure that some essential containers (databases, proxy) are up before we
launch the jobs.
"""
@hooks.Actions.DO_JOB.add()
def _start_base_deployments(_job_name: str, *_args: Any, **_kwargs: Any) -> None:
"""
We add this logic to an action callback because we do not want to trigger it
whenever we run `tutor k8s do <job> --help`.
"""
config = tutor_config.load(context.root)
wait_for_deployment_ready(config, "caddy")
for name in ["elasticsearch", "mysql", "mongodb"]:
if tutor_config.is_service_activated(config, name):
wait_for_deployment_ready(config, name)
@click.command(help="Initialise all applications")
@click.option("-l", "--limit", help="Limit initialisation to this service or plugin")
@click.pass_context
def init(context: click.Context, limit: Optional[str]) -> None:
    context.invoke(do.commands["init"], limit=limit)
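The legacy `init` command is reduced to a thin wrapper that forwards to the new `do init` subcommand. As a toy sketch of this `context.invoke` delegation pattern (the group, command, and option names below are illustrative, not Tutor's actual CLI):

```python
import click


@click.group()
def do() -> None:
    """Parent group that holds the pluggable job commands."""


@do.command()
@click.option("--limit", default=None)
def init(limit: str) -> None:
    click.echo(f"init limit={limit}")


@click.command()
@click.option("--limit", default=None)
@click.pass_context
def legacy_init(context: click.Context, limit: str) -> None:
    # Forward to the equivalent subcommand of the `do` group, preserving options.
    context.invoke(do.commands["init"], limit=limit)
```

Running `legacy_init --limit=lms` then behaves exactly like `do init --limit=lms`, which is how backward compatibility is preserved for the migrated commands.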
@click.command(help="Scale the number of replicas of a given deployment")
@ -540,4 +575,9 @@ k8s.add_command(wait)
k8s.add_command(upgrade)
k8s.add_command(apply_command)
k8s.add_command(status)
@hooks.Actions.PLUGINS_LOADED.add()
def _add_k8s_do_commands() -> None:
jobs.add_job_commands(do)
k8s.add_command(do)


@ -13,7 +13,7 @@ from tutor.commands.upgrade.local import upgrade_from
from tutor.types import Config, get_typed
class LocalTaskRunner(compose.ComposeTaskRunner):
def __init__(self, root: str, config: Config):
"""
Load docker-compose files from local/.
@ -52,8 +52,8 @@ class LocalContext(compose.BaseComposeContext):
COMPOSE_TMP_FILTER = hooks.Filters.COMPOSE_LOCAL_TMP
COMPOSE_JOBS_TMP_FILTER = hooks.Filters.COMPOSE_LOCAL_JOBS_TMP
def job_runner(self, config: Config) -> LocalTaskRunner:
    return LocalTaskRunner(self.root, config)
@click.group(help="Run Open edX locally with docker-compose")
@ -142,7 +142,7 @@ Press enter when you are ready to continue"""
click.echo(fmt.title("Starting the platform in detached mode"))
context.invoke(compose.start, detach=True)
click.echo(fmt.title("Database creation and migrations"))
context.invoke(compose.do.commands["init"])
config = tutor_config.load(context.obj.root)
fmt.echo_info(
@ -212,7 +212,7 @@ def _stop_on_dev_start(root: str, config: Config, project_name: str) -> None:
Stop the local platform as soon as a platform with a different project name is
started.
"""
runner = LocalTaskRunner(root, config)
if project_name != runner.project_name:
runner.docker_compose("stop")


@ -4,7 +4,7 @@ __license__ = "Apache 2.0"
import typing as t
# These imports are the hooks API
from . import actions, contexts, filters, priorities
from .consts import *


@ -47,6 +47,13 @@ class Actions:
#: This action does not have any parameter.
CORE_READY = actions.get("core:ready")
#: Called just before triggering the job tasks of any `... do <job>` command.
#:
#: :parameter str job: job name.
#: :parameter args: job positional arguments.
#: :parameter kwargs: job named arguments.
DO_JOB = actions.get("do:job")
#: Called as soon as we have access to the Tutor project root.
#:
#: :parameter str root: absolute path to the project root.
@ -104,6 +111,31 @@ class Filters:
return items
"""
#: List of command line interface (CLI) commands.
#:
#: :parameter list commands: commands are instances of ``click.Command``. They will
#: all be added as subcommands of the main ``tutor`` command.
CLI_COMMANDS = filters.get("cli:commands")
#: List of `do ...` commands.
#:
#: :parameter list commands: see :py:data:`CLI_COMMANDS`. These commands will be
#: added as subcommands to the ``local/dev/k8s do`` commands. They must return a
#: list of ("service name", "service command") tuples. Each "service command" will
#: be executed in the "service" container, in local, dev and k8s modes alike.
CLI_DO_COMMANDS = filters.get("cli:commands:do")
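For illustration, a minimal pluggable job declared through this filter might look as follows. The ``sayhi`` command, its option, and the greeting are hypothetical; the registration call is shown as a comment because it requires the ``tutor`` package at runtime:

```python
import click


@click.command(help="Say hi to a user from the LMS container")
@click.option("--name", default="world", help="Name of the person to greet")
def sayhi(name: str):
    # Each yielded tuple is ("service name", "service command"): the command is a
    # bash script that will be executed in the corresponding container.
    yield ("lms", f"echo 'Hi, {name}!'")


# In an actual plugin, the command is then registered with:
#   from tutor import hooks
#   hooks.Filters.CLI_DO_COMMANDS.add_item(sayhi)
# after which users can run, e.g.: tutor local do sayhi --name=alice
```

Because the job is a regular Click command, it parses its own arguments, and the same declaration works unchanged for `tutor local do`, `tutor dev do` and `tutor k8s do`.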
#: List of initialization tasks (scripts) to be run in the `init` job. This job
#: includes all database migrations, setup scripts, etc. To run some tasks before or
#: after others, they should be assigned a different priority.
#:
#: :parameter list[tuple[str, str]] tasks: list of ``(service, task)`` tuples. Each
#: task is essentially a bash script to be run in the "service" container. Scripts
#: may contain Jinja markup, similar to templates.
CLI_DO_INIT_TASKS = filters.get("cli:commands:do:init")
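To make the ordering behaviour concrete, here is a toy stand-in (not Tutor's actual implementation) for a priority-ordered filter of ``(service, task)`` items. It assumes, as with action priorities, that items with a smaller priority value are processed first, which is why the pre-init migration path above uses ``priorities.HIGH``:

```python
from typing import List, Tuple

# Assumed convention: smaller priority values are processed first, so
# "high priority" tasks (e.g. pre-init scripts) carry a small value.
HIGH, DEFAULT, LOW = 5, 10, 50


class InitTasksFilter:
    """Toy stand-in for a priority-ordered filter of (service, task) items."""

    def __init__(self) -> None:
        self._items: List[Tuple[int, Tuple[str, str]]] = []

    def add_item(self, item: Tuple[str, str], priority: int = DEFAULT) -> None:
        self._items.append((priority, item))

    def iterate(self) -> List[Tuple[str, str]]:
        # sorted() is stable: items with equal priority keep insertion order.
        return [item for _, item in sorted(self._items, key=lambda pair: pair[0])]


tasks = InitTasksFilter()
tasks.add_item(("lms", "./manage.py lms migrate"))
tasks.add_item(("mysql", "echo 'waiting for mysql...'"), priority=HIGH)
# The high-priority mysql task is iterated before the default-priority migration.
```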
#: DEPRECATED use :py:data:`CLI_DO_INIT_TASKS` instead.
#:
#: List of commands to be executed during initialization. These commands typically
#: include database migrations, setting feature flags, etc.
#:
@ -111,14 +143,17 @@ class Filters:
#:
#: - ``service`` is the name of the container in which the task will be executed.
#: - ``path`` is a tuple that corresponds to a template relative path.
#: Example: ``("myplugin", "hooks", "myservice", "pre-init")`` (see :py:data:`IMAGES_BUILD`).
#: The command to execute will be read from that template, after it is rendered.
COMMANDS_INIT = filters.get("commands:init")
#: DEPRECATED use :py:data:`CLI_DO_INIT_TASKS` instead with a lower priority score.
#:
#: List of commands to be executed prior to initialization. These commands are run even
#: before the mysql databases are created and the migrations are applied.
#:
#: :parameter list[tuple[str, tuple[str, ...]]] tasks: list of ``(service, path)`` tasks. (see :py:data:`COMMANDS_INIT`).
#: :parameter list[tuple[str, tuple[str, ...]]] tasks: list of ``(service, path)``
#: tasks. (see :py:data:`COMMANDS_INIT`).
COMMANDS_PRE_INIT = filters.get("commands:pre-init")
#: Same as :py:data:`COMPOSE_LOCAL_TMP` but for the development environment.
@ -159,40 +194,6 @@ class Filters:
#: Same as :py:data:`COMPOSE_LOCAL_TMP` but for jobs
COMPOSE_LOCAL_JOBS_TMP = filters.get("compose:local-jobs:tmp")
#: Declare new default configuration settings that don't necessarily have to be saved in the user
#: ``config.yml`` file. Default settings may be overridden with ``tutor config save --set=...``, in which
#: case they will automatically be added to ``config.yml``.
@ -298,6 +299,34 @@ class Filters:
#: :parameter filters: list of (name, value) tuples.
ENV_TEMPLATE_VARIABLES = filters.get("env:templates:variables")
#: List of images to be built when we run ``tutor images build ...``.
#:
#: :parameter list[tuple[str, tuple[str, ...], str, tuple[str, ...]]] tasks: list of ``(name, path, tag, args)`` tuples.
#:
#: - ``name`` is the name of the image, as in ``tutor images build myimage``.
#: - ``path`` is the relative path to the folder that contains the Dockerfile.
#: For instance ``("myplugin", "build", "myservice")`` indicates that the template will be read from
#: ``myplugin/build/myservice/Dockerfile``
#: - ``tag`` is the Docker tag that will be applied to the image. It will be
#: rendered at runtime with the user configuration. Thus, the image tag could
#: be ``"{{ DOCKER_REGISTRY }}/myimage:{{ TUTOR_VERSION }}"``.
#: - ``args`` is a list of arguments that will be passed to ``docker build ...``.
#: :parameter dict config: user configuration.
IMAGES_BUILD = filters.get("images:build")
#: List of images to be pulled when we run ``tutor images pull ...``.
#:
#: :parameter list[tuple[str, str]] tasks: list of ``(name, tag)`` tuples.
#:
#: - ``name`` is the name of the image, as in ``tutor images pull myimage``.
#: - ``tag`` is the Docker tag that will be applied to the image. (see :py:data:`IMAGES_BUILD`).
#: :parameter dict config: user configuration.
IMAGES_PULL = filters.get("images:pull")
#: List of images to be pushed when we run ``tutor images push ...``.
#: Parameters are the same as for :py:data:`IMAGES_PULL`.
IMAGES_PUSH = filters.get("images:push")
#: List of installed plugins. In order to be added to this list, a plugin must first
#: be discovered (see :py:data:`Actions.CORE_READY`).
#:


@ -7,7 +7,7 @@ from glob import glob
import click
import pkg_resources
from tutor import env, exceptions, fmt, hooks, serialize
from tutor.__about__ import __app__
from tutor.types import Config
@ -179,12 +179,18 @@ class BasePlugin:
)
# Pre-init scripts: hooks = {"pre-init": ["myservice1", "myservice2"]}
for service in pre_init_tasks:
hooks.Filters.CLI_DO_INIT_TASKS.add_item(
(
service,
env.read_template_file(self.name, "hooks", service, "pre-init"),
),
priority=hooks.priorities.HIGH,
)
# Init scripts: hooks = {"init": ["myservice1", "myservice2"]}
for service in init_tasks:
hooks.Filters.CLI_DO_INIT_TASKS.add_item(
(service, env.read_template_file(self.name, "hooks", service, "init"))
)
def _load_templates_root(self) -> None:
templates_root = get_callable_attr(self.obj, "templates", default=None)


@ -2,11 +2,11 @@ from tutor import env
from tutor.types import Config
class BaseTaskRunner:
    """
    A task runner is responsible for running bash commands in the right context.

    Commands may be loaded from strings or template files. The `run_task` method
    must be implemented by child classes.
    """
@ -14,13 +14,13 @@ class BaseJobRunner:
self.root = root
self.config = config
    def run_task_from_template(self, service: str, *path: str) -> None:
        command = self.render(*path)
        self.run_task(service, command)

    def run_task_from_str(self, service: str, command: str) -> None:
        rendered = env.render_str(self.config, command).strip()
        self.run_task(service, rendered)
def render(self, *path: str) -> str:
rendered = env.render_file(self.config, *path).strip()
@ -28,7 +28,7 @@ class BaseJobRunner:
raise TypeError("Cannot load job from binary file")
return rendered
    def run_task(self, service: str, command: str) -> int:
"""
Given a (potentially large) string command, run it with the
corresponding service. Implementations will differ depending on the
@ -37,6 +37,6 @@ class BaseJobRunner:
raise NotImplementedError
class BaseComposeTaskRunner(BaseTaskRunner):
def docker_compose(self, *command: str) -> int:
raise NotImplementedError
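As a sketch of the interface above, a minimal runner could execute tasks directly on the host with `subprocess` instead of inside a Docker or Kubernetes container. This is illustrative only (the class name is made up, and a real runner dispatches to the named service's container):

```python
import subprocess


class HostTaskRunner:
    """Minimal runner mimicking the BaseTaskRunner.run_task interface."""

    def run_task(self, service: str, command: str) -> int:
        # A real runner would execute `command` inside the `service` container;
        # here we simply run the bash script in a local process.
        completed = subprocess.run(["bash", "-c", command], check=False)
        return completed.returncode
```

For example, `HostTaskRunner().run_task("lms", "echo hello")` runs the script locally and returns its exit code, which is the contract that the compose and k8s runners implement against their respective backends.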


@ -203,6 +203,10 @@ def is_a_tty() -> bool:
def execute(*command: str) -> int:
click.echo(fmt.command(_shlex_join(*command)))
return execute_silent(*command)
def execute_silent(*command: str) -> int:
with subprocess.Popen(command) as p:
try:
result = p.wait(timeout=None)