
Merge remote-tracking branch 'origin/master' into nightly

Overhang.IO 2022-03-10 18:59:44 +00:00
commit 829e89ef3e
10 changed files with 107 additions and 55 deletions

View File

@ -4,6 +4,8 @@ Note: Breaking changes between versions are indicated by "💥".
## Unreleased
- [Feature] Add `tutor k8s apply` command, which is a direct interface with `kubectl apply`.
## v13.1.5 (2022-02-14)
- [Improvement] Upgrade all services to open-release/maple.2.

View File

@ -43,6 +43,9 @@ test-types: ## Check type definitions
test-pythonpackage: build-pythonpackage ## Test that package can be uploaded to pypi
twine check dist/tutor-$(shell make version).tar.gz
test-k8s: ## Validate the k8s format with kubectl. Not part of the standard test suite.
tutor k8s apply --dry-run=client --validate=true
format: ## Format code automatically
black $(BLACK_OPTS)

View File

@ -20,7 +20,7 @@ Tutor was tested with server version 1.14.1 and client 1.14.3.
Memory
~~~~~~
In the following, we assume you have access to a working Kubernetes cluster. `kubectl` should use your cluster configuration by default. To launch a cluster locally, you may try out Minikube. Just follow the `official installation instructions <https://kubernetes.io/docs/setup/minikube/>`_.
In the following, we assume you have access to a working Kubernetes cluster. ``kubectl`` should use your cluster configuration by default. To launch a cluster locally, you may try out Minikube. Just follow the `official installation instructions <https://kubernetes.io/docs/setup/minikube/>`__.
The Kubernetes cluster should have at least 4Gb of RAM on each node. When running Minikube, the virtual machine should have that much allocated memory. See below for an example with VirtualBox:
@ -44,12 +44,12 @@ Use this external IP to configure your DNS records. Once the DNS records are con
If, for some reason, you would like to deploy your own load balancer, you should set ``ENABLE_WEB_PROXY=false`` just like in the :ref:`local installation <web_proxy>`. Then, point your load balancer at the "caddy" service, which will be a `ClusterIP <https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types>`__.
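For instance (a minimal sketch of the usual workflow, assuming no other settings need to change), the flag can be set and the environment re-generated with::

    tutor config save --set ENABLE_WEB_PROXY=false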
S3-like object storage with `MinIO <https://www.minio.io/>`_
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
S3-like object storage with `MinIO <https://www.minio.io/>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Like many web applications, Open edX needs to persist data. In particular, it needs to persist files uploaded by students and course designers. In the local installation, these files are persisted to disk, on the host filesystem. But on Kubernetes, it is difficult to share a single filesystem between different pods. This would require persistent volume claims with `ReadWriteMany` access mode, and these are difficult to set up.
Luckily, there is another solution: at `edx.org <edx.org>`_, uploaded files are persisted on AWS S3: Open edX is compatible out-of-the-box with the S3 API for storing user-generated files. The problem with S3 is that it introduces a dependency on AWS. To solve this problem, Tutor comes with a plugin that emulates the S3 API but stores files on premises. This is achieved thanks to `MinIO <https://www.minio.io/>`_. If you want to deploy a production platform to Kubernetes, you will most certainly need to enable the ``minio`` plugin::
Luckily, there is another solution: at `edx.org <edx.org>`_, uploaded files are persisted on AWS S3: Open edX is compatible out-of-the-box with the S3 API for storing user-generated files. The problem with S3 is that it introduces a dependency on AWS. To solve this problem, Tutor comes with a plugin that emulates the S3 API but stores files on premises. This is achieved thanks to `MinIO <https://www.minio.io/>`__. If you want to deploy a production platform to Kubernetes, you will most certainly need to enable the ``minio`` plugin::
tutor plugins enable minio
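As with any plugin change, the environment should then be re-generated; a sketch of the usual follow-up command::

    tutor config save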
@ -58,7 +58,7 @@ The "minio.LMS_HOST" domain name will have to point to your Kubernetes cluster.
Kubernetes dashboard
~~~~~~~~~~~~~~~~~~~~
This is not a requirement per se, but it's very convenient to have a visual interface of the Kubernetes cluster. We suggest the official `Kubernetes dashboard <https://github.com/kubernetes/dashboard/>`_. Depending on your Kubernetes provider, you may need to install a dashboard yourself. There are generic instructions on the `project's README <https://github.com/kubernetes/dashboard/blob/master/README.md>`_. AWS provides `specific instructions <https://docs.aws.amazon.com/eks/latest/userguide/dashboard-tutorial.html>`_.
This is not a requirement per se, but it's very convenient to have a visual interface of the Kubernetes cluster. We suggest the official `Kubernetes dashboard <https://github.com/kubernetes/dashboard/>`__. Depending on your Kubernetes provider, you may need to install a dashboard yourself. There are generic instructions on the `project's README <https://github.com/kubernetes/dashboard/blob/master/README.md>`__. AWS provides `specific instructions <https://docs.aws.amazon.com/eks/latest/userguide/dashboard-tutorial.html>`__.
On Minikube, the dashboard is already installed. To access the dashboard, run::
@ -102,7 +102,7 @@ As with the :ref:`local installation <local>`, there are multiple commands to ru
tutor k8s -h
In particular, the `tutor k8s start` command restarts and reconfigures all services by running ``kubectl apply``. That means that you can delete containers, deployments or just any other kind of resources, and Tutor will re-create them automatically. You should just beware of not deleting any persistent data stored in persistent volume claims. For instance, to restart from a "blank slate", run::
In particular, the ``tutor k8s start`` command restarts and reconfigures all services by running ``kubectl apply``. That means that you can delete containers, deployments, or any other kind of resource, and Tutor will re-create them automatically. Just be careful not to delete any persistent data stored in persistent volume claims. For instance, to restart from a "blank slate", run::
tutor k8s stop
tutor k8s start
@ -112,6 +112,15 @@ All non-persisting data will be deleted, and then re-created.
Common tasks
------------
Executing commands inside service pods
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Tutor and plugin documentation often includes instructions to execute some ``tutor local run ...`` commands. These commands are only valid when running Tutor locally with docker-compose, and will not work on Kubernetes. Instead, you should run ``tutor k8s exec ...`` commands. Arguments and options should be identical.
For instance, to run a Python shell in the lms container, run::
tutor k8s exec lms ./manage.py lms shell
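Similarly, a command that the plugin documentation might phrase as ``tutor local run lms ./manage.py lms migrate`` would become, with the same arguments (a sketch)::

    tutor k8s exec lms ./manage.py lms migrate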
Running a custom "openedx" Docker image
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -124,6 +133,6 @@ Some Tutor plugins and customization procedures require that the "openedx" image
Updating docker images
~~~~~~~~~~~~~~~~~~~~~~
Kubernetes does not provide a single command for updating docker images out of the box. A `commonly used trick <https://github.com/kubernetes/kubernetes/issues/33664>`_ is to modify an innocuous label on all resources::
Kubernetes does not provide a single command for updating docker images out of the box. A `commonly used trick <https://github.com/kubernetes/kubernetes/issues/33664>`__ is to modify an innocuous label on all resources::
kubectl patch -k "$(tutor config printroot)/env" --patch "{\"spec\": {\"template\": {\"metadata\": {\"labels\": {\"date\": \"`date +'%Y%m%d-%H%M%S'`\"}}}}}"

View File

@ -1,7 +1,13 @@
Plugin API
==========
Plugins can affect the behaviour of Tutor at multiple levels. First, plugins can define new services with their Docker images, settings and the right initialisation commands. To do so you will have to define custom :ref:`config <plugin_config>`, :ref:`patches <plugin_patches>`, :ref:`hooks <plugin_hooks>` and :ref:`templates <plugin_templates>`. Then, plugins can also extend the CLI by defining their own :ref:`commands <plugin_command>`.
Plugins can affect the behaviour of Tutor at multiple levels. They can:
* Add new settings or modify existing ones in the Tutor configuration (see :ref:`config <plugin_config>`).
* Add new templates to the Tutor project environment or modify existing ones (see :ref:`patches <plugin_patches>`, :ref:`templates <plugin_templates>` and :ref:`hooks <plugin_hooks>`).
* Add custom commands to the Tutor CLI (see :ref:`command <plugin_command>`).
There exist two different APIs for creating Tutor plugins: YAML files and Python packages. YAML files are simpler to create, but they are limited to configuration and template patches.
.. _plugin_config:
@ -60,7 +66,8 @@ Example::
This will add a Redis instance to the services run with ``tutor local`` commands.
.. note::
The ``patches`` attribute can be a callable function instead of a static dict value.
In Python plugins, remember that ``patches`` can be a callable function instead of a static dict value.
One can use this to dynamically load a list of patch files from a folder.
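For instance, here is a minimal sketch of such a callable, assuming a hypothetical ``myplugin`` package that ships its patch files in a ``patches`` folder::

    import os
    import pkg_resources

    def patches():
        # Hypothetical example: read every file from the package's "patches"
        # folder and expose it as a patch named after the file.
        all_patches = {}
        patches_dir = pkg_resources.resource_filename("myplugin", "patches")
        for patch_name in os.listdir(patches_dir):
            patch_path = os.path.join(patches_dir, patch_name)
            with open(patch_path, encoding="utf-8") as patch_file:
                all_patches[patch_name] = patch_file.read()
        return all_patches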
.. _plugin_hooks:
@ -176,7 +183,9 @@ When saving the environment, template files that are stored in a template root w
command
~~~~~~~
A plugin can provide custom command line commands. Commands are assumed to be `click.Command <https://click.palletsprojects.com/en/8.0.x/api/#commands>`__ objects, and you typically implement them using the `click.command <https://click.palletsprojects.com/en/8.0.x/api/#click.command>`__ decorator.
Python plugins can provide a custom command line interface.
The ``command`` attribute is assumed to be a `click.Command <https://click.palletsprojects.com/en/8.0.x/api/#commands>`__ object,
and you typically implement it using the `click.command <https://click.palletsprojects.com/en/8.0.x/api/#click.command>`__ decorator.
You may also use the `click.pass_obj <https://click.palletsprojects.com/en/8.0.x/api/#click.pass_obj>`__ decorator to pass the CLI `context <https://click.palletsprojects.com/en/8.0.x/api/#click.Context>`__, such as when you want to access Tutor configuration settings from your command.
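As a sketch (the command name and the printed setting are illustrative assumptions), a plugin command that reads a configuration value could look like this::

    import click
    from tutor import config as tutor_config

    @click.command(name="printlmshost", help="Print the LMS host of the platform")
    @click.pass_obj
    def command(context):
        # "context.root" is the Tutor project root, passed down by the Tutor CLI.
        config = tutor_config.load(context.root)
        click.echo(config["LMS_HOST"])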
@ -207,11 +216,21 @@ You can even define subcommands by creating `command groups <https://click.palle
def command():
pass
@click.command(help="I'm a plugin subcommand")
@command.command(help="I'm a plugin subcommand")
def dosomething():
click.echo("This subcommand is awesome")
This would allow any user to run::
This would allow any user to see your sub-commands::
$ tutor myplugin
Usage: tutor myplugin [OPTIONS] COMMAND [ARGS]...
I'm a plugin command group
Commands:
dosomething I'm a plugin subcommand
and then run them::
$ tutor myplugin dosomething
This subcommand is awesome

View File

@ -14,7 +14,7 @@ YAML files that are stored in the tutor plugins root folder will be automaticall
On Linux, this points to ``~/.local/share/tutor-plugins``. The location of the plugin root folder can be modified by setting the ``TUTOR_PLUGINS_ROOT`` environment variable.
YAML plugins need to define two extra keys: "name" and "version". Custom CLI commands are not supported by YAML plugins.
YAML plugins must define two special top-level keys: ``name`` and ``version``. Then, YAML plugins may use two more top-level keys to customize Tutor's behavior: ``config`` and ``patches``. Custom CLI commands, templates, and hooks are not supported by YAML plugins.
Let's create a simple plugin that adds your own `Google Analytics <https://analytics.google.com/>`__ tracking code to your Open edX platform. We need to add the ``GOOGLE_ANALYTICS_ACCOUNT`` and ``GOOGLE_ANALYTICS_TRACKING_ID`` settings to both the LMS and the CMS settings. To do so, we will only have to create the ``openedx-common-settings`` patch, which is shared by the development and the production settings both for the LMS and the CMS. First, create the plugin directory::
@ -58,17 +58,34 @@ That's it! And it's very easy to share your plugins. Just upload them to your Gi
Python package
~~~~~~~~~~~~~~
Creating a plugin as a Python package allows you to define more complex logic and to store your patches in a more structured way. Python Tutor plugins are regular Python packages that define a specific entrypoint: ``tutor.plugin.v0``.
Creating a plugin as a Python package allows you to define more complex logic and to store your patches in a more structured way. Python Tutor plugins are regular Python packages that define an entrypoint within the ``tutor.plugin.v0`` group:
Example::
from setuptools import setup
setup(
...
entry_points={"tutor.plugin.v0": ["myplugin = myplugin.plugin"]},
entry_points={
"tutor.plugin.v0": ["myplugin = myplugin.plugin"]
},
)
The ``myplugin.plugin`` python module should then declare the ``config``, ``hooks``, etc. attributes that will define its behaviour.
The ``myplugin/plugin.py`` Python module can then define the attributes ``config``, ``patches``, ``hooks``, and ``templates`` to specify the plugin's behavior. The attributes may be defined either as dictionaries or as zero-argument callables returning dictionaries; in the latter case, the callable will be evaluated upon plugin load. Finally, the ``command`` attribute can be defined as an instance of ``click.Command`` to define the plugin's command line interface.
Example::
import click
import pkg_resources

templates = pkg_resources.resource_filename(...)
config = {...}
hooks = {...}
def patches():
...
@click.command(...)
def command():
...
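Once the package is installed in the same Python environment as Tutor, the plugin can be enabled like any other; a sketch of the usual workflow, assuming the ``myplugin`` entrypoint name from above::

    pip install -e .
    tutor plugins enable myplugin
    tutor config save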
To get started on the right foot, it is strongly recommended to create your first plugin with the `tutor plugin cookiecutter <https://github.com/overhangio/cookiecutter-tutor-plugin>`__::

View File

@ -1,5 +1,5 @@
Reference
=========
CLI Reference
=============
.. toctree::
:maxdepth: 2

View File

@ -44,7 +44,7 @@ class ComposeJobRunner(jobs.BaseComposeJobRunner):
run_command += ["run", "--rm"]
if not utils.is_a_tty():
run_command += ["-T"]
job_service_name = "{}-job".format(service)
job_service_name = f"{service}-job"
return self.docker_compose(
*run_command,
job_service_name,
@ -224,9 +224,8 @@ def bindmount_command(context: BaseComposeContext, service: str, path: str) -> N
config = tutor_config.load(context.root)
host_path = bindmounts.create(context.job_runner(config), service, path)
fmt.echo_info(
"Bind-mount volume created at {}. You can now use it in all `local` and `dev` commands with the `--volume={}` option.".format(
host_path, path
)
f"Bind-mount volume created at {host_path}. You can now use it in all `local` and `dev` "
f"commands with the `--volume={path}` option."
)
@ -286,12 +285,10 @@ def dc_command(context: BaseComposeContext, command: str, args: List[str]) -> No
host_bind_path = bindmounts.get_path(context.root, volume_arg)
if not os.path.exists(host_bind_path):
raise TutorError(
(
"Bind-mount volume directory {} does not exist. It must first be created"
" with the '{}' command."
).format(host_bind_path, bindmount_command.name)
f"Bind-mount volume directory {host_bind_path} does not exist. It must first be created "
f"with the '{bindmount_command.name}' command."
)
volume_arg = "{}:{}".format(host_bind_path, volume_arg)
volume_arg = f"{host_bind_path}:{volume_arg}"
volume_args += ["--volume", volume_arg]
context.job_runner(config).docker_compose(command, *volume_args, *non_volume_args)

View File

@ -112,10 +112,7 @@ class K8sJobRunner(jobs.BaseJobRunner):
serialize.dump(job, job_file)
# We cannot use the k8s API to create the job: configMap and volume names need
# to be found with the right suffixes.
utils.kubectl(
"apply",
"--kustomize",
tutor_env.pathjoin(self.root),
kubectl_apply(
"--selector",
f"app.kubernetes.io/name={job_name}",
)
@ -225,10 +222,7 @@ def start(context: Context, names: List[str]) -> None:
fmt.echo_info("Namespace already exists: skipping creation.")
except exceptions.TutorError:
fmt.echo_info("Namespace does not exist: now creating it...")
utils.kubectl(
"apply",
"--kustomize",
tutor_env.pathjoin(context.root),
kubectl_apply(
"--wait",
"--selector",
"app.kubernetes.io/component=namespace",
@ -475,6 +469,25 @@ def upgrade(context: click.Context, from_release: Optional[str]) -> None:
context.invoke(config_save_command)
@click.command(
short_help="Direct interface to `kubectl apply`.",
help=(
"This is a direct interface to `kubectl apply`: all options and"
" arguments passed to this command will be forwarded as-is to `kubectl apply`."
),
context_settings={"ignore_unknown_options": True},
name="apply",
)
@click.argument("args", nargs=-1)
@click.pass_obj
def apply_command(context: Context, args: List[str]) -> None:
kubectl_apply(context.root, *args)
def kubectl_apply(root: str, *args: str) -> None:
utils.kubectl("apply", "--kustomize", tutor_env.pathjoin(root), *args)
def kubectl_exec(
config: Config, service: str, command: str, attach: bool = False
) -> int:
@ -551,3 +564,4 @@ k8s.add_command(exec_command)
k8s.add_command(logs)
k8s.add_command(wait)
k8s.add_command(upgrade)
k8s.add_command(apply_command)

View File

@ -152,7 +152,7 @@ COPY --chown=app:app settings/cms/*.py ./cms/envs/tutor/
RUN mkdir /openedx/locale/user
COPY --chown=app:app ./locale/ /openedx/locale/user/locale/
RUN cd /openedx/locale/user && \
django-admin.py compilemessages -v1
django-admin compilemessages -v1
# Compile i18n strings: in some cases, js locales are not properly compiled out of the box
# and we need to do a pass ourselves. Also, we need to compile the djangojs.js files for

View File

@ -45,15 +45,11 @@ def ensure_file_directory_exists(path: str) -> None:
directory = os.path.dirname(path)
if os.path.isfile(directory):
raise exceptions.TutorError(
"Attempting to create a directory, but a file with the same name already exists: {}".format(
directory
)
f"Attempting to create a directory, but a file with the same name already exists: {directory}"
)
if os.path.isdir(path):
raise exceptions.TutorError(
"Attempting to write to a file, but a directory with the same name already exists: {}".format(
directory
)
f"Attempting to write to a file, but a directory with the same name already exists: {directory}"
)
if not os.path.exists(directory):
os.makedirs(directory)
@ -123,7 +119,7 @@ def long_to_base64(n: int) -> str:
return _bytes
bys = long2intarr(n)
data = struct.pack("%sB" % len(bys), *bys)
data = struct.pack(f"{len(bys)}B", *bys)
if not data:
data = b"\x00"
s = base64.urlsafe_b64encode(data).rstrip(b"=")
@ -202,24 +198,21 @@ def execute(*command: str) -> int:
except Exception as e:
p.kill()
p.wait()
raise exceptions.TutorError(
"Command failed: {}".format(" ".join(command))
) from e
raise exceptions.TutorError(f"Command failed: {' '.join(command)}") from e
if result > 0:
raise exceptions.TutorError(
"Command failed with status {}: {}".format(result, " ".join(command))
f"Command failed with status {result}: {' '.join(command)}"
)
return result
def check_output(*command: str) -> bytes:
click.echo(fmt.command(" ".join(command)))
literal_command = " ".join(command)
click.echo(fmt.command(literal_command))
try:
return subprocess.check_output(command)
except Exception as e:
raise exceptions.TutorError(
"Command failed: {}".format(" ".join(command))
) from e
raise exceptions.TutorError(f"Command failed: {literal_command}") from e
def check_macos_docker_memory() -> None:
@ -237,7 +230,7 @@ def check_macos_docker_memory() -> None:
)
try:
with open(settings_path) as fp:
with open(settings_path, encoding="utf-8") as fp:
data = json.load(fp)
memory_mib = int(data["memoryMiB"])
except OSError as e:
@ -264,7 +257,5 @@ def check_macos_docker_memory() -> None:
if memory_mib < 4096:
raise exceptions.TutorError(
"Docker is configured to allocate {} MiB RAM, less than the recommended {} MiB".format(
memory_mib, 4096
)
f"Docker is configured to allocate {memory_mib} MiB RAM, less than the recommended {4096} MiB"
)