Working Kubernetes quickstart

The k8s quickstart command is now functional, with support for HTTPS,
Xqueue, Notes and MinIO. There are still a few bugs to iron out,
though.
This commit is contained in:
Régis Behmo 2019-06-06 21:58:21 +02:00
parent d9b6895629
commit 84f2060d33
39 changed files with 785 additions and 250 deletions

View File

@ -1,5 +1,6 @@
.DEFAULT_GOAL := help
BLACK_OPTS = --exclude templates ./tutor ./tests ./plugins
SRC_DIRS = ./tutor ./tests ./plugins ./bin
BLACK_OPTS = --exclude templates ${SRC_DIRS}
###### Development
@ -14,7 +15,7 @@ test-format: ## Run code formatting tests
black --check --diff $(BLACK_OPTS)
test-lint: ## Run code linting tests
pylint --errors-only tutor tests plugins
pylint --errors-only ${SRC_DIRS}
test-unit: test-unit-core test-unit-plugins ## Run unit tests
@ -30,8 +31,7 @@ format: ## Format code automatically
###### Deployment
bundle: ## Bundle the tutor package in a single "dist/tutor" executable
# TODO bundle plugins
pyinstaller --onefile --name=tutor --add-data=./tutor/templates:./tutor/templates ./bin/main
pyinstaller --onefile --name=tutor --add-data=./tutor/templates:./tutor/templates ./bin/main.py
dist/tutor:
$(MAKE) bundle

View File

@ -1,4 +0,0 @@
#!/usr/bin/env python3
from tutor.commands.cli import main
main()

10 bin/main.py Executable file
View File

@ -0,0 +1,10 @@
#!/usr/bin/env python3
from tutor.commands.cli import main
# Manually adding plugins to bundle
from tutor.plugins import Plugins
import tutorminio.plugin
Plugins.EXTRA_INSTALLED["minio"] = tutorminio.plugin
main()

View File

@ -83,6 +83,15 @@ You may want to pull/push images from/to a custom docker registry. For instance,
Vendor services
~~~~~~~~~~~~~~~
Nginx
*****
- ``NGINX_HTTP_PORT`` (default: ``80``)
- ``NGINX_HTTPS_PORT`` (default: ``443``)
- ``WEB_PROXY`` (default: ``true``)
Nginx is used to route web traffic to the various applications and to serve static assets. If there is another web server in front of the Nginx container (for instance, a web server running on the host, or an Ingress controller on Kubernetes), the ports exposed by the container can be modified. If ``WEB_PROXY`` is set to ``true``, we assume that SSL termination does not occur in the Nginx container.
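For example, if another web server on the host already binds ports 80 and 443, you might move the Nginx container to different ports (a sketch; the values 81 and 444 are arbitrary)::

    tutor config save --set NGINX_HTTP_PORT=81
    tutor config save --set NGINX_HTTPS_PORT=444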
MySQL
*****

View File

@ -22,6 +22,7 @@
local
k8s
dev
plugins
extra
troubleshooting
tutor

View File

@ -15,8 +15,8 @@ Memory
In the following, we assume you have access to a working Kubernetes cluster. `kubectl` should use your cluster configuration by default. To launch a cluster locally, you may try out Minikube. Just follow the `official installation instructions <https://kubernetes.io/docs/setup/minikube/>`_.
The Kubernetes cluster should have at least 4Gb of RAM on each node. When running Minikube, the virtual machine should have that much allocated memory. See below for an example with VirtualBox::
The Kubernetes cluster should have at least 4Gb of RAM on each node. When running Minikube, the virtual machine should have that much allocated memory. See below for an example with VirtualBox:
.. image:: img/virtualbox-minikube-system.png
:alt: Virtualbox memory settings for Minikube
@ -40,12 +40,31 @@ With Kubernetes, your Open edX platform will *not* be available at localhost or
where ``MINIKUBEIP`` should be replaced by the result of the command ``minikube ip``.
`ReadWriteMany` storage provider access mode
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cert-manager for TLS certificates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Some of the data volumes are shared between pods and thus require the `ReadWriteMany` access mode. We assume that a persistent volume provisioner with such capability is already installed on the cluster. For instance, on AWS the `AWS EBS <https://kubernetes.io/docs/concepts/storage/storage-classes/#aws-ebs>`_ provisioner is available. On DigitalOcean, there is `no such provider <https://www.digitalocean.com/docs/kubernetes/how-to/add-volumes/>`_ out of the box and you have to install one yourself.
Tutor relies on `cert-manager <https://docs.cert-manager.io/>`_ to generate TLS certificates for HTTPS access. In order to activate HTTPS support, you will have to install cert-manager yourself. To do so, follow the `instructions from the official documentation <https://docs.cert-manager.io/en/latest/getting-started/install/kubernetes.html>`_. It might be as simple as running::
On Minikube, the standard storage class uses the `k8s.io/minikube-hostpath <https://kubernetes.io/docs/concepts/storage/volumes/#hostpath>`_ provider, which supports `ReadWriteMany` access mode out of the box, so there is no need to install an extra provider.
kubectl create namespace cert-manager
kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.8.0/cert-manager.yaml
If you decide to enable HTTPS certificates, you will also have to set ``WEB_PROXY=true`` in the platform configuration, because the SSL/TLS termination will not occur in the Nginx container, but in the Ingress controller. This parameter will automatically be set during quickstart; you can also do it manually with::
tutor config save --set WEB_PROXY=true
Note that this configuration might conflict with a local installation.
S3-like object storage with `MinIO <https://www.minio.io/>`_
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Like many web applications, Open edX needs to persist data. In particular, it needs to persist files uploaded by students and course designers. In the local installation, these files are persisted to disk, on the host filesystem. But on Kubernetes, it is difficult to share a single filesystem between different pods. This would require persistent volume claims with `ReadWriteMany` access mode, and these are difficult to set up.
Luckily, there is another solution: at `edx.org <https://edx.org>`_, uploaded files are persisted on AWS S3: Open edX is compatible out of the box with the S3 API for storing user-generated files. The problem with S3 is that it introduces a dependency on AWS. To solve this problem, Tutor comes with a plugin that emulates the S3 API but stores files on premises. This is achieved thanks to `MinIO <https://www.minio.io/>`_. If you want to deploy a production platform to Kubernetes, you will most certainly need to enable the ``minio`` plugin::
tutor plugins enable minio
The "minio.LMS_HOST" domain name will have to point to your Kubernetes cluster. This will not be necessary if you have a CNAME from "\*.LMS_HOST" to "LMS_HOST", of course.
Kubernetes dashboard
~~~~~~~~~~~~~~~~~~~~
@ -71,7 +90,7 @@ The other benefit of ``kubectl apply`` is that it allows you to customise the Ku
- ../env/
...
To learn more about "kustomizations", refer to the `official documentation <https://kubectl.docs.kubernetes.io/pages/app_customization/introduction.html>`_.
To learn more about "kustomizations", refer to the `official documentation <https://kubectl.docs.kubernetes.io/pages/app_customization/introduction.html>`__.
Quickstart
----------
@ -93,13 +112,22 @@ Other commands
As with the :ref:`local installation <local>`, there are multiple commands to run operations on your Open edX platform. To view those commands, run::
tutor k8s -h
In particular, the ``tutor k8s start`` command restarts and reconfigures all services by running ``kubectl apply``. That means you can delete containers, deployments or any other kind of resource, and Tutor will re-create them automatically. Just be careful not to delete any persistent data stored in persistent volume claims. For instance, to restart from a "blank slate", run::
tutor k8s stop
tutor k8s start
Missing features
----------------
All non-persisting data will be deleted, and then re-created.
For now, the following features from the local deployment are not supported:
Recipes
-------
Updating docker images
~~~~~~~~~~~~~~~~~~~~~~
Kubernetes does not provide a single command for updating docker images out of the box. A `commonly used trick <https://github.com/kubernetes/kubernetes/issues/33664>`_ is to modify an innocuous label on all resources::
kubectl patch -k "$(tutor config printroot)/env" --patch "{\"spec\": {\"template\": {\"metadata\": {\"labels\": {\"date\": \"`date +'%Y%m%d-%H%M%S'`\"}}}}}"
- HTTPS certificates
- Xqueue
Kubernetes deployment is under intense development, and these features should be implemented pretty soon. Stay tuned 🤓

126 docs/plugins.rst Normal file
View File

@ -0,0 +1,126 @@
.. _plugins:
Plugins
=======
Since v3.4.0, Tutor comes with a plugin system that allows anyone to customise the deployment of an Open edX platform very easily. The vision behind this plugin system is that users should not have to fork the Tutor repository to customise their deployments. For instance, if you have created a new application that integrates with Open edX, you should not have to describe how to manually patch the platform settings, ``urls.py`` or ``*.env.json`` files. Instead, you can create a "tutor-myapp" plugin for Tutor. Then, users will start using your application in three simple steps::
# 1) Install the plugin
pip install tutor-myapp
# 2) Enable the plugin
tutor plugins enable myapp
# 3) Restart the platform
tutor local quickstart
Commands
--------
List installed plugins::
tutor plugins list
Enable/disable a plugin::
tutor plugins enable myplugin
tutor plugins disable myplugin
After enabling or disabling a plugin, the environment should be re-generated with::
tutor config save
API (v0)
--------
Note: The API for developing Tutor plugins is still considered unstable: profound changes should be expected for some time.
There are two mechanisms by which a plugin can integrate with Tutor: patches and hooks. Patches affect the rendered environment templates, while hooks are actions that are run during the lifetime of an Open edX platform. A plugin indicates which templates it patches, and which hooks it needs to run.
Entrypoint
~~~~~~~~~~
A plugin is a regular python package with a specific entrypoint: ``tutor.plugin.v0``.
Example::
from setuptools import setup
setup(
...
entry_points={"tutor.plugin.v0": ["myplugin = myplugin.plugin"]},
)
The ``myplugin.plugin`` python module should then declare a few attributes that will define its behaviour.
``config``
~~~~~~~~~~
The ``config`` attribute is used to modify existing and add new configuration parameters:
* ``config["set"]`` are key/values that should be modified.
* ``config["defaults"]`` are default key/values for this plugin. Key names will automatically be prefixed with the plugin name (as declared in the entrypoint), in upper case.
Example::
config = {
"set": {
"DOCKER_IMAGE_OPENEDX": "openedx:mytag",
},
"defaults": {
"PARAM": "somevalue",
},
}
This will override the ``DOCKER_IMAGE_OPENEDX`` configuration parameter and will add a new parameter ``MYPLUGIN_PARAM`` that will be equal to "somevalue".
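Since ``MYPLUGIN_PARAM`` becomes a regular configuration parameter once the plugin is enabled, users can override it like any other value (a sketch; the value is arbitrary)::

    tutor config save --set MYPLUGIN_PARAM=othervalue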
``patches``
~~~~~~~~~~~
The Tutor templates include calls to ``{{ patch("patchname") }}`` in many different places. Plugins can add content in these places by adding values to the ``patches`` attribute.
The ``patches`` attribute can be a callable function instead of a static attribute.
Example::
patches = {
"local-docker-compose-services": """redis:
image: redis:latest"""
}
This will add a Redis instance to the services run with ``tutor local`` commands.
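The callable form makes it possible to build the dict dynamically, for instance by reading every file from a ``patches`` directory shipped next to ``plugin.py`` (a sketch; the directory layout is an assumption, not part of the API)::

    import codecs
    import os

    HERE = os.path.abspath(os.path.dirname(__file__))

    def patches():
        # Expose every file in the "patches" folder as a patch named after the file
        all_patches = {}
        patches_dir = os.path.join(HERE, "patches")
        for patch_name in os.listdir(patches_dir):
            with codecs.open(os.path.join(patches_dir, patch_name), encoding="utf-8") as f:
                all_patches[patch_name] = f.read()
        return all_patches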
``hooks``
~~~~~~~~~
Hooks are actions that are run at specific points in the lifetime of the platform. Currently, there is just one hook: ``init``. Add to it the services that should run during initialisation, for instance for database creation and migrations.
Example::
hooks = {"init": ["myservice1", "myservice2"]}
During initialisation, "myservice1" and "myservice2" will be run in sequence with the commands defined in the templates ``myplugin/hooks/myservice1/init`` and ``myplugin/hooks/myservice2/init``.
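An init template is just a sequence of shell commands that will be executed against the corresponding service container. A minimal sketch of ``templates/myplugin/hooks/myservice1/init`` might look like this (the commands themselves are placeholders)::

    echo "Initialising myservice1..."
    # run any one-off setup here, e.g. creating a database or running migrations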
``templates``
~~~~~~~~~~~~~
In order to define plugin-specific hooks, a plugin should also have a template directory that includes the plugin hooks. The ``templates`` attribute should point to that directory.
Example::
import os
templates = os.path.join(os.path.abspath(os.path.dirname(__file__)), "templates")
With the above declaration, you can store plugin-specific templates in the ``templates/myplugin`` folder next to the ``plugin.py`` file.
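Putting it all together, a plugin source tree might be laid out as follows (a sketch; only the ``templates/myplugin`` convention and the ``hooks/<service>/init`` naming come from the sections above)::

    tutor-myplugin/
    ├── setup.py
    └── myplugin/
        ├── plugin.py
        └── templates/
            └── myplugin/
                └── hooks/
                    └── myservice1/
                        └── init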
Existing plugins
----------------
For now, there is just one Tutor plugin. In the future, Xqueue and Student Notes will be moved out of the main configuration and into plugins of their own.
MinIO
~~~~~
::
tutor plugins enable minio
See the `plugin documentation <https://github.com/regisb/tutor/tree/master/plugins/minio>`_.

View File

@ -1,11 +1,46 @@
TODO
Object storage for Open edX with `MinIO <https://www.minio.io/>`_
=================================================================
- This is mainly for production. Does not work with `tutor dev` commands.
- For local testing, you need to set MINIO_HOST to minio.localhost:
This is a plugin for `Tutor <https://docs.tutor.overhang.io>`_ that provides S3-like object storage for Open edX platforms. It's S3, but without the dependency on AWS. This is achieved thanks to `MinIO <https://www.minio.io/>`_, an open source project that provides object storage with an API compatible with S3.
In particular, this plugin is essential for `Kubernetes deployment <https://docs.tutor.overhang.io/k8s.html>`_.
Installation
------------
The plugin is currently bundled with the `binary releases of Tutor <https://github.com/regisb/tutor/releases>`_. If you have installed Tutor from source, you will have to install this plugin from source, too::
git clone https://github.com/regisb/tutor/
pip install -e tutor/plugins/minio
Then, to enable this plugin, run::
tutor plugins enable minio
Configuration
-------------
- ``MINIO_BUCKET_NAME`` (default: ``"openedx"``)
- ``MINIO_FILE_UPLOAD_BUCKET_NAME`` (default: ``"openedxuploads"``)
- ``MINIO_COURSE_IMPORT_EXPORT_BUCKET`` (default: ``"openedxcourseimportexport"``)
- ``MINIO_HOST`` (default: ``"minio.{{ LMS_HOST }}"``)
- ``MINIO_DOCKER_REGISTRY`` (default: ``"{{ DOCKER_REGISTRY }}"``)
- ``MINIO_DOCKER_IMAGE_CLIENT`` (default: ``"minio/mc:RELEASE.2019-05-23T01-33-27Z"``)
- ``MINIO_DOCKER_IMAGE_SERVER`` (default: ``"minio/minio:RELEASE.2019-05-23T00-29-34Z"``)
These values can be modified with ``tutor config save --set PARAM_NAME=VALUE`` commands.
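For instance, to store user uploads in a differently named bucket (a sketch; the bucket name is arbitrary)::

    tutor config save --set MINIO_FILE_UPLOAD_BUCKET_NAME=myuploads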
DNS records
-----------
It is assumed that the ``MINIO_HOST`` DNS record points to your server. When running MinIO on your laptop, you should point your services to ``minio.localhost``::
tutor config save --set MINIO_HOST=minio.localhost
- You need `minio.LMS_HOST` domain name. For local development, the MinIO admin dashboard is at minio.localhost. For authentication, use MINIO_ACCESS_KEY and MINIO_SECRET_KEY:
tutor config printvalue OPENEDX_AWS_ACCESS_KEY
tutor config printvalue OPENEDX_AWS_SECRET_ACCESS_KEY
Web UI
------
The MinIO web UI can be accessed at http://<MINIO_HOST>. The credentials for accessing the UI can be obtained with::
tutor config printvalue OPENEDX_AWS_ACCESS_KEY
tutor config printvalue OPENEDX_AWS_SECRET_ACCESS_KEY

View File

@ -9,6 +9,8 @@ spec:
selector:
matchLabels:
app.kubernetes.io/name: minio
strategy:
type: Recreate
template:
metadata:
labels:
@ -16,7 +18,13 @@ spec:
spec:
containers:
- name: minio
image: {{ MINIO_DOCKER_REGISTRY }}{{ MINIO_DOCKER_IMAGE }}
image: {{ MINIO_DOCKER_REGISTRY }}{{ MINIO_DOCKER_IMAGE_SERVER }}
args: ["server", "--address", ":9000", "/data"]
env:
- name: MINIO_ACCESS_KEY
value: "{{ OPENEDX_AWS_ACCESS_KEY }}"
- name: MINIO_SECRET_KEY
value: "{{ OPENEDX_AWS_SECRET_ACCESS_KEY }}"
ports:
- containerPort: 9000
volumeMounts:
@ -25,4 +33,25 @@ spec:
volumes:
- name: data
persistentVolumeClaim:
claimName: minio
claimName: minio
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: minio-client
labels:
app.kubernetes.io/name: minio-client
spec:
selector:
matchLabels:
app.kubernetes.io/name: minio-client
template:
metadata:
labels:
app.kubernetes.io/name: minio-client
spec:
containers:
- name: minio
image: {{ MINIO_DOCKER_REGISTRY }}{{ MINIO_DOCKER_IMAGE_CLIENT }}
command: ["sh", "-e", "-c"]
args: ["while true; do echo 'ready'; sleep 10; done"]

View File

@ -0,0 +1,18 @@
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: {{ MINIO_HOST|replace(".", "-") }}
spec:
secretName: {{ MINIO_HOST }}-tls
issuerRef:
name: letsencrypt
commonName: {{ MINIO_HOST }}
dnsNames:
- {{ MINIO_HOST }}
acme:
config:
- http01:
ingress: web
domains:
- {{ MINIO_HOST }}

View File

@ -0,0 +1,6 @@
- host: {{ MINIO_HOST }}
http:
paths:
- backend:
serviceName: nginx
servicePort: {% if ACTIVATE_HTTPS %}443{% else %}80{% endif %}

View File

@ -0,0 +1 @@
- {{ MINIO_HOST }}

View File

@ -0,0 +1,12 @@
---
apiVersion: v1
kind: Service
metadata:
name: minio
spec:
type: NodePort
ports:
- port: 9000
protocol: TCP
selector:
app.kubernetes.io/name: minio

View File

@ -0,0 +1,14 @@
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: minio
labels:
app.kubernetes.io/component: volume
app.kubernetes.io/name: minio
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi

View File

@ -2,6 +2,13 @@
upstream minio-backend {
server minio:9000 fail_timeout=0;
}
{% if ACTIVATE_HTTPS %}
server {
server_name {{ MINIO_HOST }};
listen 80;
return 301 https://$server_name$request_uri;
}
{% endif %}
server {
{% if ACTIVATE_HTTPS %}listen 443 {{ "" if WEB_PROXY else "ssl" }};{% else %}listen 80;{% endif %}
server_name minio.localhost {{ MINIO_HOST }};
@ -13,6 +20,8 @@ server {
# Disables server version feedback on pages and in headers
server_tokens off;
client_max_body_size 0;
location / {
proxy_set_header X-Forwarded-Proto $scheme;

View File

@ -21,7 +21,7 @@ config = {
templates = os.path.join(HERE, "templates")
scripts = {"init": ["minio-client"]}
hooks = {"init": ["minio-client"]}
def patches():

View File

@ -39,7 +39,7 @@ class EnvTests(unittest.TestCase):
config = {}
tutor_config.merge(config, tutor_config.load_defaults())
config["MYSQL_ROOT_PASSWORD"] = "testpassword"
rendered = env.render_file(config, "scripts", "mysql-client", "init")
rendered = env.render_file(config, "hooks", "mysql-client", "init")
self.assertIn("testpassword", rendered)
@unittest.mock.patch.object(tutor_config.fmt, "echo")

View File

@ -8,7 +8,7 @@ from tutor import plugins
class PluginsTests(unittest.TestCase):
def setUp(self):
plugins.Patches.CACHE.clear()
plugins.Plugins.clear()
def test_iter_installed(self):
with unittest.mock.patch.object(
@ -19,6 +19,25 @@ class PluginsTests(unittest.TestCase):
def test_is_installed(self):
self.assertFalse(plugins.is_installed("dummy"))
def test_extra_installed(self):
class plugin1:
pass
class plugin2:
pass
plugins.Plugins.EXTRA_INSTALLED["plugin1"] = plugin1
plugins.Plugins.EXTRA_INSTALLED["plugin2"] = plugin2
with unittest.mock.patch.object(
plugins.Plugins,
"iter_installed_entrypoints",
return_value=[("plugin1", plugin1)],
):
self.assertEqual(
[("plugin1", plugin1), ("plugin2", plugin2)],
list(plugins.iter_installed()),
)
def test_enable(self):
config = {plugins.CONFIG_KEY: []}
with unittest.mock.patch.object(plugins, "is_installed", return_value=True):
@ -48,7 +67,7 @@ class PluginsTests(unittest.TestCase):
patches = {"patch1": "Hello {{ ID }}"}
with unittest.mock.patch.object(
plugins, "iter_enabled", return_value=[("plugin1", plugin1)]
plugins.Plugins, "iter_enabled", return_value=[("plugin1", plugin1)]
):
patches = list(plugins.iter_patches({}, "patch1"))
self.assertEqual([("plugin1", "Hello {{ ID }}")], patches)
@ -58,7 +77,7 @@ class PluginsTests(unittest.TestCase):
pass
with unittest.mock.patch.object(
plugins, "iter_enabled", return_value=[("plugin1", plugin1)]
plugins.Plugins, "iter_enabled", return_value=[("plugin1", plugin1)]
):
patches = list(plugins.iter_patches({}, "patch1"))
self.assertEqual([], patches)
@ -75,7 +94,7 @@ class PluginsTests(unittest.TestCase):
}
with unittest.mock.patch.object(
plugins, "iter_enabled", return_value=[("plugin1", plugin1)]
plugins.Plugins, "iter_enabled", return_value=[("plugin1", plugin1)]
):
tutor_config.load_plugins(config, defaults)
@ -97,7 +116,7 @@ class PluginsTests(unittest.TestCase):
config = {"set": {"ID": "newid"}}
with unittest.mock.patch.object(
plugins, "iter_enabled", return_value=[("plugin1", plugin1)]
plugins.Plugins, "iter_enabled", return_value=[("plugin1", plugin1)]
):
tutor_config.load_plugins(config, {})
@ -110,7 +129,7 @@ class PluginsTests(unittest.TestCase):
config = {"set": {"PARAM1": "{{ 128|random_string }}"}}
with unittest.mock.patch.object(
plugins, "iter_enabled", return_value=[("plugin1", plugin1)]
plugins.Plugins, "iter_enabled", return_value=[("plugin1", plugin1)]
):
tutor_config.load_plugins(config, {})
self.assertEqual(128, len(config["PARAM1"]))
@ -123,28 +142,29 @@ class PluginsTests(unittest.TestCase):
config = {"defaults": {"PARAM2": "{{ PARAM1 }}"}}
with unittest.mock.patch.object(
plugins, "iter_enabled", return_value=[("plugin1", plugin1)]
plugins.Plugins, "iter_enabled", return_value=[("plugin1", plugin1)]
):
tutor_config.load_plugins(config, defaults)
self.assertEqual("{{ PARAM1 }}", defaults["PLUGIN1_PARAM2"])
def test_scripts(self):
def test_hooks(self):
class plugin1:
scripts = {"init": ["myclient"]}
hooks = {"init": ["myclient"]}
with unittest.mock.patch.object(
plugins, "iter_enabled", return_value=[("plugin1", plugin1)]
plugins.Plugins, "iter_enabled", return_value=[("plugin1", plugin1)]
):
self.assertEqual(
[("plugin1", "myclient")], list(plugins.iter_scripts({}, "init"))
[("plugin1", "myclient")], list(plugins.iter_hooks({}, "init"))
)
def test_iter_templates(self):
class plugin1:
templates = "/tmp/templates"
with unittest.mock.patch.object(
plugins, "iter_enabled", return_value=[("plugin1", plugin1)]
plugins.Plugins, "iter_enabled", return_value=[("plugin1", plugin1)]
):
self.assertEqual(
[("plugin1", "/tmp/templates")], list(plugins.iter_templates({}))
)
)

View File

@ -20,13 +20,19 @@ def k8s():
def quickstart(root, non_interactive):
click.echo(fmt.title("Interactive platform configuration"))
config = interactive_config.update(root, interactive=(not non_interactive))
if config["ACTIVATE_HTTPS"] and not config["WEB_PROXY"]:
fmt.echo_alert(
"Potentially invalid configuration: ACTIVATE_HTTPS=true WEB_PROXY=false\n"
"You should either disable HTTPS support or configure your platform to use"
" a web proxy. See the Kubernetes section in the Tutor documentation for"
" more information."
)
click.echo(fmt.title("Updating the current environment"))
tutor_env.save(root, config)
click.echo(fmt.title("Starting the platform"))
start.callback(root)
click.echo(fmt.title("Database creation and migrations"))
init.callback(root)
# TODO https certificates
@click.command(help="Run all configured Open edX services")
@ -107,7 +113,9 @@ def init(root):
def createuser(root, superuser, staff, name, email):
config = tutor_config.load(root)
runner = K8sScriptRunner(root, config)
scripts.create_user(runner, superuser, staff, name, email)
runner.check_service_is_activated("lms")
command = scripts.create_user_command(superuser, staff, name, email)
kubectl_exec(config, "lms", command, attach=True)
@click.command(help="Import the demo course")
@ -129,24 +137,30 @@ def indexcourses(root):
scripts.index_courses(runner)
# @click.command(help="Launch a shell in LMS or CMS")
# @click.argument("service", type=click.Choice(["lms", "cms"]))
# def shell(service):
# K8s().execute(service, "bash")
@click.command(name="exec", help="Execute a command in a pod of the given application")
@opts.root
@click.argument("service")
@click.argument("command")
def exec_command(root, service, command):
config = tutor_config.load(root)
kubectl_exec(config, service, command, attach=True)
@click.command(help="View output from containers")
@opts.root
@click.option("-c", "--container", help="Print the logs of this specific container")
@click.option("-f", "--follow", is_flag=True, help="Follow log output")
@click.option("--tail", type=int, help="Number of lines to show from each container")
@click.argument("service")
def logs(root, follow, tail, service):
def logs(root, container, follow, tail, service):
config = tutor_config.load(root)
command = ["logs"]
selectors = ["app.kubernetes.io/name=" + service] if service else []
command += resource_selector(config, *selectors)
if container:
command += ["-c", container]
if follow:
command += ["--follow"]
if tail is not None:
@ -157,54 +171,45 @@ def logs(root, follow, tail, service):
class K8sScriptRunner(scripts.BaseRunner):
def exec(self, service, command):
selector = "app.kubernetes.io/name={}".format(service)
kubectl_exec(self.config, service, command, attach=False)
def kubectl_exec(config, service, command, attach=False):
selector = "app.kubernetes.io/name={}".format(service)
# Find pod in runner deployment
wait_for_pod_ready(config, service)
fmt.echo_info("Finding pod name for {} deployment...".format(service))
pod = utils.check_output(
"kubectl",
"get",
*resource_selector(config, selector),
"pods",
"-o=jsonpath={.items[0].metadata.name}",
)
# Run command
attach_opts = ["-i", "-t"] if attach else []
utils.kubectl(
"exec",
*attach_opts,
"--namespace",
config["K8S_NAMESPACE"],
pod.decode(),
"--",
"sh",
"-e",
"-c",
command,
)
# Find pod in runner deployment
wait_for_pod_ready(self.config, service)
fmt.echo_info("Finding pod name for {} deployment...".format(service))
pod = utils.check_output(
"kubectl",
"get",
*resource_selector(self.config, selector),
"pods",
"-o=jsonpath={.items[0].metadata.name}",
)
# Delete any previously run jobs (completed job objects still exist)
# utils.kubectl("delete", "-k", kustomization, "--wait", selector)
# Run job
utils.kubectl(
"exec",
"--namespace",
self.config["K8S_NAMESPACE"],
pod.decode(),
"--",
"sh",
"-e",
"-c",
command,
)
# # Wait until complete
# fmt.echo_info(
# "Waiting for job to complete. To view logs, run: \n\n kubectl logs -n {} -l app.kubernetes.io/name={} --follow\n".format(
# self.config["K8S_NAMESPACE"], job_name
# )
# )
# utils.kubectl(
# "wait",
# "--namespace",
# self.config["K8S_NAMESPACE"],
# "--for=condition=complete",
# "--timeout=-1s",
# selector,
# "job",
# )
def wait_for_pod_ready(config, service):
fmt.echo_info("Waiting for a {} pod to be ready...".format(service))
utils.kubectl(
"wait",
*resource_selector(config, "app.kubernetes.io/name={}".format(service)),
"--for=condition=Ready",
"--for=condition=ContainersReady",
"--timeout=600s",
"pod",
)
@ -218,5 +223,5 @@ k8s.add_command(init)
k8s.add_command(createuser)
k8s.add_command(importdemocourse)
k8s.add_command(indexcourses)
# k8s.add_command(shell)
k8s.add_command(exec_command)
k8s.add_command(logs)

View File

@ -176,7 +176,7 @@ def https_create(root):
fmt.echo_info("HTTPS is not activated: certificate generation skipped")
return
script = runner.render("scripts", "certbot", "create")
script = runner.render("hooks", "certbot", "create")
if config["WEB_PROXY"]:
fmt.echo_info(
@ -259,7 +259,9 @@ def logs(root, follow, tail, service):
def createuser(root, superuser, staff, name, email):
config = tutor_config.load(root)
runner = ScriptRunner(root, config)
scripts.create_user(runner, superuser, staff, name, email)
runner.check_service_is_activated("lms")
command = scripts.create_user_command(superuser, staff, name, email)
runner.exec("lms", command)
@click.command(help="Import the demo course")

View File

@ -1,60 +1,110 @@
import pkg_resources
from . import exceptions
from . import fmt
"""
Tutor plugins are regular python packages that have a 'tutor.plugin.v1' entrypoint. This
entrypoint must point to a module or a class that implements I don't know what (yet).
TODO
"""
# TODO switch to v1
ENTRYPOINT = "tutor.plugin.v0"
CONFIG_KEY = "PLUGINS"
class Patches:
class Plugins:
"""
Provide a patch cache on which we can conveniently iterate without having to parse again all plugin patches for every environment file.
Tutor plugins are regular python packages that have a 'tutor.plugin.v0' entrypoint.
The CACHE static attribute is a dict of the form:
The API for Tutor plugins is currently in development. The entrypoint will switch to
'tutor.plugin.v1' once it is stabilised.
{
"patchname": {
"pluginname": "patch content",
...
},
...
}
This entrypoint must point to a module or a class that implements one or more of the
following properties:
`patches` (dict str->str): entries in this dict will be used to patch the rendered
Tutor templates. For instance, to add "somecontent" to a template that includes '{{
patch("mypatch") }}', set: `patches["mypatch"] = "somecontent"`. It is recommended
to store all patches in separate files, and to dynamically list patches by listing
the contents of a "patches" subdirectory.
`templates` (str): path to a directory that includes new template files for the
plugin. It is recommended that all files in the template directory are stored in a
`myplugin` folder to avoid conflicts with other plugins. Plugin templates are useful
for content re-use, e.g.: "{% include 'myplugin/mytemplate.html' %}".
`hooks` (dict str->list[str]): hooks are commands that will be run at various points
during the lifetime of the platform. For instance, to run `service1` and `service2`
in sequence during initialization, you should define:
hooks["init"] = ["service1", "service2"]
It is then assumed that there are `myplugin/hooks/service1/init` and
`myplugin/hooks/service2/init` templates in the plugin `templates` directory.
"""
CACHE = {}
ENTRYPOINT = "tutor.plugin.v0"
INSTANCE = None
EXTRA_INSTALLED = {}
def __init__(self, config, name):
self.name = name
if not self.CACHE:
self.fill_cache(config)
def __init__(self, config):
self.config = config
self.patches = {}
self.hooks = {}
self.templates = {}
def __iter__(self):
"""
Yields:
plugin name (str)
patch content (str)
"""
plugin_patches = self.CACHE.get(self.name, {})
for plugin_name, plugin in self.iter_enabled():
patches = get_callable_attr(plugin, "patches", {})
for patch_name, content in patches.items():
if patch_name not in self.patches:
self.patches[patch_name] = {}
self.patches[patch_name][plugin_name] = content
hooks = get_callable_attr(plugin, "hooks", {})
for hook_name, services in hooks.items():
if hook_name not in self.hooks:
self.hooks[hook_name] = {}
self.hooks[hook_name][plugin_name] = services
templates = get_callable_attr(plugin, "templates")
if templates:
self.templates[plugin_name] = templates
@classmethod
def clear(cls):
cls.INSTANCE = None
cls.EXTRA_INSTALLED.clear()
@classmethod
def instance(cls, config):
if cls.INSTANCE is None or cls.INSTANCE.config != config:
cls.INSTANCE = cls(config)
return cls.INSTANCE
@classmethod
def iter_installed(cls):
yield from cls.EXTRA_INSTALLED.items()
for name, module in cls.iter_installed_entrypoints():
if name not in cls.EXTRA_INSTALLED:
yield name, module
@classmethod
def iter_installed_entrypoints(cls):
for entrypoint in pkg_resources.iter_entry_points(cls.ENTRYPOINT):
yield (entrypoint.name, entrypoint.load())
def iter_enabled(self):
for name, plugin in self.iter_installed():
if is_enabled(self.config, name):
yield name, plugin
def iter_patches(self, name):
plugin_patches = self.patches.get(name, {})
plugins = sorted(plugin_patches.keys())
for plugin in plugins:
yield plugin, plugin_patches[plugin]
@classmethod
def fill_cache(cls, config):
for plugin_name, plugin in iter_enabled(config):
patches = get_callable_attr(plugin, "patches", {})
for patch_name, content in patches.items():
if patch_name not in cls.CACHE:
cls.CACHE[patch_name] = {}
cls.CACHE[patch_name][plugin_name] = content
def iter_hooks(self, hook_name):
for plugin_name, services in self.hooks.get(hook_name, {}).items():
for service in services:
yield plugin_name, service
def iter_templates(self):
yield from self.templates.items()
def get_callable_attr(plugin, attr_name, default=None):
@ -70,8 +120,7 @@ def is_installed(name):
def iter_installed():
for entrypoint in pkg_resources.iter_entry_points(ENTRYPOINT):
yield entrypoint.name, entrypoint.load()
yield from Plugins.iter_installed()
def enable(config, name):
@ -91,9 +140,7 @@ def disable(config, name):
def iter_enabled(config):
for name, plugin in iter_installed():
if is_enabled(config, name):
yield name, plugin
yield from Plugins.instance(config).iter_enabled()
def is_enabled(config, name):
@ -101,30 +148,12 @@ def is_enabled(config, name):
def iter_patches(config, name):
for plugin, patch in Patches(config, name):
yield plugin, patch
yield from Plugins.instance(config).iter_patches(name)
def iter_scripts(config, script_name):
"""
Scripts are of the form:
def iter_hooks(config, hook_name):
yield from Plugins.instance(config).iter_hooks(hook_name)
scripts = {
"script-name": [
"service-name1",
"service-name2",
...
],
...
}
"""
for plugin_name, plugin in iter_enabled(config):
scripts = get_callable_attr(plugin, "scripts", {})
for service in scripts.get(script_name, []):
yield plugin_name, service
def iter_templates(config):
for plugin_name, plugin in iter_enabled(config):
templates = get_callable_attr(plugin, "templates")
if templates:
yield plugin_name, templates
yield from Plugins.instance(config).iter_templates()

View File

@ -30,27 +30,26 @@ class BaseRunner:
def is_activated(self, service):
return self.config["ACTIVATE_" + service.upper()]
def iter_plugin_scripts(self, script):
yield from plugins.iter_scripts(self.config, script)
def iter_plugin_hooks(self, hook):
yield from plugins.iter_hooks(self.config, hook)
def initialise(runner):
fmt.echo_info("Initialising all services...")
runner.run("mysql-client", "scripts", "mysql-client", "init")
runner.run("mysql-client", "hooks", "mysql-client", "init")
for service in ["lms", "cms", "forum", "notes", "xqueue"]:
if runner.is_activated(service):
fmt.echo_info("Initialising {}...".format(service))
runner.run(service, "scripts", service, "init")
for plugin_name, service in runner.iter_plugin_scripts("init"):
runner.run(service, "hooks", service, "init")
for plugin_name, service in runner.iter_plugin_hooks("init"):
fmt.echo_info(
"Plugin {}: running init for service {}...".format(plugin_name, service)
)
runner.run(service, plugin_name, "scripts", service, "init")
runner.run(service, plugin_name, "hooks", service, "init")
fmt.echo_info("All services initialised.")
def create_user(runner, superuser, staff, username, email):
runner.check_service_is_activated("lms")
def create_user_command(superuser, staff, username, email):
opts = ""
if superuser:
opts += " --superuser"
@ -60,14 +59,14 @@ def create_user(runner, superuser, staff, username, email):
"./manage.py lms --settings=tutor.production manage_user {opts} {username} {email}\n"
"./manage.py lms --settings=tutor.production changepassword {username}"
).format(opts=opts, username=username, email=email)
runner.exec("lms", command)
return command
def import_demo_course(runner):
runner.check_service_is_activated("cms")
runner.run("cms", "importdemocourse")
runner.run("cms", "hooks", "cms", "importdemocourse")
def index_courses(runner):
runner.check_service_is_activated("cms")
runner.run("cms", "indexcourses")
runner.run("cms", "hooks", "cms", "indexcourses")

View File

@ -29,8 +29,51 @@ spec:
name: settings-cms
- mountPath: /openedx/config
name: config
- mountPath: /openedx/data
name: data
resources:
requests:
memory: 2Gi
volumes:
- name: settings-lms
configMap:
name: openedx-settings-lms
- name: settings-cms
configMap:
name: openedx-settings-cms
- name: config
configMap:
name: openedx-config
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cms-worker
labels:
app.kubernetes.io/name: cms-worker
spec:
selector:
matchLabels:
app.kubernetes.io/name: cms-worker
template:
metadata:
labels:
app.kubernetes.io/name: cms-worker
spec:
containers:
- name: cms-worker
image: {{ DOCKER_REGISTRY }}{{ DOCKER_IMAGE_OPENEDX }}
args: ["./manage.py", "cms", "celery", "worker", "--loglevel=info", "--hostname=edx.cms.core.default.%%h", "--maxtasksperchild", "100"]
env:
- name: SERVICE_VARIANT
value: cms
- name: C_FORCE_ROOT
value: "1"
volumeMounts:
- mountPath: /openedx/edx-platform/lms/envs/tutor/
name: settings-lms
- mountPath: /openedx/edx-platform/cms/envs/tutor/
name: settings-cms
- mountPath: /openedx/config
name: config
volumes:
- name: settings-lms
configMap:
@ -41,9 +84,6 @@ spec:
- name: config
configMap:
name: openedx-config
- name: data
persistentVolumeClaim:
claimName: cms-data
{% if ACTIVATE_FORUM %}
---
apiVersion: apps/v1
@ -104,8 +144,51 @@ spec:
name: settings-cms
- mountPath: /openedx/config
name: config
- mountPath: /openedx/data
name: data
resources:
requests:
memory: 2Gi
volumes:
- name: settings-lms
configMap:
name: openedx-settings-lms
- name: settings-cms
configMap:
name: openedx-settings-cms
- name: config
configMap:
name: openedx-config
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: lms-worker
labels:
app.kubernetes.io/name: lms-worker
spec:
selector:
matchLabels:
app.kubernetes.io/name: lms-worker
template:
metadata:
labels:
app.kubernetes.io/name: lms-worker
spec:
containers:
- name: lms-worker
image: {{ DOCKER_REGISTRY }}{{ DOCKER_IMAGE_OPENEDX }}
args: ["./manage.py", "lms", "celery", "worker", "--loglevel=info", "--hostname=edx.lms.core.default.%%h", "--maxtasksperchild", "100"]
env:
- name: SERVICE_VARIANT
value: lms
- name: C_FORCE_ROOT
value: "1"
volumeMounts:
- mountPath: /openedx/edx-platform/lms/envs/tutor/
name: settings-lms
- mountPath: /openedx/edx-platform/cms/envs/tutor/
name: settings-cms
- mountPath: /openedx/config
name: config
volumes:
- name: settings-lms
configMap:
@ -116,9 +199,6 @@ spec:
- name: config
configMap:
name: openedx-config
- name: data
persistentVolumeClaim:
claimName: lms-data
{% if ACTIVATE_ELASTICSEARCH %}
---
apiVersion: apps/v1
@ -131,6 +211,8 @@ spec:
selector:
matchLabels:
app.kubernetes.io/name: elasticsearch
strategy:
type: Recreate
template:
metadata:
labels:
@ -191,6 +273,8 @@ spec:
selector:
matchLabels:
app.kubernetes.io/name: mongodb
strategy:
type: Recreate
template:
metadata:
labels:
@ -205,10 +289,11 @@ spec:
volumeMounts:
- mountPath: /data/db
name: data
volumes:
- name: data
# TODO this should be a pvc, otherwise the volume data will be lost when the pod is deleted
emptyDir: {}
persistentVolumeClaim:
claimName: mongodb
{% endif %}
{% if ACTIVATE_MYSQL %}
---
@ -222,6 +307,8 @@ spec:
selector:
matchLabels:
app.kubernetes.io/name: mysql
strategy:
type: Recreate
template:
metadata:
labels:
@ -296,12 +383,8 @@ spec:
volumeMounts:
- mountPath: /openedx/edx-notes-api/notesserver/settings/tutor.py
name: settings
- mountPath: /openedx/data
name: data
subPath: tutor.py
volumes:
- name: data
persistentVolumeClaim:
claimName: notes-data
- name: settings
configMap:
name: notes-settings
@ -383,13 +466,9 @@ spec:
- name: openedx-staticfiles
emptyDir: {}
- name: data-cms
persistentVolumeClaim:
claimName: cms-data
readOnly: true
emptyDir: {}
- name: data-lms
persistentVolumeClaim:
claimName: lms-data
readOnly: true
emptyDir: {}
{% if ACTIVATE_RABBITMQ %}
---
apiVersion: apps/v1
@ -402,6 +481,8 @@ spec:
selector:
matchLabels:
app.kubernetes.io/name: rabbitmq
strategy:
type: Recreate
template:
metadata:
labels:
@ -420,5 +501,49 @@ spec:
persistentVolumeClaim:
claimName: rabbitmq
{% endif %}
{% if ACTIVATE_XQUEUE %}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: xqueue
labels:
app.kubernetes.io/name: xqueue
spec:
selector:
matchLabels:
app.kubernetes.io/name: xqueue
template:
metadata:
labels:
app.kubernetes.io/name: xqueue
spec:
containers:
- name: xqueue
image: {{ DOCKER_REGISTRY }}{{ DOCKER_IMAGE_XQUEUE }}
ports:
- containerPort: 8040
env:
- name: DJANGO_SETTINGS_MODULE
value: xqueue.tutor
volumeMounts:
- mountPath: /openedx/xqueue/xqueue/tutor.py
name: settings
subPath: tutor.py
- name: xqueue-consumer
image: {{ DOCKER_REGISTRY }}{{ DOCKER_IMAGE_XQUEUE }}
command: ["sh", "-e", "-c"]
args: ["while true; do echo 'running consumers'; ./manage.py run_consumer; sleep 10; done"]
env:
- name: DJANGO_SETTINGS_MODULE
value: xqueue.tutor
volumeMounts:
- mountPath: /openedx/xqueue/xqueue/tutor.py
name: settings
subPath: tutor.py
volumes:
- name: settings
configMap:
name: xqueue-settings
{% endif %}
{{ patch("k8s-deployments") }}

View File

@ -1,18 +1,87 @@
---
---{% set hosts = [LMS_HOST, "preview." + LMS_HOST, CMS_HOST] %}{% if ACTIVATE_NOTES %}{% set hosts = hosts + [NOTES_HOST] %}{% endif %}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: web
labels:
app.kubernetes.io/name: web
annotations:
nginx.ingress.kubernetes.io/proxy-body-size: 1000m
{% if ACTIVATE_HTTPS%}certmanager.k8s.io/issuer: letsencrypt
certmanager.k8s.io/acme-challenge-type: http01{% endif %}
spec:
rules:
{% set hosts = [LMS_HOST, "preview." + LMS_HOST, CMS_HOST] %}{% if ACTIVATE_NOTES %}{% set hosts = hosts + [NOTES_HOST] %}{% endif %}{% for host in hosts %}
{% for host in hosts %}
- host: {{ host }}
http:
paths:
- backend:
serviceName: nginx
servicePort: 80
- backend:
serviceName: nginx
servicePort: 443
{% endfor %}
servicePort: {% if ACTIVATE_HTTPS %}443{% else %}80{% endif %}{% endfor %}
{{ patch("k8s-ingress-rules")|indent(2) }}
{% if ACTIVATE_HTTPS %}
tls:
- hosts:
{% for host in hosts %}
- {{ host }}
{{ patch("k8s-ingress-tls-hosts")|indent(6) }}
{% endfor %}
secretName: letsencrypt
{%endif%}
{% if ACTIVATE_HTTPS %}
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
name: letsencrypt
labels:
app.kubernetes.io/name: letsencrypt
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: {{ CONTACT_EMAIL }}
privateKeySecretRef:
name: letsencrypt-privatekey
http01: {}
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: {{ LMS_HOST|replace(".", "-") }}
spec:
secretName: {{ LMS_HOST }}-tls
issuerRef:
name: letsencrypt
commonName: {{ LMS_HOST }}
dnsNames:
- {{ LMS_HOST }}
- {{ CMS_HOST }}
acme:
config:
- http01:
ingress: web
domains:
- {{ LMS_HOST }}
- {{ CMS_HOST }}
{% if ACTIVATE_NOTES %}
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: {{ NOTES_HOST|replace(".", "-") }}
spec:
secretName: {{ NOTES_HOST }}-tls
issuerRef:
name: letsencrypt
commonName: {{ NOTES_HOST }}
dnsNames:
- {{ NOTES_HOST }}
acme:
config:
- http01:
ingress: web
domains:
- {{ NOTES_HOST }}
{% endif %}
{{ patch("k8s-ingress-certificates") }}
{% endif %}

View File

@ -148,3 +148,18 @@ spec:
selector:
app.kubernetes.io/name: smtp
{% endif %}
{% if ACTIVATE_XQUEUE %}
---
apiVersion: v1
kind: Service
metadata:
name: xqueue
spec:
type: NodePort
ports:
- port: 8040
protocol: TCP
selector:
app.kubernetes.io/name: xqueue
{% endif %}
{{ patch("k8s-services") }}

View File

@ -1,31 +1,3 @@
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: cms-data
labels:
app.kubernetes.io/component: volume
app.kubernetes.io/name: cms-data
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: lms-data
labels:
app.kubernetes.io/component: volume
app.kubernetes.io/name: lms-data
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 2Gi
{% if ACTIVATE_ELASTICSEARCH %}
---
apiVersion: v1
@ -42,6 +14,22 @@ spec:
requests:
storage: 2Gi
{% endif %}
{% if ACTIVATE_MONGODB %}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mongodb
labels:
app.kubernetes.io/component: volume
app.kubernetes.io/name: mongodb
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
{% endif %}
{% if ACTIVATE_MYSQL %}
---
apiVersion: v1
@ -58,22 +46,6 @@ spec:
requests:
storage: 5Gi
{% endif %}
{% if ACTIVATE_NOTES %}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: notes-data
labels:
app.kubernetes.io/component: volume
app.kubernetes.io/name: notes-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
{% endif %}
{% if ACTIVATE_RABBITMQ %}
---
apiVersion: v1
@ -89,4 +61,5 @@ spec:
resources:
requests:
storage: 1Gi
{% endif %}
{% endif %}
{{ patch("k8s-volumes") }}

View File

@ -37,3 +37,6 @@ configMapGenerator:
{% if ACTIVATE_NOTES %}- name: notes-settings
files:
- apps/notes/settings/tutor.py{% endif %}
{% if ACTIVATE_XQUEUE %}- name: xqueue-settings
files:
- apps/xqueue/settings/tutor.py{% endif %}

View File

@ -229,7 +229,8 @@ services:
environment:
DJANGO_SETTINGS_MODULE: xqueue.tutor
restart: unless-stopped
command: ./manage.py run_consumer
entrypoint: ["sh", "-e", "-c"]
command: ["while true; do echo 'running consumers'; ./manage.py run_consumer; sleep 10; done"]
{% if ACTIVATE_MYSQL %}depends_on:
- mysql{% endif %}
{% endif %}