"""
Improve job running in local and k8s

Running jobs was previously done with "exec". This was because it allowed us
to avoid copying too much container specification information from the
docker-compose/deployments files to the jobs files. However, this was
limiting:

- In order to run a job, the corresponding container had to be running. This
  was particularly painful in Kubernetes, where containers keep crashing until
  migrations have been run correctly.
- Containers in which we need to run jobs had to be present in the
  docker-compose/deployments files. This is unnecessary, for example when
  mysql is disabled, or in the case of the certbot container.

Now, we create dedicated jobs files, both for local and k8s deployment. This
introduces a little redundancy, but not too much. Note that dependent
containers are not listed in the docker-compose.jobs.yml file, so an actual
platform is still supposed to be running when we launch the jobs.

This also introduces a subtle change: jobs now go through the container
entrypoint prior to running. This is probably a good thing, as it helps catch
incorrect environment variables early.

In k8s, we found ourselves interacting way too much with the kubectl utility.
Parsing output from the CLI is a pain, so we switch to the native kubernetes
client library.
"""
from datetime import datetime
from time import sleep

import click

from .. import config as tutor_config
from .. import env as tutor_env
from .. import exceptions
from .. import fmt
from .. import interactive as interactive_config
from .. import scripts
from .. import serialize
from .. import utils


@click.group(help="Run Open edX on Kubernetes")
def k8s():
    pass


@click.command(help="Configure and run Open edX from scratch")
@click.option("-I", "--non-interactive", is_flag=True, help="Run non-interactively")
@click.pass_obj
def quickstart(context, non_interactive):
    click.echo(fmt.title("Interactive platform configuration"))
    config = interactive_config.update(context.root, interactive=(not non_interactive))
    if config["ACTIVATE_HTTPS"] and not config["WEB_PROXY"]:
        fmt.echo_alert(
            "Potentially invalid configuration: ACTIVATE_HTTPS=true WEB_PROXY=false\n"
            "You should either disable HTTPS support or configure your platform to use"
            " a web proxy. See the Kubernetes section in the Tutor documentation for"
            " more information."
        )
    click.echo(fmt.title("Updating the current environment"))
    tutor_env.save(context.root, config)
    click.echo(fmt.title("Starting the platform"))
    start.callback()
    click.echo(fmt.title("Database creation and migrations"))
    init.callback(limit=None)


@click.command(help="Run all configured Open edX services")
@click.pass_obj
def start(context):
    # Create namespace
    utils.kubectl(
        "apply",
        "--kustomize",
        tutor_env.pathjoin(context.root),
        "--wait",
        "--selector",
        "app.kubernetes.io/component=namespace",
    )
    # Create volumes
    utils.kubectl(
        "apply",
        "--kustomize",
        tutor_env.pathjoin(context.root),
        "--wait",
        "--selector",
        "app.kubernetes.io/component=volume",
    )
    # Create everything else except jobs, ingress and issuer
    utils.kubectl(
        "apply",
        "--kustomize",
        tutor_env.pathjoin(context.root),
        "--selector",
        "app.kubernetes.io/component notin (job, ingress, issuer)",
    )


@click.command(help="Stop a running platform")
@click.pass_obj
def stop(context):
    config = tutor_config.load(context.root)
    utils.kubectl(
        "delete",
        *resource_selector(config),
        "deployments,services,ingress,configmaps,jobs",
    )


@click.command(help="Reboot an existing platform")
def reboot():
    stop.callback()
    start.callback()


def resource_selector(config, *selectors):
    """
    Convenient utility for filtering only the resources that belong to this project.
    """
    selector = ",".join(
        ["app.kubernetes.io/instance=openedx-" + config["ID"]] + list(selectors)
    )
    return ["--namespace", config["K8S_NAMESPACE"], "--selector=" + selector]


@click.command(help="Completely delete an existing platform")
@click.option("-y", "--yes", is_flag=True, help="Do not ask for confirmation")
@click.pass_obj
def delete(context, yes):
    if not yes:
        click.confirm(
            "Are you sure you want to delete the platform? All data will be removed.",
            abort=True,
        )
    utils.kubectl(
        "delete",
        "-k",
        tutor_env.pathjoin(context.root),
        "--ignore-not-found=true",
        "--wait",
    )


@click.command(help="Initialise all applications")
@click.option("-l", "--limit", help="Limit initialisation to this service or plugin")
@click.pass_obj
def init(context, limit):
    config = tutor_config.load(context.root)
    runner = K8sScriptRunner(context.root, config)
    for service in ["mysql", "elasticsearch", "mongodb"]:
        if tutor_config.is_service_activated(config, service):
            wait_for_pod_ready(config, service)
    scripts.initialise(runner, limit_to=limit)


@click.command(help="Create an Open edX user and interactively set their password")
@click.option("--superuser", is_flag=True, help="Make superuser")
@click.option("--staff", is_flag=True, help="Make staff user")
@click.option(
    "-p",
    "--password",
    help="Specify password from the command line. If undefined, you will be prompted to input a password",
)
@click.argument("name")
@click.argument("email")
@click.pass_obj
def createuser(context, superuser, staff, password, name, email):
    config = tutor_config.load(context.root)
    command = scripts.create_user_command(
        superuser, staff, name, email, password=password
    )
    # This needs to be interactive in case the user needs to type a password
    kubectl_exec(config, "lms", command, attach=True)


@click.command(help="Import the demo course")
@click.pass_obj
def importdemocourse(context):
    fmt.echo_info("Importing demo course")
    config = tutor_config.load(context.root)
    runner = K8sScriptRunner(context.root, config)
    scripts.import_demo_course(runner)


@click.command(
    help="Set a theme for a given domain name. To reset to the default theme, use 'default' as the theme name."
)
@click.argument("theme_name")
@click.argument("domain_names", metavar="domain_name", nargs=-1)
@click.pass_obj
def settheme(context, theme_name, domain_names):
    config = tutor_config.load(context.root)
    runner = K8sScriptRunner(context.root, config)
    for domain_name in domain_names:
        scripts.set_theme(theme_name, domain_name, runner)


@click.command(name="exec", help="Execute a command in a pod of the given application")
@click.argument("service")
@click.argument("command")
@click.pass_obj
def exec_command(context, service, command):
    config = tutor_config.load(context.root)
    kubectl_exec(config, service, command, attach=True)


@click.command(help="View output from containers")
@click.option("-c", "--container", help="Print the logs of this specific container")
@click.option("-f", "--follow", is_flag=True, help="Follow log output")
@click.option("--tail", type=int, help="Number of lines to show from each container")
@click.argument("service")
@click.pass_obj
def logs(context, container, follow, tail, service):
    config = tutor_config.load(context.root)

    command = ["logs"]
    selectors = ["app.kubernetes.io/name=" + service] if service else []
    command += resource_selector(config, *selectors)

    if container:
        command += ["-c", container]
    if follow:
        command += ["--follow"]
    if tail is not None:
        command += ["--tail", str(tail)]

    utils.kubectl(*command)


class K8sClients:
    _instance = None

    def __init__(self):
        # Loading the kubernetes module here to avoid import overhead
        from kubernetes import client, config  # pylint: disable=import-outside-toplevel

        config.load_kube_config()
        self._batch_api = None
        self._core_api = None
        self._client = client

    @classmethod
    def instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    @property
    def batch_api(self):
        if self._batch_api is None:
            self._batch_api = self._client.BatchV1Api()
        return self._batch_api

    @property
    def core_api(self):
        if self._core_api is None:
            self._core_api = self._client.CoreV1Api()
        return self._core_api


class K8sScriptRunner(scripts.BaseRunner):
|
Improve job running in local and k8s
Running jobs was previously done with "exec". This was because it
allowed us to avoid copying too much container specification information
from the docker-compose/deployments files to the jobs files. However,
this was limiting:
- In order to run a job, the corresponding container had to be running.
This was particularly painful in Kubernetes, where containers are
crashing as long as migrations are not correctly run.
- Containers in which we need to run jobs needed to be present in the
docker-compose/deployments files. This is unnecessary, for example when
mysql is disabled, or in the case of the certbot container.
Now, we create dedicated jobs files, both for local and k8s deployment.
This introduces a little redundancy, but not too much. Note that
dependent containers are not listed in the docker-compose.jobs.yml file,
so an actual platform is still supposed to be running when we launch the
jobs.
This also introduces a subtle change: now, jobs go through the container
entrypoint prior to running. This is probably a good thing, as it will
avoid forgetting about incorrect environment variables.
In k8s, we find ourselves interacting way too much with the kubectl
utility. Parsing output from the CLI is a pain. So we need to switch to
the native kubernetes client library.
2020-03-25 17:47:36 +00:00
|
|
|
def load_job(self, name):
|
|
|
|
jobs = self.render("k8s", "jobs.yml")
|
|
|
|
for job in serialize.load_all(jobs):
|
|
|
|
if job["metadata"]["name"] == name:
|
|
|
|
return job
|
|
|
|
raise ValueError("Could not find job '{}'".format(name))
|
|
|
|
|
|
|
|
def active_job_names(self):
|
|
|
|
"""
|
|
|
|
Return a list of active job names
|
|
|
|
Docs:
|
|
|
|
https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#list-job-v1-batch
|
|
|
|
"""
|
|
|
|
api = K8sClients.instance().batch_api
|
|
|
|
return [
|
|
|
|
job.metadata.name
|
|
|
|
for job in api.list_namespaced_job(self.config["K8S_NAMESPACE"]).items
|
|
|
|
if job.status.active
|
|
|
|
]
|
|
|
|
|
|
|
|
def run_job(self, service, command):
|
|
|
|
job_name = "{}-job".format(service)
|
|
|
|
try:
|
|
|
|
job = self.load_job(job_name)
|
|
|
|
except ValueError:
|
|
|
|
message = (
|
|
|
|
"The '{job_name}' kubernetes job does not exist in the list of job "
|
|
|
|
"runners. This might be caused by an older plugin. Tutor switched to a"
|
|
|
|
" job runner model for running one-time commands, such as database"
|
|
|
|
" initialisation. For the record, this is the command that we are "
|
|
|
|
"running:\n"
|
|
|
|
"\n"
|
|
|
|
" {command}\n"
|
|
|
|
"\n"
|
|
|
|
"Old-style job running will be deprecated soon. Please inform "
|
|
|
|
"your plugin maintainer!"
|
2020-04-25 20:56:46 +00:00
|
|
|
).format(job_name=job_name, command=command.replace("\n", "\n "),)
|
Improve job running in local and k8s
Running jobs was previously done with "exec". This was because it
allowed us to avoid copying too much container specification information
from the docker-compose/deployments files to the jobs files. However,
this was limiting:
- In order to run a job, the corresponding container had to be running.
This was particularly painful in Kubernetes, where containers are
crashing as long as migrations are not correctly run.
- Containers in which we need to run jobs needed to be present in the
docker-compose/deployments files. This is unnecessary, for example when
mysql is disabled, or in the case of the certbot container.
Now, we create dedicated jobs files, both for local and k8s deployment.
This introduces a little redundancy, but not too much. Note that
dependent containers are not listed in the docker-compose.jobs.yml file,
so an actual platform is still supposed to be running when we launch the
jobs.
This also introduces a subtle change: now, jobs go through the container
entrypoint prior to running. This is probably a good thing, as it will
avoid forgetting about incorrect environment variables.
In k8s, we find ourselves interacting way too much with the kubectl
utility. Parsing output from the CLI is a pain. So we need to switch to
the native kubernetes client library.
2020-03-25 17:47:36 +00:00
|
|
|
fmt.echo_alert(message)
|
|
|
|
wait_for_pod_ready(self.config, service)
|
|
|
|
kubectl_exec(self.config, service, command)
|
|
|
|
return
|
|
|
|
# Create a unique job name to make it deduplicate jobs and make it easier to
|
|
|
|
# find later. Logs of older jobs will remain available for some time.
|
|
|
|
job_name += "-" + datetime.now().strftime("%Y%m%d%H%M%S")
|
|
|
|
|
|
|
|
# Wait until all other jobs are completed
|
|
|
|
while True:
|
|
|
|
active_jobs = self.active_job_names()
|
|
|
|
if not active_jobs:
|
|
|
|
break
|
|
|
|
fmt.echo_info(
|
|
|
|
"Waiting for active jobs to terminate: {}".format(" ".join(active_jobs))
|
|
|
|
)
|
|
|
|
sleep(5)
|
|
|
|
|
|
|
|
# Configure job
|
|
|
|
job["metadata"]["name"] = job_name
|
|
|
|
job["metadata"].setdefault("labels", {})
|
|
|
|
job["metadata"]["labels"]["app.kubernetes.io/name"] = job_name
|
|
|
|
job["spec"]["template"]["spec"]["containers"][0]["args"] = [
|
|
|
|
"sh",
|
|
|
|
"-e",
|
|
|
|
"-c",
|
|
|
|
command,
|
|
|
|
]
|
|
|
|
job["spec"]["backoffLimit"] = 1
|
|
|
|
job["spec"]["ttlSecondsAfterFinished"] = 3600
|
|
|
|
# Save patched job to "jobs.yml" file
|
|
|
|
with open(tutor_env.pathjoin(self.root, "k8s", "jobs.yml"), "w") as job_file:
|
|
|
|
serialize.dump(job, job_file)
|
|
|
|
# We cannot use the k8s API to create the job: configMap and volume names need
|
|
|
|
# to be found with the right suffixes.
|
|
|
|
utils.kubectl(
|
|
|
|
"apply",
|
|
|
|
"--kustomize",
|
|
|
|
tutor_env.pathjoin(self.root),
|
|
|
|
"--selector",
|
|
|
|
"app.kubernetes.io/name={}".format(job_name),
|
|
|
|
)
|
|
|
|
|
|
|
|
message = (
|
|
|
|
"Job {job_name} is running. To view the logs from this job, run:\n\n"
|
|
|
|
""" kubectl logs --namespace={namespace} --follow $(kubectl get --namespace={namespace} pods """
|
|
|
|
"""--selector=job-name={job_name} -o=jsonpath="{{.items[0].metadata.name}}")\n\n"""
|
|
|
|
"Waiting for job completion..."
|
|
|
|
).format(job_name=job_name, namespace=self.config["K8S_NAMESPACE"])
|
|
|
|
fmt.echo_info(message)
|
|
|
|
|
|
|
|
# Wait for completion
|
|
|
|
field_selector = "metadata.name={}".format(job_name)
|
|
|
|
while True:
|
|
|
|
jobs = K8sClients.instance().batch_api.list_namespaced_job(
|
|
|
|
self.config["K8S_NAMESPACE"], field_selector=field_selector
|
|
|
|
)
|
|
|
|
if not jobs.items:
|
|
|
|
continue
|
|
|
|
job = jobs.items[0]
|
|
|
|
if not job.status.active:
|
|
|
|
if job.status.succeeded:
|
|
|
|
fmt.echo_info("Job {} successful.".format(job_name))
|
|
|
|
break
|
|
|
|
if job.status.failed:
|
|
|
|
raise exceptions.TutorError(
|
|
|
|
"Job {} failed. View the job logs to debug this issue.".format(
|
|
|
|
job_name
|
|
|
|
)
|
|
|
|
)
|
|
|
|
sleep(5)
|
2019-06-06 19:58:21 +00:00
|
|
|
|
|
|
|
|
|
|
|
def kubectl_exec(config, service, command, attach=False):
    selector = "app.kubernetes.io/name={}".format(service)
    pods = K8sClients.instance().core_api.list_namespaced_pod(
        namespace=config["K8S_NAMESPACE"], label_selector=selector
    )
    if not pods.items:
        raise exceptions.TutorError(
            "Could not find an active pod for the {} service".format(service)
        )
    pod_name = pods.items[0].metadata.name

    # Run command
    attach_opts = ["-i", "-t"] if attach else []
    utils.kubectl(
        "exec",
        *attach_opts,
        "--namespace",
        config["K8S_NAMESPACE"],
        pod_name,
        "--",
        "sh",
        "-e",
        "-c",
        command,
    )


def wait_for_pod_ready(config, service):
    fmt.echo_info("Waiting for a {} pod to be ready...".format(service))
    utils.kubectl(
        "wait",
        *resource_selector(config, "app.kubernetes.io/name={}".format(service)),
        "--for=condition=ContainersReady",
        "--timeout=600s",
        "pod",
    )


k8s.add_command(quickstart)
k8s.add_command(start)
k8s.add_command(stop)
k8s.add_command(reboot)
k8s.add_command(delete)
k8s.add_command(init)
k8s.add_command(createuser)
k8s.add_command(importdemocourse)
k8s.add_command(settheme)
k8s.add_command(exec_command)
k8s.add_command(logs)