Merge pull request #175 from frappe/develop

feat: cloud backup and restore

Commit 78c25ef277
.github/workflows/stale.yml (vendored, new file, 19 lines)

@@ -0,0 +1,19 @@
name: Mark stale issues and pull requests

on:
  schedule:
  - cron: "0 0 * * *"

jobs:
  stale:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/stale@v1
      with:
        repo-token: ${{ secrets.GITHUB_TOKEN }}
        stale-issue-message: 'Stale issue message'
        stale-pr-message: 'Stale pull request message'
        stale-issue-label: 'no-issue-activity'
        stale-pr-label: 'no-pr-activity'
CONTRIBUTING.md (new file, 25 lines)

@@ -0,0 +1,25 @@
# Contribution Guidelines

## Branches

* *master*: images on the master branch are built monthly.
* *develop*: images on this branch are built daily.

## Pull Requests

Please **send all pull requests exclusively to the *develop*** branch.
When a PR is merged, the merge automatically triggers the image build.

Please test every PR as extensively as you can, considering that the software can be run in different modes:
* with docker-compose for production
* with or without the Nginx proxy
* with VS Code for testing environments

Every once in a while (or before a monthly release), develop will be merged into master.

## Reducing the number of branches and builds :evergreen_tree: :evergreen_tree: :evergreen_tree:
Please be considerate when pushing commits and opening PRs for multiple branches, as the process of building images (triggered on push and PR branch push) uses energy and contributes to global warming.

## Documentation

Place a README.md in each relevant directory, explaining what the software in that particular directory does.
README.md (68 lines changed)

@@ -224,6 +224,36 @@ docker exec -it \

The backup will be available in the `sites` mounted volume.

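For reference, the files produced by the `backup` command land in each site's `private/backups` directory inside that volume, named `DATE_TIME-site_slug-{filetype}.{extension}`; this is also where the `push_backup.py` script added in this PR globs for them. A minimal sketch of listing them from the host, assuming the default `./installation/sites` mount and an example site name:

```python
import os
from glob import glob

# Backups created by the `backup` command live under <sites>/<site>/private/backups/.
sites_dir = "installation/sites"   # host path mounted at /home/frappe/frappe-bench/sites
site = "site.name.com"             # example site name
site_slug = site.replace(".", "_")

backup_dir = os.path.join(sites_dir, site, "private", "backups")
for ext in ("-database.sql.gz", "-files.tar", "-private-files.tar"):
    for path in sorted(glob(os.path.join(backup_dir, "*-" + site_slug + ext))):
        print(path)
```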

#### Push backup to S3 compatible storage

Environment Variables

- `BUCKET_NAME`, required. Name of the bucket created on the S3 compatible storage.
- `ACCESS_KEY_ID`, required. Access key for the storage.
- `SECRET_ACCESS_KEY`, required. Secret access key for the storage.
- `ENDPOINT_URL`, required. URL of the S3 compatible storage.
- `BUCKET_DIR`, required. Directory in the bucket where sites from this deployment will be backed up.
- `BACKUP_LIMIT`, optional. Limits the number of backups kept in the bucket directory. Defaults to 3.

```sh
docker run \
    -e "BUCKET_NAME=backups" \
    -e "ACCESS_KEY_ID=access_id_from_provider" \
    -e "SECRET_ACCESS_KEY=secret_access_from_provider" \
    -e "ENDPOINT_URL=https://region.storage-provider.com" \
    -e "BUCKET_DIR=frappe-bench-v12" \
    -v ./installation/sites:/home/frappe/frappe-bench/sites \
    --network <project-name>_default \
    frappe/frappe-worker:v12 push-backup
```

Note:

- The above example backs up files to the bucket called `backups` at the location `frappe-bench-v12/site.name.com/DATE_TIME/DATE_TIME-site_name_com-{filetype}.{extension}`:
  - example DATE_TIME: 20200325_042020
  - example filetype: database, files or private-files
  - example extension: sql.gz or tar
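
For orientation, the `DATE_TIME` folder in that key comes straight from the backup file name: `push_backup.py` (added in this PR) takes the first 15 characters of the file name, which are its timestamp prefix, to build the upload folder `BUCKET_DIR/<site>/<DATE_TIME>/`. A minimal sketch of that key construction, using made-up example values:

```python
import os

# Sketch of how push_backup.py derives the destination key for an upload.
# The first 15 characters of a backup file name are its DATE_TIME prefix,
# e.g. "20200325_042020" in "20200325_042020-site_name_com-database.sql.gz".
bucket_dir = "frappe-bench-v12"   # BUCKET_DIR (example value)
site = "site.name.com"            # example site name
db_file = "20200325_042020-site_name_com-database.sql.gz"

folder = bucket_dir + '/' + site + '/' + os.path.basename(db_file)[:15] + '/'
dest_key = os.path.join(folder, os.path.basename(db_file))
print(dest_key)
# frappe-bench-v12/site.name.com/20200325_042020/20200325_042020-site_name_com-database.sql.gz
```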

#### Updating and Migrating Sites

Switch to the root of the `frappe_docker` directory before running the following commands:

@@ -251,6 +281,44 @@ docker exec -it \
    <project-name>_erpnext-python_1 docker-entrypoint.sh migrate
```

#### Restore backups

Environment Variables

- `MYSQL_ROOT_PASSWORD`, required to restore MariaDB backups.
- `BUCKET_NAME`, required. Name of the bucket created on the S3 compatible storage.
- `ACCESS_KEY_ID`, required. Access key for the storage.
- `SECRET_ACCESS_KEY`, required. Secret access key for the storage.
- `ENDPOINT_URL`, required. URL of the S3 compatible storage.
- `BUCKET_DIR`, required. Directory in the bucket where sites from this deployment were backed up.

```sh
docker run \
    -e "MYSQL_ROOT_PASSWORD=admin" \
    -e "BUCKET_NAME=backups" \
    -e "ACCESS_KEY_ID=access_id_from_provider" \
    -e "SECRET_ACCESS_KEY=secret_access_from_provider" \
    -e "ENDPOINT_URL=https://region.storage-provider.com" \
    -e "BUCKET_DIR=frappe-bench-v12" \
    -v ./installation/sites:/home/frappe/frappe-bench/sites \
    -v ./backups:/home/frappe/backups \
    --network <project-name>_default \
    frappe/frappe-worker:v12 restore-backup
```

Note:

- The volume must be mounted at `/home/frappe/backups` for restoring sites.
- If no backup files are found in the volume, the S3 credentials are used to pull backups.
- Backup structure for the mounted volume, or for files downloaded from S3:
  - /home/frappe/backups
    - site1.domain.com
      - 20200420_162000
        - 20200420_162000-site1_domain_com-*
    - site2.domain.com
      - 20200420_162000
        - 20200420_162000-site2_domain_com-*
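
When more than one dated directory exists for a site, `restore_backup.py` (added in this PR) restores the most recent one by parsing the directory names with its `DATE_FORMAT` (`%Y%m%d_%H%M%S`) and taking the maximum. A minimal sketch of that selection, with made-up directory names:

```python
import datetime

# Sketch of how restore_backup.py picks the latest backup for a site:
# directory names under /home/frappe/backups/<site>/ are DATE_TIME stamps.
DATE_FORMAT = "%Y%m%d_%H%M%S"
backup_dirs = ["20200419_120000", "20200420_162000"]  # example directory names

backups = [datetime.datetime.strptime(name, DATE_FORMAT) for name in backup_dirs]
latest_backup = max(backups).strftime(DATE_FORMAT)
print(latest_backup)  # 20200420_162000
```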

### Custom apps

To add your own Frappe/ERPNext apps to the image, we'll need to create a custom image with the help of a unique wrapper script
@@ -29,36 +29,27 @@ def main():

     site_config = get_site_config(site_name)

-    # update User's host to '%' required to connect from any container
-    command = 'mysql -h{db_host} -u{mariadb_root_username} -p{mariadb_root_password} -e '.format(
+    mysql_command = 'mysql -h{db_host} -u{mariadb_root_username} -p{mariadb_root_password} -e '.format(
         db_host=config.get('db_host'),
         mariadb_root_username=mariadb_root_username,
         mariadb_root_password=mariadb_root_password
     )
-    command += "\"UPDATE mysql.user SET Host = '%' where User = '{db_name}'; FLUSH PRIVILEGES;\"".format(
+
+    # update User's host to '%' required to connect from any container
+    command = mysql_command + "\"UPDATE mysql.user SET Host = '%' where User = '{db_name}'; FLUSH PRIVILEGES;\"".format(
         db_name=site_config.get('db_name')
     )
     os.system(command)

     # Set db password
-    command = 'mysql -h{db_host} -u{mariadb_root_username} -p{mariadb_root_password} -e '.format(
-        db_host=config.get('db_host'),
-        mariadb_root_username=mariadb_root_username,
-        mariadb_root_password=mariadb_root_password
-    )
-    command += "\"SET PASSWORD FOR '{db_name}'@'%' = PASSWORD('{db_password}'); FLUSH PRIVILEGES;\"".format(
+    command = mysql_command + "\"UPDATE mysql.user SET authentication_string = PASSWORD('{db_password}') WHERE User = \'{db_name}\' AND Host = \'%\';\"".format(
         db_name=site_config.get('db_name'),
         db_password=site_config.get('db_password')
     )
     os.system(command)

     # Grant permission to database
-    command = 'mysql -h{db_host} -u{mariadb_root_username} -p{mariadb_root_password} -e '.format(
-        db_host=config.get('db_host'),
-        mariadb_root_username=mariadb_root_username,
-        mariadb_root_password=mariadb_root_password
-    )
-    command += "\"GRANT ALL PRIVILEGES ON \`{db_name}\`.* TO '{db_name}'@'%'; FLUSH PRIVILEGES;\"".format(
+    command = mysql_command + "\"GRANT ALL PRIVILEGES ON \`{db_name}\`.* TO '{db_name}'@'%'; FLUSH PRIVILEGES;\"".format(
         db_name=site_config.get('db_name')
     )
     os.system(command)
build/common/commands/push_backup.py (new file, 182 lines)

@@ -0,0 +1,182 @@
import os
import time
import boto3

import datetime
from glob import glob
from frappe.utils import get_sites

DATE_FORMAT = "%Y%m%d_%H%M%S"

def get_file_ext():
    return {
        "database": "-database.sql.gz",
        "private_files": "-private-files.tar",
        "public_files": "-files.tar"
    }

def get_backup_details(sitename):
    backup_details = dict()
    file_ext = get_file_ext()

    # add trailing slash https://stackoverflow.com/a/15010678
    site_backup_path = os.path.join(os.getcwd(), sitename, "private", "backups", "")

    if os.path.exists(site_backup_path):
        for filetype, ext in file_ext.items():
            site_slug = sitename.replace('.', '_')
            pattern = site_backup_path + '*-' + site_slug + ext
            backup_files = list(filter(os.path.isfile, glob(pattern)))

            if len(backup_files) > 0:
                backup_files.sort(key=lambda file: os.stat(os.path.join(site_backup_path, file)).st_ctime)
                backup_date = datetime.datetime.strptime(time.ctime(os.path.getmtime(backup_files[0])), "%a %b %d %H:%M:%S %Y")
                backup_details[filetype] = {
                    "sitename": sitename,
                    "file_size_in_bytes": os.stat(backup_files[-1]).st_size,
                    "file_path": os.path.abspath(backup_files[-1]),
                    "filename": os.path.basename(backup_files[-1]),
                    "backup_date": backup_date.date().strftime("%Y-%m-%d %H:%M:%S")
                }

    return backup_details

def get_s3_config():
    check_environment_variables()
    bucket = os.environ.get('BUCKET_NAME')

    conn = boto3.client(
        's3',
        aws_access_key_id=os.environ.get('ACCESS_KEY_ID'),
        aws_secret_access_key=os.environ.get('SECRET_ACCESS_KEY'),
        endpoint_url=os.environ.get('ENDPOINT_URL')
    )

    return conn, bucket

def check_environment_variables():
    if not 'BUCKET_NAME' in os.environ:
        print('Variable BUCKET_NAME not set')
        exit(1)

    if not 'ACCESS_KEY_ID' in os.environ:
        print('Variable ACCESS_KEY_ID not set')
        exit(1)

    if not 'SECRET_ACCESS_KEY' in os.environ:
        print('Variable SECRET_ACCESS_KEY not set')
        exit(1)

    if not 'ENDPOINT_URL' in os.environ:
        print('Variable ENDPOINT_URL not set')
        exit(1)

    if not 'BUCKET_DIR' in os.environ:
        print('Variable BUCKET_DIR not set')
        exit(1)

def upload_file_to_s3(filename, folder, conn, bucket):

    destpath = os.path.join(folder, os.path.basename(filename))
    try:
        print("Uploading file:", filename)
        conn.upload_file(filename, bucket, destpath)

    except Exception as e:
        print("Error uploading: %s" % (e))
        exit(1)

def delete_old_backups(limit, bucket, site_name):
    all_backups = list()
    all_backup_dates = list()
    backup_limit = int(limit)
    check_environment_variables()
    bucket_dir = os.environ.get('BUCKET_DIR')
    oldest_backup_date = None

    s3 = boto3.resource(
        's3',
        aws_access_key_id=os.environ.get('ACCESS_KEY_ID'),
        aws_secret_access_key=os.environ.get('SECRET_ACCESS_KEY'),
        endpoint_url=os.environ.get('ENDPOINT_URL')
    )

    bucket = s3.Bucket(bucket)
    objects = bucket.meta.client.list_objects_v2(
        Bucket=bucket.name,
        Delimiter='/')

    if objects:
        for obj in objects.get('CommonPrefixes'):
            if obj.get('Prefix') == bucket_dir + '/':
                for backup_obj in bucket.objects.filter(Prefix=obj.get('Prefix')):
                    try:
                        # backup_obj.key is bucket_dir/site/date_time/backupfile.extension
                        bucket_dir, site_slug, date_time, backupfile = backup_obj.key.split('/')
                        date_time_object = datetime.datetime.strptime(
                            date_time, DATE_FORMAT
                        )

                        if site_name in backup_obj.key:
                            all_backup_dates.append(date_time_object)
                            all_backups.append(backup_obj.key)
                    except IndexError as error:
                        print(error)
                        exit(1)

    if len(all_backup_dates) > 0:
        oldest_backup_date = min(all_backup_dates)

    if len(all_backups) / 3 > backup_limit:
        oldest_backup = None
        for backup in all_backups:
            try:
                # backup is bucket_dir/site/date_time/backupfile.extension
                backup_dir, site_slug, backup_dt_string, filename = backup.split('/')
                backup_datetime = datetime.datetime.strptime(
                    backup_dt_string, DATE_FORMAT
                )
                if backup_datetime == oldest_backup_date:
                    oldest_backup = backup

            except IndexError as error:
                print(error)
                exit(1)

        if oldest_backup:
            for obj in bucket.objects.filter(Prefix=oldest_backup):
                # delete all keys that are inside the oldest_backup
                if bucket_dir in obj.key:
                    print('Deleting ' + obj.key)
                    s3.Object(bucket.name, obj.key).delete()

def main():
    details = dict()
    sites = get_sites()
    conn, bucket = get_s3_config()

    for site in sites:
        details = get_backup_details(site)
        db_file = details.get('database', {}).get('file_path')
        folder = os.environ.get('BUCKET_DIR') + '/' + site + '/'
        if db_file:
            folder = os.environ.get('BUCKET_DIR') + '/' + site + '/' + os.path.basename(db_file)[:15] + '/'
            upload_file_to_s3(db_file, folder, conn, bucket)

        public_files = details.get('public_files', {}).get('file_path')
        if public_files:
            folder = os.environ.get('BUCKET_DIR') + '/' + site + '/' + os.path.basename(public_files)[:15] + '/'
            upload_file_to_s3(public_files, folder, conn, bucket)

        private_files = details.get('private_files', {}).get('file_path')
        if private_files:
            folder = os.environ.get('BUCKET_DIR') + '/' + site + '/' + os.path.basename(private_files)[:15] + '/'
            upload_file_to_s3(private_files, folder, conn, bucket)

        delete_old_backups(os.environ.get('BACKUP_LIMIT', '3'), bucket, site)

    print('push-backup complete')
    exit(0)

if __name__ == "__main__":
    main()
build/common/commands/restore_backup.py (new file, 188 lines)

@@ -0,0 +1,188 @@
import os
import datetime
import tarfile
import hashlib
import frappe
import boto3

from push_backup import DATE_FORMAT, check_environment_variables
from frappe.utils import get_sites, random_string
from frappe.commands.site import _new_site
from frappe.installer import make_conf, get_conf_params, make_site_dirs
from check_connection import get_site_config, get_config

def list_directories(path):
    directories = []
    for name in os.listdir(path):
        if os.path.isdir(os.path.join(path, name)):
            directories.append(name)
    return directories

def get_backup_dir():
    return os.path.join(
        os.path.expanduser('~'),
        'backups'
    )

def decompress_db(files_base, site):
    database_file = files_base + '-database.sql.gz'
    config = get_config()
    site_config = get_site_config(site)
    db_root_user = os.environ.get('DB_ROOT_USER', 'root')
    command = 'gunzip -c {database_file} > {database_extract}'.format(
        database_file=database_file,
        database_extract=database_file.replace('.gz','')
    )

    print('Extract Database GZip for site {}'.format(site))
    os.system(command)

def restore_database(files_base, site):
    db_root_password = os.environ.get('MYSQL_ROOT_PASSWORD')
    if not db_root_password:
        print('Variable MYSQL_ROOT_PASSWORD not set')
        exit(1)

    db_root_user = os.environ.get("DB_ROOT_USER", 'root')

    # restore database
    database_file = files_base + '-database.sql.gz'
    decompress_db(files_base, site)
    config = get_config()
    site_config = get_site_config(site)

    # mysql command prefix
    mysql_command = 'mysql -u{db_root_user} -h{db_host} -p{db_password} -e '.format(
        db_root_user=db_root_user,
        db_host=config.get('db_host'),
        db_password=db_root_password
    )

    # drop db if exists for clean restore
    drop_database = mysql_command + "\"DROP DATABASE IF EXISTS \`{db_name}\`;\"".format(
        db_name=site_config.get('db_name')
    )
    os.system(drop_database)

    # create db
    create_database = mysql_command + "\"CREATE DATABASE IF NOT EXISTS \`{db_name}\`;\"".format(
        db_name=site_config.get('db_name')
    )
    os.system(create_database)

    # create user
    create_user = mysql_command + "\"CREATE USER IF NOT EXISTS \'{db_name}\'@\'%\' IDENTIFIED BY \'{db_password}\'; FLUSH PRIVILEGES;\"".format(
        db_name=site_config.get('db_name'),
        db_password=site_config.get('db_password')
    )
    os.system(create_user)

    # create user password
    set_user_password = mysql_command + "\"UPDATE mysql.user SET authentication_string = PASSWORD('{db_password}') WHERE User = \'{db_name}\' AND Host = \'%\';\"".format(
        db_name=site_config.get('db_name'),
        db_password=site_config.get('db_password')
    )
    os.system(set_user_password)

    # grant db privileges to user
    grant_privileges = mysql_command + "\"GRANT ALL PRIVILEGES ON \`{db_name}\`.* TO '{db_name}'@'%'; FLUSH PRIVILEGES;\"".format(
        db_name=site_config.get('db_name')
    )
    os.system(grant_privileges)

    command = "mysql -u{db_root_user} -h{db_host} -p{db_password} '{db_name}' < {database_file}".format(
        db_root_user=db_root_user,
        db_host=config.get('db_host'),
        db_password=db_root_password,
        db_name=site_config.get('db_name'),
        database_file=database_file.replace('.gz',''),
    )

    print('Restoring database for site: {}'.format(site))
    os.system(command)

def restore_files(files_base):
    public_files = files_base + '-files.tar'
    # extract tar
    public_tar = tarfile.open(public_files)
    print('Extracting {}'.format(public_files))
    public_tar.extractall()

def restore_private_files(files_base):
    private_files = files_base + '-private-files.tar'
    private_tar = tarfile.open(private_files)
    print('Extracting {}'.format(private_files))
    private_tar.extractall()

def pull_backup_from_s3():
    check_environment_variables()

    # https://stackoverflow.com/a/54672690
    s3 = boto3.resource(
        's3',
        aws_access_key_id=os.environ.get('ACCESS_KEY_ID'),
        aws_secret_access_key=os.environ.get('SECRET_ACCESS_KEY'),
        endpoint_url=os.environ.get('ENDPOINT_URL')
    )

    bucket_dir = os.environ.get('BUCKET_DIR')
    bucket_name = os.environ.get('BUCKET_NAME')
    bucket = s3.Bucket(bucket_name)

    # Change directory to /home/frappe/backups
    os.chdir(get_backup_dir())

    for obj in bucket.objects.filter(Prefix = bucket_dir):
        backup_file = obj.key.replace(os.path.join(bucket_dir,''),'')
        if not os.path.exists(os.path.dirname(backup_file)):
            os.makedirs(os.path.dirname(backup_file))
        print('Downloading {}'.format(backup_file))
        bucket.download_file(obj.key, backup_file)

    os.chdir(os.path.join(os.path.expanduser('~'), 'frappe-bench', 'sites'))

def main():
    backup_dir = get_backup_dir()

    if len(list_directories(backup_dir)) == 0:
        pull_backup_from_s3()

    for site in list_directories(backup_dir):
        site_slug = site.replace('.','_')
        backups = [datetime.datetime.strptime(backup, DATE_FORMAT) for backup in list_directories(os.path.join(backup_dir,site))]
        latest_backup = max(backups).strftime(DATE_FORMAT)
        files_base = os.path.join(backup_dir, site, latest_backup, '')
        files_base += latest_backup + '-' + site_slug
        if site in get_sites():
            restore_database(files_base, site)
            restore_private_files(files_base)
            restore_files(files_base)
        else:
            mariadb_root_password = os.environ.get('MYSQL_ROOT_PASSWORD')
            if not mariadb_root_password:
                print('Variable MYSQL_ROOT_PASSWORD not set')
                exit(1)
            mariadb_root_username = os.environ.get('DB_ROOT_USER', 'root')
            database_file = files_base + '-database.sql.gz'

            site_config = get_conf_params(
                db_name='_' + hashlib.sha1(site.encode()).hexdigest()[:16],
                db_password=random_string(16)
            )

            frappe.local.site = site
            frappe.local.sites_path = os.getcwd()
            frappe.local.site_path = os.getcwd() + '/' + site
            make_conf(
                db_name=site_config.get('db_name'),
                db_password=site_config.get('db_password'),
            )
            make_site_dirs()
            restore_database(files_base, site)
            restore_private_files(files_base)
            restore_files(files_base)

    exit(0)

if __name__ == "__main__":
    main()
@@ -175,6 +175,18 @@ elif [ "$1" = 'console' ]; then
        python /home/frappe/frappe-bench/commands/console.py "$2"
    fi

elif [ "$1" = 'push-backup' ]; then

    su frappe -c ". /home/frappe/frappe-bench/env/bin/activate \
        && python /home/frappe/frappe-bench/commands/push_backup.py"
    exit

elif [ "$1" = 'restore-backup' ]; then

    su frappe -c ". /home/frappe/frappe-bench/env/bin/activate \
        && python /home/frappe/frappe-bench/commands/restore_backup.py"
    exit

else

    exec su frappe -c "$@"
@@ -21,7 +21,7 @@ RUN install_packages \
RUN wget https://github.com/wkhtmltopdf/wkhtmltopdf/releases/download/0.12.5/wkhtmltox_0.12.5-1.stretch_amd64.deb
RUN dpkg -i wkhtmltox_0.12.5-1.stretch_amd64.deb && rm wkhtmltox_0.12.5-1.stretch_amd64.deb

-RUN mkdir -p apps logs commands
+RUN mkdir -p apps logs commands /home/frappe/backups

RUN virtualenv env \
    && . env/bin/activate \

@@ -40,9 +40,9 @@ COPY build/common/worker/install_app.sh /usr/local/bin/install_app

WORKDIR /home/frappe/frappe-bench/sites

-RUN chown -R frappe:frappe /home/frappe/frappe-bench/sites
+RUN chown -R frappe:frappe /home/frappe/frappe-bench/sites /home/frappe/backups

-VOLUME [ "/home/frappe/frappe-bench/sites" ]
+VOLUME [ "/home/frappe/frappe-bench/sites", "/home/frappe/backups" ]

ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["start"]

@@ -18,7 +18,7 @@ RUN install_packages \
RUN wget https://github.com/wkhtmltopdf/wkhtmltopdf/releases/download/0.12.5/wkhtmltox_0.12.5-1.stretch_amd64.deb
RUN dpkg -i wkhtmltox_0.12.5-1.stretch_amd64.deb && rm wkhtmltox_0.12.5-1.stretch_amd64.deb

-RUN mkdir -p apps logs commands
+RUN mkdir -p apps logs commands /home/frappe/backups

RUN virtualenv env \
    && . env/bin/activate \

@@ -37,9 +37,9 @@ COPY build/common/worker/install_app.sh /usr/local/bin/install_app

WORKDIR /home/frappe/frappe-bench/sites

-RUN chown -R frappe:frappe /home/frappe/frappe-bench/sites
+RUN chown -R frappe:frappe /home/frappe/frappe-bench/sites /home/frappe/backups

-VOLUME [ "/home/frappe/frappe-bench/sites" ]
+VOLUME [ "/home/frappe/frappe-bench/sites", "/home/frappe/backups" ]

ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["start"]

@@ -21,7 +21,7 @@ RUN install_packages \
RUN wget https://github.com/wkhtmltopdf/wkhtmltopdf/releases/download/0.12.5/wkhtmltox_0.12.5-1.stretch_amd64.deb
RUN dpkg -i wkhtmltox_0.12.5-1.stretch_amd64.deb && rm wkhtmltox_0.12.5-1.stretch_amd64.deb

-RUN mkdir -p apps logs commands
+RUN mkdir -p apps logs commands /home/frappe/backups

RUN virtualenv env \
    && . env/bin/activate \

@@ -40,9 +40,9 @@ COPY build/common/worker/install_app.sh /usr/local/bin/install_app

WORKDIR /home/frappe/frappe-bench/sites

-RUN chown -R frappe:frappe /home/frappe/frappe-bench/sites
+RUN chown -R frappe:frappe /home/frappe/frappe-bench/sites /home/frappe/backups

-VOLUME [ "/home/frappe/frappe-bench/sites" ]
+VOLUME [ "/home/frappe/frappe-bench/sites", "/home/frappe/backups" ]

ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["start"]
greetings.yml (new file, 15 lines)

@@ -0,0 +1,15 @@
name: Greetings

on: [pull_request, issues]

jobs:
  greeting:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/first-interaction@v1
      with:
        repo-token: ${{ secrets.GITHUB_TOKEN }}
        issue-message: |
          Hello! We're very happy to see your first issue. If your issue is about a problem, go back and check that you have copy-pasted all the debug logs you can, so we can help you as fast as possible!
        pr-message: |
          Hello! Thank you for this PR. Since this is your first PR, please make sure you have described the improvements and that your code is well documented.