Merge branch 'master' into touch-postpone-flag-file

commit c2186db527
@@ -84,7 +84,7 @@ But then a rare genetic mutation happened, and the `c` transformed into `t`. And
 
 We develop `gh-ost` at GitHub and for the community. We may have different priorities than others. From time to time we may suggest a contribution that is not on our immediate roadmap but which may appeal to others.
 
-Please see [Coding gh-ost](https://github.com/github/gh-ost/blob/develdocs/doc/coding-ghost.md) for a guide to getting started developing with gh-ost.
+Please see [Coding gh-ost](doc/coding-ghost.md) for a guide to getting started developing with gh-ost.
 
 ## Download/binaries/source
 
@@ -1 +1 @@
-1.0.36
+1.0.42
@@ -1,11 +1,13 @@
 # Cheatsheet
 
+### Operation modes
+
 ![operation modes](images/gh-ost-operation-modes.png)
 
 
 `gh-ost` operates by connecting to potentially multiple servers, as well as imposing itself as a replica in order to streamline binary log events directly from one of those servers. There are various operation modes, which depend on your setup, configuration, and where you want to run the migration.
 
-### a. Connect to replica, migrate on master
+#### a. Connect to replica, migrate on master
 
 This is the mode `gh-ost` expects by default. `gh-ost` will investigate the replica, crawl up to find the topology's master, and will hook onto it as well. Migration will:
 
@@ -47,7 +49,7 @@ gh-ost \
 With `--execute`, migration actually copies data and flips tables. Without it this is a `noop` run.
 
 
-### b. Connect to master
+#### b. Connect to master
 
 If you don't have replicas, or do not wish to use them, you are still able to operate directly on the master. `gh-ost` will do all operations directly on the master. You may still ask it to be considerate of replication lag.
 
@@ -80,7 +82,7 @@ gh-ost \
   [--execute]
 ```
 
-### c. Migrate/test on replica
+#### c. Migrate/test on replica
 
 This will perform a migration on the replica. `gh-ost` will briefly connect to the master but will thereafter perform all operations on the replica without modifying anything on the master.
 Throughout the operation, `gh-ost` will throttle such that the replica is up to date.
@@ -155,3 +157,15 @@ gh-ost --tungsten --assume-master-host=the.topology.master.com
 ```
 
 Also note that `--switch-to-rbr` does not work for a Tungsten setup as the replication process is external, so you need to make sure `binlog_format` is set to ROW before Tungsten Replicator connects to the server and starts applying events from the master.
+
+### Concurrent migrations
+
+It is possible to run concurrent `gh-ost` migrations.
+
+- Never on the exact same table.
+- If running on different replicas (e.g. `table1` on `replica1` and `table2` on `replica2`), then no further configuration is required.
+- If running from the same server (binaries run on same server, regardless of which replica/replicas are used):
+  - Make sure not to specify the same `-serve-socket-file` (or let `gh-ost` pick one for you).
+  - You may choose to use the same `-throttle-flag-file` (preferably use `-throttle-additional-flag-file`; this is exactly the reason there are two: the latter file is for sharing).
+  - You may choose to use the same `-panic-flag-file`. This all depends on your flow and how you'd like to control your migrations.
+- If using the same inspected box (either master or replica, `--host=everyone.uses.this.host`), then for each `gh-ost` process you must also provide a different, unique `--replica-server-id`. Optionally use the process ID (`$$` in shell); but it's on you to choose a number that does not collide with another `gh-ost` or another running replica.
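A minimal sketch of the `$$` suggestion above, in Go rather than shell (the `10000` offset and the flag printing are assumptions for illustration only): derive a unique `--replica-server-id` from the process ID so that concurrent `gh-ost` processes never collide.

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// os.Getpid() plays the role of $$ in the shell suggestion above; the
	// offset is an arbitrary assumption to stay clear of server_id values
	// already used by real replicas in the topology.
	replicaServerID := 10000 + os.Getpid()
	fmt.Printf("--replica-server-id=%d\n", replicaServerID)
}
```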
@@ -19,6 +19,7 @@ Both interfaces may serve at the same time. Both respond to simple text command,
 - `sup`: returns a brief status summary of migration progress
 - `coordinates`: returns recent (though not exactly up to date) binary log coordinates of the inspected server
 - `chunk-size=<newsize>`: modify the `chunk-size`; applies on next running copy-iteration
+- `dml-batch-size=<newsize>`: modify the `dml-batch-size`; applies on next applying of binary log events
 - `max-lag-millis=<max-lag>`: modify the maximum replication lag threshold (milliseconds, minimum value is `100`, i.e. `0.1` second)
 - `max-load=<max-load-thresholds>`: modify the `max-load` config; applies on next running copy-iteration
 - The `max-load` format must be: `some_status=<numeric-threshold>[,some_status=<numeric-threshold>...]`
@@ -52,7 +53,7 @@ While migration is running:
 $ echo status | nc -U /tmp/gh-ost.test.sample_data_0.sock
 # Migrating `test`.`sample_data_0`; Ghost table is `test`.`_sample_data_0_gst`
 # Migration started at Tue Jun 07 11:45:16 +0200 2016
-# chunk-size: 200; max lag: 1500ms; max-load: map[Threads_connected:20]
+# chunk-size: 200; max lag: 1500ms; dml-batch-size: 10; max-load: map[Threads_connected:20]
 # Throttle additional flag file: /tmp/gh-ost.throttle
 # Serving on unix socket: /tmp/gh-ost.test.sample_data_0.sock
 # Serving on TCP port: 10001
@@ -63,7 +64,7 @@ Copy: 0/2915 0.0%; Applied: 0; Backlog: 0/100; Elapsed: 40s(copy), 41s(total); s
 $ echo "chunk-size=250" | nc -U /tmp/gh-ost.test.sample_data_0.sock
 # Migrating `test`.`sample_data_0`; Ghost table is `test`.`_sample_data_0_gst`
 # Migration started at Tue Jun 07 11:56:03 +0200 2016
-# chunk-size: 250; max lag: 1500ms; max-load: map[Threads_connected:20]
+# chunk-size: 250; max lag: 1500ms; dml-batch-size: 10; max-load: map[Threads_connected:20]
 # Throttle additional flag file: /tmp/gh-ost.throttle
 # Serving on unix socket: /tmp/gh-ost.test.sample_data_0.sock
 # Serving on TCP port: 10001
@@ -23,4 +23,8 @@ At this time there is no equivalent to `ALTER IGNORE`, where duplicates are impl
 
 It is therefore unlikely that `gh-ost` will support this behavior.
 
+### Run concurrent migrations?
+
+Yes. TL;DR if running all on same replica/master, make sure to provide `--replica-server-id`. [Read more](cheatsheet.md#concurrent-migrations)
+
 # Why
@@ -28,7 +28,9 @@ The `SUPER` privilege is required for `STOP SLAVE`, `START SLAVE` operations. Th
 
 - MySQL 5.7 generated columns are not supported. They may be supported in the future.
 
-- MySQL 5.7 `JSON` columns are not supported. They are likely to be supported shortly.
+- MySQL 5.7 `POINT` column type is not supported.
 
+- MySQL 5.7 `JSON` columns are supported but not as part of `PRIMARY KEY`
+
 - The two _before_ & _after_ tables must share a `PRIMARY KEY` or other `UNIQUE KEY`. This key will be used by `gh-ost` to iterate through the table rows when copying. [Read more](shared-key.md)
 - The migration key must not include columns with NULL values. This means either:
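As background to the shared-key requirement above: the unique key is what makes chunked, resumable row copy possible. The sketch below is illustrative only (single integer key, hypothetical table names); the real implementation builds prepared range queries over arbitrary key columns in `go/sql/builder.go`.

```go
package main

import "fmt"

func main() {
	// Keyset-style chunking: each iteration resumes strictly after the
	// previous chunk's maximum key value, so no rows are skipped or repeated.
	lastSeenID, chunkSize := 0, 250
	query := fmt.Sprintf(
		"insert ignore into _tbl_gho select * from tbl where id > %d order by id limit %d",
		lastSeenID, chunkSize)
	fmt.Println(query)
}
```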
@@ -47,6 +47,7 @@ const (
 
 const (
 	HTTPStatusOK = 200
+	MaxEventsBatchSize = 1000
 )
 
 var (
@@ -191,6 +192,7 @@ type MigrationContext struct {
 	Iteration                        int64
 	MigrationIterationRangeMinValues *sql.ColumnValues
 	MigrationIterationRangeMaxValues *sql.ColumnValues
+	ForceTmpTableName                string
 
 	recentBinlogCoordinates mysql.BinlogCoordinates
 
@@ -242,26 +244,52 @@ func GetMigrationContext() *MigrationContext {
 	return context
 }
 
+func getSafeTableName(baseName string, suffix string) string {
+	name := fmt.Sprintf("_%s_%s", baseName, suffix)
+	if len(name) <= mysql.MaxTableNameLength {
+		return name
+	}
+	extraCharacters := len(name) - mysql.MaxTableNameLength
+	return fmt.Sprintf("_%s_%s", baseName[0:len(baseName)-extraCharacters], suffix)
+}
+
 // GetGhostTableName generates the name of ghost table, based on original table name
+// or a given table name
 func (this *MigrationContext) GetGhostTableName() string {
-	return fmt.Sprintf("_%s_gho", this.OriginalTableName)
+	if this.ForceTmpTableName != "" {
+		return getSafeTableName(this.ForceTmpTableName, "gho")
+	} else {
+		return getSafeTableName(this.OriginalTableName, "gho")
+	}
 }
 
 // GetOldTableName generates the name of the "old" table, into which the original table is renamed.
 func (this *MigrationContext) GetOldTableName() string {
+	var tableName string
+	if this.ForceTmpTableName != "" {
+		tableName = this.ForceTmpTableName
+	} else {
+		tableName = this.OriginalTableName
+	}
+
 	if this.TimestampOldTable {
 		t := this.StartTime
 		timestamp := fmt.Sprintf("%d%02d%02d%02d%02d%02d",
 			t.Year(), t.Month(), t.Day(),
 			t.Hour(), t.Minute(), t.Second())
-		return fmt.Sprintf("_%s_%s_del", this.OriginalTableName, timestamp)
+		return getSafeTableName(tableName, fmt.Sprintf("%s_del", timestamp))
 	}
-	return fmt.Sprintf("_%s_del", this.OriginalTableName)
+	return getSafeTableName(tableName, "del")
 }
 
 // GetChangelogTableName generates the name of changelog table, based on original table name
+// or a given table name.
 func (this *MigrationContext) GetChangelogTableName() string {
-	return fmt.Sprintf("_%s_ghc", this.OriginalTableName)
+	if this.ForceTmpTableName != "" {
+		return getSafeTableName(this.ForceTmpTableName, "ghc")
+	} else {
+		return getSafeTableName(this.OriginalTableName, "ghc")
+	}
 }
 
 // GetVoluntaryLockName returns a name of a voluntary lock to be used throughout
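A standalone sketch of the truncation rule `getSafeTableName` applies, mirroring the expectations in the new `go/base/context_test.go` further down this diff: `_<name>_<suffix>` must fit MySQL's 64-character table name limit, so the base name is shortened by exactly the number of overflowing characters. The constant here restates `mysql.MaxTableNameLength` for self-containment.

```go
package main

import "fmt"

const maxTableNameLength = 64 // restates mysql.MaxTableNameLength from this diff

func safeTableName(baseName, suffix string) string {
	name := fmt.Sprintf("_%s_%s", baseName, suffix)
	if len(name) <= maxTableNameLength {
		return name
	}
	extra := len(name) - maxTableNameLength
	return fmt.Sprintf("_%s_%s", baseName[0:len(baseName)-extra], suffix)
}

func main() {
	fmt.Println(safeTableName("some_table", "gho")) // _some_table_gho
	// A 64-character base overflows; the result is trimmed to exactly 64 characters:
	long := "a123456789012345678901234567890123456789012345678901234567890123"
	fmt.Println(len(safeTableName(long, "del"))) // 64
}
```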
@@ -441,8 +469,8 @@ func (this *MigrationContext) SetDMLBatchSize(batchSize int64) {
 	if batchSize < 1 {
 		batchSize = 1
 	}
-	if batchSize > 100 {
-		batchSize = 100
+	if batchSize > MaxEventsBatchSize {
+		batchSize = MaxEventsBatchSize
 	}
 	atomic.StoreInt64(&this.DMLBatchSize, batchSize)
 }
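For clarity, the effective behavior after this change, as a minimal self-contained sketch (the constant restates `base.MaxEventsBatchSize`, which is 1000 in this diff): requested values are clamped into `[1, MaxEventsBatchSize]` before being stored atomically.

```go
package main

import "fmt"

const maxEventsBatchSize = 1000 // restates base.MaxEventsBatchSize

func clampBatchSize(batchSize int64) int64 {
	if batchSize < 1 {
		batchSize = 1 // too small: floor at one event per batch
	}
	if batchSize > maxEventsBatchSize {
		batchSize = maxEventsBatchSize // too large: cap at the new maximum
	}
	return batchSize
}

func main() {
	fmt.Println(clampBatchSize(0), clampBatchSize(10), clampBatchSize(5000)) // 1 10 1000
}
```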
@@ -672,7 +700,7 @@ func (this *MigrationContext) ReadConfigFile() error {
 	gcfg.RelaxedParserMode = true
 	gcfgscanner.RelaxedScannerMode = true
 	if err := gcfg.ReadFileInto(&this.config, this.ConfigFile); err != nil {
-		return err
+		return fmt.Errorf("Error reading config file %s. Details: %s", this.ConfigFile, err.Error())
 	}
 
 	// We accept user & password in the form "${SOME_ENV_VARIABLE}" in which case we pull
go/base/context_test.go (new file, 58 lines)
@@ -0,0 +1,58 @@
+/*
+   Copyright 2016 GitHub Inc.
+   See https://github.com/github/gh-ost/blob/master/LICENSE
+*/
+
+package base
+
+import (
+	"testing"
+	"time"
+
+	"github.com/outbrain/golib/log"
+	test "github.com/outbrain/golib/tests"
+)
+
+func init() {
+	log.SetLevel(log.ERROR)
+}
+
+func TestGetTableNames(t *testing.T) {
+	{
+		context = newMigrationContext()
+		context.OriginalTableName = "some_table"
+		test.S(t).ExpectEquals(context.GetOldTableName(), "_some_table_del")
+		test.S(t).ExpectEquals(context.GetGhostTableName(), "_some_table_gho")
+		test.S(t).ExpectEquals(context.GetChangelogTableName(), "_some_table_ghc")
+	}
+	{
+		context = newMigrationContext()
+		context.OriginalTableName = "a123456789012345678901234567890123456789012345678901234567890"
+		test.S(t).ExpectEquals(context.GetOldTableName(), "_a1234567890123456789012345678901234567890123456789012345678_del")
+		test.S(t).ExpectEquals(context.GetGhostTableName(), "_a1234567890123456789012345678901234567890123456789012345678_gho")
+		test.S(t).ExpectEquals(context.GetChangelogTableName(), "_a1234567890123456789012345678901234567890123456789012345678_ghc")
+	}
+	{
+		context = newMigrationContext()
+		context.OriginalTableName = "a123456789012345678901234567890123456789012345678901234567890123"
+		oldTableName := context.GetOldTableName()
+		test.S(t).ExpectEquals(oldTableName, "_a1234567890123456789012345678901234567890123456789012345678_del")
+	}
+	{
+		context = newMigrationContext()
+		context.OriginalTableName = "a123456789012345678901234567890123456789012345678901234567890123"
+		context.TimestampOldTable = true
+		longForm := "Jan 2, 2006 at 3:04pm (MST)"
+		context.StartTime, _ = time.Parse(longForm, "Feb 3, 2013 at 7:54pm (PST)")
+		oldTableName := context.GetOldTableName()
+		test.S(t).ExpectEquals(oldTableName, "_a1234567890123456789012345678901234567890123_20130203195400_del")
+	}
+	{
+		context = newMigrationContext()
+		context.OriginalTableName = "foo_bar_baz"
+		context.ForceTmpTableName = "tmp"
+		test.S(t).ExpectEquals(context.GetOldTableName(), "_tmp_del")
+		test.S(t).ExpectEquals(context.GetGhostTableName(), "_tmp_gho")
+		test.S(t).ExpectEquals(context.GetChangelogTableName(), "_tmp_ghc")
+	}
+}
@@ -63,7 +63,7 @@ func main() {
 	flag.BoolVar(&migrationContext.AllowedRunningOnMaster, "allow-on-master", false, "allow this migration to run directly on master. Preferably it would run on a replica")
 	flag.BoolVar(&migrationContext.AllowedMasterMaster, "allow-master-master", false, "explicitly allow running in a master-master setup")
 	flag.BoolVar(&migrationContext.NullableUniqueKeyAllowed, "allow-nullable-unique-key", false, "allow gh-ost to migrate based on a unique key with nullable columns. As long as no NULL values exist, this should be OK. If NULL values exist in chosen key, data may be corrupted. Use at your own risk!")
-	flag.BoolVar(&migrationContext.ApproveRenamedColumns, "approve-renamed-columns", false, "in case your `ALTER` statement renames columns, gh-ost will note that and offer its interpretation of the rename. By default gh-ost does not proceed to execute. This flag approves that gh-ost's interpretation si correct")
+	flag.BoolVar(&migrationContext.ApproveRenamedColumns, "approve-renamed-columns", false, "in case your `ALTER` statement renames columns, gh-ost will note that and offer its interpretation of the rename. By default gh-ost does not proceed to execute. This flag approves that gh-ost's interpretation is correct")
 	flag.BoolVar(&migrationContext.SkipRenamedColumns, "skip-renamed-columns", false, "in case your `ALTER` statement renames columns, gh-ost will note that and offer its interpretation of the rename. By default gh-ost does not proceed to execute. This flag tells gh-ost to skip the renamed columns, i.e. to treat what gh-ost thinks are renamed columns as unrelated columns. NOTE: you may lose column data")
 	flag.BoolVar(&migrationContext.IsTungsten, "tungsten", false, "explicitly let gh-ost know that you are running on a tungsten-replication based topology (you are likely to also provide --assume-master-host)")
 	flag.BoolVar(&migrationContext.DiscardForeignKeys, "discard-foreign-keys", false, "DANGER! This flag will migrate a table that has foreign keys and will NOT create foreign keys on the ghost table, thus your altered table will have NO foreign keys. This is useful for intentional dropping of foreign keys")
@@ -120,6 +120,7 @@ func main() {
 	help := flag.Bool("help", false, "Display usage")
 	version := flag.Bool("version", false, "Print version & exit")
 	checkFlag := flag.Bool("check-flag", false, "Check if another flag exists/supported. This allows for cross-version scripting. Exits with 0 when all additional provided flags exist, nonzero otherwise. You must provide (dummy) values for flags that require a value. Example: gh-ost --check-flag --cut-over-lock-timeout-seconds --nice-ratio 0")
+	flag.StringVar(&migrationContext.ForceTmpTableName, "force-table-names", "", "table name prefix to be used on the temporary tables")
 
 	flag.Parse()
 
@@ -200,7 +200,7 @@ func (this *Applier) CreateChangelogTable() error {
 			id bigint auto_increment,
 			last_update timestamp not null DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
 			hint varchar(64) charset ascii not null,
-			value varchar(255) charset ascii not null,
+			value varchar(4096) charset ascii not null,
 			primary key(id),
 			unique key hint_uidx(hint)
 		) auto_increment=256
@@ -398,7 +398,12 @@ func (this *Applier) CalculateNextIterationRangeEndValues() (hasFurtherRange boo
 	if this.migrationContext.MigrationIterationRangeMinValues == nil {
 		this.migrationContext.MigrationIterationRangeMinValues = this.migrationContext.MigrationRangeMinValues
 	}
-	query, explodedArgs, err := sql.BuildUniqueKeyRangeEndPreparedQuery(
+	for i := 0; i < 2; i++ {
+		buildFunc := sql.BuildUniqueKeyRangeEndPreparedQueryViaOffset
+		if i == 1 {
+			buildFunc = sql.BuildUniqueKeyRangeEndPreparedQueryViaTemptable
+		}
+		query, explodedArgs, err := buildFunc(
 			this.migrationContext.DatabaseName,
 			this.migrationContext.OriginalTableName,
 			&this.migrationContext.UniqueKey.Columns,
@@ -422,11 +427,12 @@ func (this *Applier) CalculateNextIterationRangeEndValues() (hasFurtherRange boo
 			}
 			hasFurtherRange = true
 		}
-	if !hasFurtherRange {
-		log.Debugf("Iteration complete: no further range to iterate")
+		if hasFurtherRange {
+			this.migrationContext.MigrationIterationRangeMaxValues = iterationRangeMaxValues
 			return hasFurtherRange, nil
 		}
-	this.migrationContext.MigrationIterationRangeMaxValues = iterationRangeMaxValues
+	}
+	log.Debugf("Iteration complete: no further range to iterate")
 	return hasFurtherRange, nil
 }
 
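The loop above tries two query strategies in order: the offset-based range query first, then the temptable variant. Below is a reduced, hypothetical sketch of that pattern (all names invented for illustration); the real code returns from inside the loop as soon as a strategy yields a further range.

```go
package main

import "fmt"

// buildStrategy stands in for the two query builders tried in order above.
type buildStrategy func() (query string, hasRange bool)

func nextRange(strategies []buildStrategy) (string, bool) {
	for _, build := range strategies {
		if query, ok := build(); ok {
			return query, true // first strategy that finds a further range wins
		}
	}
	return "", false // no strategy found a further range: iteration is complete
}

func main() {
	viaOffset := func() (string, bool) { return "", false } // pretend offset found nothing
	viaTemptable := func() (string, bool) { return "select ... via temptable", true }
	query, ok := nextRange([]buildStrategy{viaOffset, viaTemptable})
	fmt.Println(query, ok)
}
```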
@@ -47,7 +47,7 @@ func (this *HooksExecutor) initHooks() error {
 	return nil
 }
 
-func (this *HooksExecutor) applyEnvironmentVairables(extraVariables ...string) []string {
+func (this *HooksExecutor) applyEnvironmentVariables(extraVariables ...string) []string {
 	env := os.Environ()
 	env = append(env, fmt.Sprintf("GH_OST_DATABASE_NAME=%s", this.migrationContext.DatabaseName))
 	env = append(env, fmt.Sprintf("GH_OST_TABLE_NAME=%s", this.migrationContext.OriginalTableName))
@@ -75,7 +75,7 @@ func (this *HooksExecutor) applyEnvironmentVairables(extraVariables ...string) [
 // combined output & error are printed to gh-ost's standard error.
 func (this *HooksExecutor) executeHook(hook string, extraVariables ...string) error {
 	cmd := exec.Command(hook)
-	cmd.Env = this.applyEnvironmentVairables(extraVariables...)
+	cmd.Env = this.applyEnvironmentVariables(extraVariables...)
 
 	combinedOutput, err := cmd.CombinedOutput()
 	fmt.Fprintln(os.Stderr, string(combinedOutput))
@@ -121,10 +121,33 @@ func (this *Inspector) inspectOriginalAndGhostTables() (err error) {
 	if err != nil {
 		return err
 	}
-	if len(sharedUniqueKeys) == 0 {
+	for i, sharedUniqueKey := range sharedUniqueKeys {
+		this.applyColumnTypes(this.migrationContext.DatabaseName, this.migrationContext.OriginalTableName, &sharedUniqueKey.Columns)
+		uniqueKeyIsValid := true
+		for _, column := range sharedUniqueKey.Columns.Columns() {
+			switch column.Type {
+			case sql.FloatColumnType:
+				{
+					log.Warning("Will not use %+v as shared key due to FLOAT data type", sharedUniqueKey.Name)
+					uniqueKeyIsValid = false
+				}
+			case sql.JSONColumnType:
+				{
+					// Noteworthy that at this time MySQL does not allow JSON indexing anyhow, but this code
+					// will remain in place to potentially handle the future case where JSON is supported in indexes.
+					log.Warning("Will not use %+v as shared key due to JSON data type", sharedUniqueKey.Name)
+					uniqueKeyIsValid = false
+				}
+			}
+		}
+		if uniqueKeyIsValid {
+			this.migrationContext.UniqueKey = sharedUniqueKeys[i]
+			break
+		}
+	}
+	if this.migrationContext.UniqueKey == nil {
 		return fmt.Errorf("No shared unique key can be found after ALTER! Bailing out")
 	}
-	this.migrationContext.UniqueKey = sharedUniqueKeys[0]
 	log.Infof("Chosen shared unique key is %s", this.migrationContext.UniqueKey.Name)
 	if this.migrationContext.UniqueKey.HasNullable {
 		if this.migrationContext.NullableUniqueKeyAllowed {
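A brief illustration (not from this commit) of why a FLOAT column is rejected as a chunking key above: floating-point values do not round-trip reliably through literals, so equality-based range boundaries could skip or repeat rows.

```go
package main

import "fmt"

func main() {
	var f float32 = 0.1
	// The float32 stored in the column widens to a different float64 than the
	// literal 0.1, so a WHERE clause comparing them would not match the row.
	fmt.Println(float64(f) == 0.1) // false
}
```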
@@ -169,6 +192,9 @@ func (this *Inspector) inspectOriginalAndGhostTables() (err error) {
 
 // validateConnection issues a simple can-connect to MySQL
 func (this *Inspector) validateConnection() error {
+	if len(this.connectionConfig.Password) > mysql.MaxReplicationPasswordLength {
+		return fmt.Errorf("MySQL replication length limited to 32 characters. See https://dev.mysql.com/doc/refman/5.7/en/assigning-passwords.html")
+	}
 	query := `select @@global.port, @@global.version`
 	var port int
 	if err := this.db.QueryRow(query).Scan(&port, &this.migrationContext.InspectorMySQLVersion); err != nil {
@@ -545,6 +571,16 @@ func (this *Inspector) applyColumnTypes(databaseName, tableName string, columnsL
 			columnsList.GetColumn(columnName).Type = sql.DateTimeColumnType
 		}
 	}
+	if strings.Contains(columnType, "json") {
+		for _, columnsList := range columnsLists {
+			columnsList.GetColumn(columnName).Type = sql.JSONColumnType
+		}
+	}
+	if strings.Contains(columnType, "float") {
+		for _, columnsList := range columnsLists {
+			columnsList.GetColumn(columnName).Type = sql.FloatColumnType
+		}
+	}
 	if strings.HasPrefix(columnType, "enum") {
 		for _, columnsList := range columnsLists {
 			columnsList.GetColumn(columnName).Type = sql.EnumColumnType
@@ -656,19 +692,22 @@ func (this *Inspector) getSharedUniqueKeys(originalUniqueKeys, ghostUniqueKeys [
 
 // getSharedColumns returns the intersection of two lists of columns in same order as the first list
 func (this *Inspector) getSharedColumns(originalColumns, ghostColumns *sql.ColumnList, columnRenameMap map[string]string) (*sql.ColumnList, *sql.ColumnList) {
-	columnsInGhost := make(map[string]bool)
-	for _, ghostColumn := range ghostColumns.Names() {
-		columnsInGhost[ghostColumn] = true
-	}
 	sharedColumnNames := []string{}
 	for _, originalColumn := range originalColumns.Names() {
 		isSharedColumn := false
-		if columnsInGhost[originalColumn] || columnsInGhost[columnRenameMap[originalColumn]] {
+		for _, ghostColumn := range ghostColumns.Names() {
+			if strings.EqualFold(originalColumn, ghostColumn) {
 				isSharedColumn = true
 			}
-		if this.migrationContext.DroppedColumnsMap[originalColumn] {
+			if strings.EqualFold(columnRenameMap[originalColumn], ghostColumn) {
+				isSharedColumn = true
+			}
+		}
+		for droppedColumn := range this.migrationContext.DroppedColumnsMap {
+			if strings.EqualFold(originalColumn, droppedColumn) {
 				isSharedColumn = false
 			}
+		}
 		if isSharedColumn {
 			sharedColumnNames = append(sharedColumnNames, originalColumn)
 		}
@@ -10,10 +10,8 @@ import (
 	"io"
 	"math"
 	"os"
-	"os/signal"
 	"strings"
 	"sync/atomic"
-	"syscall"
 	"time"
 
 	"github.com/github/gh-ost/go/base"
@@ -52,10 +50,6 @@ func newApplyEventStructByDML(dmlEvent *binlog.BinlogDMLEvent) *applyEventStruct
 	return result
 }
 
-const (
-	applyEventsQueueBuffer = 100
-)
-
 type PrintStatusRule int
 
 const (
@@ -79,7 +73,7 @@ type Migrator struct {
 
 	firstThrottlingCollected   chan bool
 	ghostTableMigrated         chan bool
-	rowCopyComplete            chan bool
+	rowCopyComplete            chan error
 	allEventsUpToLockProcessed chan string
 
 	rowCopyCompleteFlag int64
@@ -97,31 +91,16 @@ func NewMigrator() *Migrator {
 		parser:                     sql.NewParser(),
 		ghostTableMigrated:         make(chan bool),
 		firstThrottlingCollected:   make(chan bool, 3),
-		rowCopyComplete:            make(chan bool),
+		rowCopyComplete:            make(chan error),
 		allEventsUpToLockProcessed: make(chan string),
 
 		copyRowsQueue:          make(chan tableWriteFunc),
-		applyEventsQueue:       make(chan *applyEventStruct, applyEventsQueueBuffer),
+		applyEventsQueue:       make(chan *applyEventStruct, base.MaxEventsBatchSize),
 		handledChangelogStates: make(map[string]bool),
 	}
 	return migrator
 }
 
-// acceptSignals registers for OS signals
-func (this *Migrator) acceptSignals() {
-	c := make(chan os.Signal, 1)
-
-	signal.Notify(c, syscall.SIGHUP)
-	go func() {
-		for sig := range c {
-			switch sig {
-			case syscall.SIGHUP:
-				log.Debugf("Received SIGHUP. Reloading configuration")
-			}
-		}
-	}()
-}
-
 // initiateHooksExecutor
 func (this *Migrator) initiateHooksExecutor() (err error) {
 	this.hooksExecutor = NewHooksExecutor()
@@ -180,11 +159,16 @@ func (this *Migrator) executeAndThrottleOnError(operation func() error) (err err
 // consumeRowCopyComplete blocks on the rowCopyComplete channel once, and then
 // consumes and drops any further incoming events that may be left hanging.
 func (this *Migrator) consumeRowCopyComplete() {
-	<-this.rowCopyComplete
+	if err := <-this.rowCopyComplete; err != nil {
+		this.migrationContext.PanicAbort <- err
+	}
 	atomic.StoreInt64(&this.rowCopyCompleteFlag, 1)
 	this.migrationContext.MarkRowCopyEndTime()
 	go func() {
-		for <-this.rowCopyComplete {
+		for err := range this.rowCopyComplete {
+			if err != nil {
+				this.migrationContext.PanicAbort <- err
+			}
 		}
 	}()
 }
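The switch from `chan bool` to `chan error` above lets a failed row copy surface through the completion channel instead of being silently swallowed. A self-contained sketch of that pattern (channel names mirror the diff; everything else is illustrative):

```go
package main

import "fmt"

func main() {
	rowCopyComplete := make(chan error, 1)
	panicAbort := make(chan error, 1)

	// Producer side: report how the copy phase ended (nil on success).
	go func() { rowCopyComplete <- fmt.Errorf("copy failed: example error") }()

	// Consumer side: a non-nil value is escalated, mirroring PanicAbort <- err above.
	if err := <-rowCopyComplete; err != nil {
		panicAbort <- err
	}
	fmt.Println(<-panicAbort)
}
```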
@@ -777,9 +761,10 @@ func (this *Migrator) printMigrationStatusHint(writers ...io.Writer) {
 	))
 	maxLoad := this.migrationContext.GetMaxLoad()
 	criticalLoad := this.migrationContext.GetCriticalLoad()
-	fmt.Fprintln(w, fmt.Sprintf("# chunk-size: %+v; max-lag-millis: %+vms; max-load: %s; critical-load: %s; nice-ratio: %f",
+	fmt.Fprintln(w, fmt.Sprintf("# chunk-size: %+v; max-lag-millis: %+vms; dml-batch-size: %+v; max-load: %s; critical-load: %s; nice-ratio: %f",
 		atomic.LoadInt64(&this.migrationContext.ChunkSize),
 		atomic.LoadInt64(&this.migrationContext.MaxLagMillisecondsThrottleThreshold),
+		atomic.LoadInt64(&this.migrationContext.DMLBatchSize),
 		maxLoad.String(),
 		criticalLoad.String(),
 		this.migrationContext.GetNiceRatio(),
@@ -945,7 +930,7 @@ func (this *Migrator) printStatus(rule PrintStatusRule, writers ...io.Writer) {
 	}
 }
 
-// initiateStreaming begins treaming of binary log events and registers listeners for such events
+// initiateStreaming begins streaming of binary log events and registers listeners for such events
 func (this *Migrator) initiateStreaming() error {
 	this.eventsStreamer = NewEventsStreamer()
 	if err := this.eventsStreamer.InitDBConnections(); err != nil {
@@ -1039,7 +1024,7 @@ func (this *Migrator) initiateApplier() error {
 // a chunk of rows onto the ghost table.
 func (this *Migrator) iterateChunks() error {
 	terminateRowIteration := func(err error) error {
-		this.rowCopyComplete <- true
+		this.rowCopyComplete <- err
 		return log.Errore(err)
 	}
 	if this.migrationContext.Noop {
@@ -1091,7 +1076,10 @@ func (this *Migrator) iterateChunks() error {
 			atomic.AddInt64(&this.migrationContext.Iteration, 1)
 			return nil
 		}
-		return this.retryOperation(applyCopyRowsFunc)
+		if err := this.retryOperation(applyCopyRowsFunc); err != nil {
+			return terminateRowIteration(err)
+		}
+		return nil
 	}
 	// Enqueue copy operation; to be executed by executeWriteFuncs()
 	this.copyRowsQueue <- copyRowsFunc
@@ -146,6 +146,7 @@ status # Print a detailed status message
 sup                      # Print a short status message
 coordinates              # Print the currently inspected coordinates
 chunk-size=<newsize>     # Set a new chunk-size
+dml-batch-size=<newsize> # Set a new dml-batch-size
 nice-ratio=<ratio>       # Set a new nice-ratio, immediate sleep after each row-copy operation, float (examples: 0 is aggressive, 0.7 adds 70% runtime, 1.0 doubles runtime, 2.0 triples runtime, ...)
 critical-load=<load>     # Set a new set of max-load thresholds
 max-lag-millis=<max-lag> # Set a new replication lag threshold
@@ -187,6 +188,19 @@ help # This message
 			return ForcePrintStatusAndHintRule, nil
 		}
 	}
+	case "dml-batch-size":
+		{
+			if argIsQuestion {
+				fmt.Fprintf(writer, "%+v\n", atomic.LoadInt64(&this.migrationContext.DMLBatchSize))
+				return NoPrintStatusRule, nil
+			}
+			if dmlBatchSize, err := strconv.Atoi(arg); err != nil {
+				return NoPrintStatusRule, err
+			} else {
+				this.migrationContext.SetDMLBatchSize(int64(dmlBatchSize))
+				return ForcePrintStatusAndHintRule, nil
+			}
+		}
 	case "max-lag-millis":
 		{
 			if argIsQuestion {
@@ -8,6 +8,7 @@ package logic
 import (
 	"fmt"
 	"net/http"
+	"strings"
 	"sync/atomic"
 	"time"
 
@@ -18,6 +19,26 @@ import (
 	"github.com/outbrain/golib/sqlutils"
 )
 
+var (
+	httpStatusMessages map[int]string = map[int]string{
+		200: "OK",
+		404: "Not found",
+		417: "Expectation failed",
+		429: "Too many requests",
+		500: "Internal server error",
+	}
+	// See https://github.com/github/freno/blob/master/doc/http.md
+	httpStatusFrenoMessages map[int]string = map[int]string{
+		200: "OK",
+		404: "freno: unknown metric",
+		417: "freno: access forbidden",
+		429: "freno: threshold exceeded",
+		500: "freno: internal error",
+	}
+)
+
+const frenoMagicHint = "freno"
+
 // Throttler collects metrics related to throttling and makes informed decisions
 // whether throttling should take place.
 type Throttler struct {
@@ -34,6 +55,17 @@ func NewThrottler(applier *Applier, inspector *Inspector) *Throttler {
 	}
 }
 
+func (this *Throttler) throttleHttpMessage(statusCode int) string {
+	statusCodesMap := httpStatusMessages
+	if throttleHttp := this.migrationContext.GetThrottleHTTP(); strings.Contains(throttleHttp, frenoMagicHint) {
+		statusCodesMap = httpStatusFrenoMessages
+	}
+	if message, ok := statusCodesMap[statusCode]; ok {
+		return fmt.Sprintf("%s (http=%d)", message, statusCode)
+	}
+	return fmt.Sprintf("http=%d", statusCode)
+}
+
 // shouldThrottle performs checks to see whether we should currently be throttling.
 // It merely observes the metrics collected by other components, it does not issue
 // its own metric collection.
@@ -49,7 +81,7 @@ func (this *Throttler) shouldThrottle() (result bool, reason string, reasonHint
 	// HTTP throttle
 	statusCode := atomic.LoadInt64(&this.migrationContext.ThrottleHTTPStatusCode)
 	if statusCode != 0 && statusCode != http.StatusOK {
-		return true, fmt.Sprintf("http=%d", statusCode), base.NoThrottleReasonHint
+		return true, this.throttleHttpMessage(int(statusCode)), base.NoThrottleReasonHint
 	}
 	// Replication lag throttle
 	maxLagMillisecondsThrottleThreshold := atomic.LoadInt64(&this.migrationContext.MaxLagMillisecondsThrottleThreshold)
@@ -56,5 +56,5 @@ func (this *ConnectionConfig) GetDBUri(databaseName string) string {
 		// Wrap IPv6 literals in square brackets
 		hostname = fmt.Sprintf("[%s]", hostname)
 	}
-	return fmt.Sprintf("%s:%s@tcp(%s:%d)/%s?charset=utf8mb4,utf8,latin1", this.User, this.Password, hostname, this.Key.Port, databaseName)
+	return fmt.Sprintf("%s:%s@tcp(%s:%d)/%s?interpolateParams=true&autocommit=true&charset=utf8mb4,utf8,latin1", this.User, this.Password, hostname, this.Key.Port, databaseName)
 }
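As a hedged illustration of the DSN shape the changed line produces (placeholder credentials and host; in go-sql-driver/mysql, `interpolateParams=true` interpolates query placeholders client-side rather than using server-side prepared statements):

```go
package main

import "fmt"

func main() {
	// All values below are placeholders for the example.
	user, password, hostname, port, db := "gh-ost", "secret", "replica.example.local", 3306, "test"
	uri := fmt.Sprintf("%s:%s@tcp(%s:%d)/%s?interpolateParams=true&autocommit=true&charset=utf8mb4,utf8,latin1",
		user, password, hostname, port, db)
	fmt.Println(uri)
	// gh-ost:secret@tcp(replica.example.local:3306)/test?interpolateParams=true&autocommit=true&charset=utf8mb4,utf8,latin1
}
```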
@@ -17,6 +17,7 @@ import (
 )
 
 const MaxTableNameLength = 64
+const MaxReplicationPasswordLength = 32
 
 type ReplicationLagResult struct {
 	Key InstanceKey
@@ -38,6 +38,8 @@ func buildColumnsPreparedValues(columns *ColumnList) []string {
 		var token string
 		if column.timezoneConversion != nil {
 			token = fmt.Sprintf("convert_tz(?, '%s', '%s')", column.timezoneConversion.ToTimezone, "+00:00")
+		} else if column.Type == JSONColumnType {
+			token = "convert(? using utf8mb4)"
 		} else {
 			token = "?"
 		}
@@ -106,6 +108,8 @@ func BuildSetPreparedClause(columns *ColumnList) (result string, err error) {
 		var setToken string
 		if column.timezoneConversion != nil {
 			setToken = fmt.Sprintf("%s=convert_tz(?, '%s', '%s')", EscapeName(column.Name), column.timezoneConversion.ToTimezone, "+00:00")
+		} else if column.Type == JSONColumnType {
+			setToken = fmt.Sprintf("%s=convert(? using utf8mb4)", EscapeName(column.Name))
 		} else {
 			setToken = fmt.Sprintf("%s=?", EscapeName(column.Name))
 		}
@@ -231,7 +235,62 @@ func BuildRangeInsertPreparedQuery(databaseName, originalTableName, ghostTableNa
 	return BuildRangeInsertQuery(databaseName, originalTableName, ghostTableName, sharedColumns, mappedSharedColumns, uniqueKey, uniqueKeyColumns, rangeStartValues, rangeEndValues, rangeStartArgs, rangeEndArgs, includeRangeStartValues, transactionalTable)
 }
 
-func BuildUniqueKeyRangeEndPreparedQuery(databaseName, tableName string, uniqueKeyColumns *ColumnList, rangeStartArgs, rangeEndArgs []interface{}, chunkSize int64, includeRangeStartValues bool, hint string) (result string, explodedArgs []interface{}, err error) {
+func BuildUniqueKeyRangeEndPreparedQueryViaOffset(databaseName, tableName string, uniqueKeyColumns *ColumnList, rangeStartArgs, rangeEndArgs []interface{}, chunkSize int64, includeRangeStartValues bool, hint string) (result string, explodedArgs []interface{}, err error) {
+	if uniqueKeyColumns.Len() == 0 {
+		return "", explodedArgs, fmt.Errorf("Got 0 columns in BuildUniqueKeyRangeEndPreparedQuery")
+	}
+	databaseName = EscapeName(databaseName)
+	tableName = EscapeName(tableName)
+
+	var startRangeComparisonSign ValueComparisonSign = GreaterThanComparisonSign
+	if includeRangeStartValues {
+		startRangeComparisonSign = GreaterThanOrEqualsComparisonSign
+	}
+	rangeStartComparison, rangeExplodedArgs, err := BuildRangePreparedComparison(uniqueKeyColumns, rangeStartArgs, startRangeComparisonSign)
+	if err != nil {
+		return "", explodedArgs, err
+	}
+	explodedArgs = append(explodedArgs, rangeExplodedArgs...)
+	rangeEndComparison, rangeExplodedArgs, err := BuildRangePreparedComparison(uniqueKeyColumns, rangeEndArgs, LessThanOrEqualsComparisonSign)
+	if err != nil {
+		return "", explodedArgs, err
+	}
+	explodedArgs = append(explodedArgs, rangeExplodedArgs...)
+
+	uniqueKeyColumnNames := duplicateNames(uniqueKeyColumns.Names())
+	uniqueKeyColumnAscending := make([]string, len(uniqueKeyColumnNames), len(uniqueKeyColumnNames))
+	uniqueKeyColumnDescending := make([]string, len(uniqueKeyColumnNames), len(uniqueKeyColumnNames))
+	for i, column := range uniqueKeyColumns.Columns() {
+		uniqueKeyColumnNames[i] = EscapeName(uniqueKeyColumnNames[i])
+		if column.Type == EnumColumnType {
+			uniqueKeyColumnAscending[i] = fmt.Sprintf("concat(%s) asc", uniqueKeyColumnNames[i])
+			uniqueKeyColumnDescending[i] = fmt.Sprintf("concat(%s) desc", uniqueKeyColumnNames[i])
+		} else {
+			uniqueKeyColumnAscending[i] = fmt.Sprintf("%s asc", uniqueKeyColumnNames[i])
+			uniqueKeyColumnDescending[i] = fmt.Sprintf("%s desc", uniqueKeyColumnNames[i])
+		}
+	}
+	result = fmt.Sprintf(`
+		select /* gh-ost %s.%s %s */
+				%s
+			from
+				%s.%s
+			where %s and %s
+			order by
+				%s
+			limit 1
+			offset %d
+		`, databaseName, tableName, hint,
+		strings.Join(uniqueKeyColumnNames, ", "),
+		databaseName, tableName,
+		rangeStartComparison, rangeEndComparison,
+		strings.Join(uniqueKeyColumnAscending, ", "),
+		(chunkSize - 1),
+	)
+	return result, explodedArgs, nil
+}
+
+func BuildUniqueKeyRangeEndPreparedQueryViaTemptable(databaseName, tableName string, uniqueKeyColumns *ColumnList, rangeStartArgs, rangeEndArgs []interface{}, chunkSize int64, includeRangeStartValues bool, hint string) (result string, explodedArgs []interface{}, err error) {
 	if uniqueKeyColumns.Len() == 0 {
 		return "", explodedArgs, fmt.Errorf("Got 0 columns in BuildUniqueKeyRangeEndPreparedQuery")
 	}
@@ -283,7 +283,7 @@ func TestBuildUniqueKeyRangeEndPreparedQuery(t *testing.T) {
 	rangeStartArgs := []interface{}{3, 17}
 	rangeEndArgs := []interface{}{103, 117}
 
-	query, explodedArgs, err := BuildUniqueKeyRangeEndPreparedQuery(databaseName, originalTableName, uniqueKeyColumns, rangeStartArgs, rangeEndArgs, chunkSize, false, "test")
+	query, explodedArgs, err := BuildUniqueKeyRangeEndPreparedQueryViaTemptable(databaseName, originalTableName, uniqueKeyColumns, rangeStartArgs, rangeEndArgs, chunkSize, false, "test")
 	test.S(t).ExpectNil(err)
 	expected := `
 		select /* gh-ost mydb.tbl test */ name, position
@@ -20,6 +20,8 @@ const (
 	DateTimeColumnType = iota
 	EnumColumnType = iota
 	MediumIntColumnType = iota
+	JSONColumnType = iota
+	FloatColumnType = iota
 )
 
 const maxMediumintUnsigned int32 = 16777215

localtests/datetime-submillis-zeroleading/create.sql (new file, 27 lines)
@@ -0,0 +1,27 @@
+drop table if exists gh_ost_test;
+create table gh_ost_test (
+  id int auto_increment,
+  i int not null,
+  dt0 datetime(6),
+  dt1 datetime(6),
+  ts2 timestamp(6),
+  updated tinyint unsigned default 0,
+  primary key(id),
+  key i_idx(i)
+) auto_increment=1;
+
+drop event if exists gh_ost_test;
+delimiter ;;
+create event gh_ost_test
+  on schedule every 1 second
+  starts current_timestamp
+  ends current_timestamp + interval 60 second
+  on completion not preserve
+  enable
+  do
+begin
+  insert into gh_ost_test values (null, 11, '2016-10-31 11:22:33.0123', now(), '2016-10-31 11:22:33.0369', 0);
+  update gh_ost_test set dt1='2016-10-31 11:22:33.0246', updated = 1 where i = 11 order by id desc limit 1;
+
+  insert into gh_ost_test values (null, 13, '2016-10-31 11:22:33.0123', '2016-10-31 11:22:33.789', '2016-10-31 11:22:33.0369', 0);
+end ;;
@ -3,9 +3,9 @@ create table gh_ost_test (
|
|||||||
id int unsigned auto_increment,
|
id int unsigned auto_increment,
|
||||||
i int not null,
|
i int not null,
|
||||||
ts0 timestamp default current_timestamp,
|
ts0 timestamp default current_timestamp,
|
||||||
ts1 timestamp,
|
ts1 timestamp null,
|
||||||
dt2 datetime,
|
dt2 datetime,
|
||||||
t datetime,
|
t datetime default current_timestamp,
|
||||||
updated tinyint unsigned default 0,
|
updated tinyint unsigned default 0,
|
||||||
primary key(id, t),
|
primary key(id, t),
|
||||||
key i_idx(i)
|
key i_idx(i)
|
||||||
|
@ -1 +1 @@
|
|||||||
--alter="change column t t timestamp not null"
|
--alter="change column t t timestamp default current_timestamp"
|
||||||
|
@@ -3,9 +3,9 @@ create table gh_ost_test (
   id int unsigned auto_increment,
   i int not null,
   ts0 timestamp default current_timestamp,
-  ts1 timestamp,
+  ts1 timestamp null,
   dt2 datetime,
-  t datetime,
+  t datetime null,
   updated tinyint unsigned default 0,
   primary key(id),
   key i_idx(i)
@@ -1 +1 @@
---alter="change column t t timestamp not null"
+--alter="change column t t timestamp null"
@@ -2,7 +2,7 @@ drop table if exists gh_ost_test;
 create table gh_ost_test (
   id int auto_increment,
   i int not null,
-  e enum('red', 'green', 'blue', 'orange') null default null collate 'utf8_bin',
+  e enum('red', 'green', 'blue', 'orange') not null default 'red' collate 'utf8_bin',
   primary key(id, e)
 ) auto_increment=1;

11 localtests/fail-float-unique-key/create.sql Normal file
@@ -0,0 +1,11 @@
+drop table if exists gh_ost_test;
+create table gh_ost_test (
+  f float,
+  i int not null,
+  ts timestamp default current_timestamp,
+  dt datetime,
+  key i_idx(i),
+  unique key f_uidx(f)
+) auto_increment=1;
+
+drop event if exists gh_ost_test;
1 localtests/fail-float-unique-key/expect_failure Normal file
@@ -0,0 +1 @@
+No shared unique key can be found
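This test exists because floating-point columns cannot anchor chunked iteration: beyond the mantissa's precision, distinct values compare equal, so a FLOAT-only unique key cannot reliably delimit copy ranges. A standalone illustration of the collision:

```go
package main

import "fmt"

func main() {
	a := float32(16777216) // 2^24: the float32 significand limit
	b := a + 1             // 16777217 is not representable; rounds back to 2^24
	fmt.Println(a == b)    // true: two "distinct" key values collide
}
```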
1 localtests/fail-no-unique-key/extra_args Normal file
@@ -0,0 +1 @@
+--alter="add column v varchar(32)"
7 localtests/fail-password-length/create.sql Normal file
@@ -0,0 +1,7 @@
+drop table if exists gh_ost_test;
+create table gh_ost_test (
+  id int auto_increment,
+  i int not null,
+  ts timestamp,
+  primary key(id)
+) auto_increment=1;
1 localtests/fail-password-length/expect_failure Normal file
@@ -0,0 +1 @@
+MySQL replication length limited to 32 characters
1 localtests/fail-password-length/extra_args Normal file
@@ -0,0 +1 @@
+--password="0123456789abcdefghij0123456789abcdefghijxx"
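The expected failure above guards a real MySQL limitation: replication passwords longer than 32 characters are truncated by the server, so it is safer to reject them up front. A sketch of such a check — `validateReplicationPassword` is a hypothetical helper, not gh-ost's exact code:

```go
package main

import (
	"errors"
	"fmt"
)

// validateReplicationPassword is an illustrative guard against MySQL's
// 32-character replication password limit.
func validateReplicationPassword(password string) error {
	if len(password) > 32 {
		return errors.New("MySQL replication length limited to 32 characters")
	}
	return nil
}

func main() {
	// The 42-character password from the test's extra_args fails the guard.
	err := validateReplicationPassword("0123456789abcdefghij0123456789abcdefghijxx")
	fmt.Println(err)
}
```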
21 localtests/json57/create.sql Normal file
@@ -0,0 +1,21 @@
+drop table if exists gh_ost_test;
+create table gh_ost_test (
+  id int auto_increment,
+  j json,
+  primary key(id)
+) auto_increment=1;
+
+drop event if exists gh_ost_test;
+delimiter ;;
+create event gh_ost_test
+  on schedule every 1 second
+  starts current_timestamp
+  ends current_timestamp + interval 60 second
+  on completion not preserve
+  enable
+  do
+begin
+  insert into gh_ost_test values (null, '"sometext"');
+  insert into gh_ost_test values (null, '{"key":"val"}');
+  insert into gh_ost_test values (null, '{"is-it": true, "count": 3, "elements": []}');
+end ;;
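The three inserted values cover distinct JSON shapes MySQL 5.7 accepts: a top-level scalar, a flat object, and an object with a nested array. A quick standalone check (not part of the test harness) confirming each is a valid JSON document:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	values := []string{
		`"sometext"`,
		`{"key":"val"}`,
		`{"is-it": true, "count": 3, "elements": []}`,
	}
	for _, v := range values {
		var doc interface{}
		err := json.Unmarshal([]byte(v), &doc)
		fmt.Printf("%-45s valid=%v\n", v, err == nil)
	}
}
```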
27 localtests/json57dml/create.sql Normal file
@@ -0,0 +1,27 @@
+drop table if exists gh_ost_test;
+create table gh_ost_test (
+  id int auto_increment,
+  i int not null,
+  updated tinyint not null default 0,
+  j json,
+  primary key(id)
+) auto_increment=1;
+
+drop event if exists gh_ost_test;
+delimiter ;;
+create event gh_ost_test
+  on schedule every 1 second
+  starts current_timestamp
+  ends current_timestamp + interval 60 second
+  on completion not preserve
+  enable
+  do
+begin
+  insert into gh_ost_test (id, i, j) values (null, 11, '"sometext"');
+  insert into gh_ost_test (id, i, j) values (null, 13, '{"key":"val"}');
+  insert into gh_ost_test (id, i, j) values (null, 17, '{"is-it": true, "count": 3, "elements": []}');
+
+  update gh_ost_test set j = '{"updated": 11}', updated = 1 where i = 11 and updated = 0;
+  update gh_ost_test set j = json_set(j, '$.count', 13, '$.id', id), updated = 1 where i = 13 and updated = 0;
+  delete from gh_ost_test where i = 17;
+end ;;
26 localtests/modify-change-case-pk/create.sql Normal file
@@ -0,0 +1,26 @@
+drop table if exists gh_ost_test;
+create table gh_ost_test (
+  id int auto_increment,
+  c1 int not null default 0,
+  c2 int not null default 0,
+  primary key (id)
+) auto_increment=1;
+
+drop event if exists gh_ost_test;
+delimiter ;;
+create event gh_ost_test
+  on schedule every 1 second
+  starts current_timestamp
+  ends current_timestamp + interval 60 second
+  on completion not preserve
+  enable
+  do
+begin
+  insert ignore into gh_ost_test values (1, 11, 23);
+  insert ignore into gh_ost_test values (2, 13, 23);
+  insert into gh_ost_test values (null, 17, 23);
+  set @last_insert_id := last_insert_id();
+  update gh_ost_test set c1=c1+@last_insert_id, c2=c2+@last_insert_id where id=@last_insert_id order by id desc limit 1;
+  delete from gh_ost_test where id=1;
+  delete from gh_ost_test where c1=13; -- id=2
+end ;;
1 localtests/modify-change-case-pk/expect_failure Normal file
@@ -0,0 +1 @@
+No shared unique key can be found after ALTER
1 localtests/modify-change-case-pk/extra_args Normal file
@@ -0,0 +1 @@
+--alter="modify ID int"
26 localtests/modify-change-case/create.sql Normal file
@@ -0,0 +1,26 @@
+drop table if exists gh_ost_test;
+create table gh_ost_test (
+  id int auto_increment,
+  c1 int not null default 0,
+  c2 int not null default 0,
+  primary key (id)
+) auto_increment=1;
+
+drop event if exists gh_ost_test;
+delimiter ;;
+create event gh_ost_test
+  on schedule every 1 second
+  starts current_timestamp
+  ends current_timestamp + interval 60 second
+  on completion not preserve
+  enable
+  do
+begin
+  insert ignore into gh_ost_test values (1, 11, 23);
+  insert ignore into gh_ost_test values (2, 13, 23);
+  insert into gh_ost_test values (null, 17, 23);
+  set @last_insert_id := last_insert_id();
+  update gh_ost_test set c1=c1+@last_insert_id, c2=c2+@last_insert_id where id=@last_insert_id order by id desc limit 1;
+  delete from gh_ost_test where id=1;
+  delete from gh_ost_test where c1=13; -- id=2
+end ;;
1 localtests/modify-change-case/extra_args Normal file
@@ -0,0 +1 @@
+--alter="modify C2 int not null default 0"
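Both modify-change-case tests alter a column under a differently-cased name (`C2`, `ID`) than the table declares (`c2`, `id`); MySQL treats column names case-insensitively, so the migrator must match them the same way when mapping shared columns. A minimal sketch, assuming a hypothetical `findColumn` helper:

```go
package main

import (
	"fmt"
	"strings"
)

// findColumn matches a column name case-insensitively, returning the
// canonical name as declared by the table.
func findColumn(tableColumns []string, name string) (string, bool) {
	for _, c := range tableColumns {
		if strings.EqualFold(c, name) {
			return c, true
		}
	}
	return "", false
}

func main() {
	cols := []string{"id", "c1", "c2"}
	fmt.Println(findColumn(cols, "C2")) // c2 true
}
```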
@@ -90,6 +90,7 @@ test_single() {
     --initially-drop-socket-file \
     --test-on-replica \
     --default-retries=1 \
+    --chunk-size=10 \
     --verbose \
     --debug \
     --stack \
@@ -3,7 +3,7 @@ create table gh_ost_test (
   id int auto_increment,
   i int not null,
   ts0 timestamp default current_timestamp,
-  ts1 timestamp,
+  ts1 timestamp default current_timestamp,
   dt2 datetime,
   t datetime,
   updated tinyint unsigned default 0,
@@ -3,8 +3,8 @@ create table gh_ost_test (
   id int auto_increment,
   i int not null,
   ts0 timestamp default current_timestamp,
-  ts1 timestamp,
-  ts2 timestamp,
+  ts1 timestamp default current_timestamp,
+  ts2 timestamp default current_timestamp,
   updated tinyint unsigned default 0,
   primary key(id),
   key i_idx(i)
@@ -3,7 +3,7 @@ create table gh_ost_test (
   id int auto_increment,
   i int not null,
   ts0 timestamp default current_timestamp,
-  ts1 timestamp,
+  ts1 timestamp default current_timestamp,
   dt2 datetime,
   t datetime,
   updated tinyint unsigned default 0,
@@ -1 +1 @@
---alter="change column t t timestamp not null"
+--alter="change column t t timestamp not null default current_timestamp"
@@ -3,8 +3,8 @@ create table gh_ost_test (
   id int auto_increment,
   i int not null,
   ts0 timestamp default current_timestamp,
-  ts1 timestamp,
-  ts2 timestamp,
+  ts1 timestamp default current_timestamp,
+  ts2 timestamp default current_timestamp,
   updated tinyint unsigned default 0,
   primary key(id),
   key i_idx(i)
2 vendor/github.com/siddontang/go-mysql/replication/row_event.go generated vendored
@@ -666,7 +666,7 @@ func decodeDatetime2(data []byte, dec uint16) (interface{}, int, error) {
 	hour := int((hms >> 12))

 	if secPart != 0 {
-		return fmt.Sprintf("%04d-%02d-%02d %02d:%02d:%02d.%d", year, month, day, hour, minute, second, secPart), n, nil // commented by Shlomi Noach. Yes I know about `git blame`
+		return fmt.Sprintf("%04d-%02d-%02d %02d:%02d:%02d.%06d", year, month, day, hour, minute, second, secPart), n, nil // commented by Shlomi Noach. Yes I know about `git blame`
 	}
 	return fmt.Sprintf("%04d-%02d-%02d %02d:%02d:%02d", year, month, day, hour, minute, second), n, nil // commented by Shlomi Noach. Yes I know about `git blame`
 }
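This one-character change is the substance of the datetime-submillis-zeroleading test above: with `%d`, a fractional-seconds part whose leading digits are zero prints without them, corrupting the decoded value. A minimal standalone demonstration:

```go
package main

import "fmt"

func main() {
	// A datetime(6) value of 11:22:33.0123 stores secPart = 12300 microseconds.
	secPart := 12300
	fmt.Printf("%02d.%d\n", 33, secPart)   // 33.12300  -- leading zero lost; reads as .12300 s
	fmt.Printf("%02d.%06d\n", 33, secPart) // 33.012300 -- zero-padded to six digits; correct
}
```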