Merge branch 'master' into nm-refactor-migration-context

Nikhil Mathew 2017-11-14 10:29:49 -08:00 committed by GitHub
commit 538833ea84
17 changed files with 173 additions and 99 deletions

View File

@ -2,6 +2,10 @@
A more in-depth discussion of various `gh-ost` command line flags: implementation, implication, use cases.
### allow-master-master
See [`--assume-master-host`](#assume-master-host).
### allow-on-master
By default, `gh-ost` would like you to connect to a replica, from where it figures out the master by itself. This wiring is required should your master execute using `binlog_format=STATEMENT`.
@ -14,20 +18,20 @@ When your migration issues a column rename (`change column old_name new_name ...
`gh-ost` will print out what it thinks the _rename_ implied, but will not issue the migration unless you provide `--approve-renamed-columns`.
If you think `gh-ost` is mistaken and that there's actually no _rename_ involved, you may pass [`--skip-renamed-columns`](#skip-renamed-columns) instead. This will cause `gh-ost` to disassociate the column values; data will not be copied between those columns.
### assume-master-host
`gh-ost` infers the identity of the master server by crawling up the replication topology. You may explicitly tell `gh-ost` the identity of the master host via `--assume-master-host=the.master.com`. This is useful in:
- _master-master_ topologies (together with [`--allow-master-master`](#allow-master-master)), where `gh-ost` can arbitrarily pick one of the co-masters and you prefer that it picks a specific one
- _tungsten replicator_ topologies (together with [`--tungsten`](#tungsten)), where `gh-ost` is unable to crawl and detect the master
### assume-rbr
If you happen to _know_ your servers use RBR (Row Based Replication, i.e. `binlog_format=ROW`), you may specify `--assume-rbr`. This skips a verification step where `gh-ost` would issue a `STOP SLAVE; START SLAVE`.
Skipping this step means `gh-ost` would not need the `SUPER` privilege in order to operate.
You may want to use this on Amazon RDS.
### conf
@ -41,21 +45,25 @@ password=123456
### concurrent-rowcount
Defaults to `true`. See [`exact-rowcount`](#exact-rowcount)
### critical-load
A comma delimited list of `status-name=threshold` pairs, in the same format as [`--max-load`](#max-load).
`--critical-load` defines a threshold that, when met, `gh-ost` panics and bails out. The default behavior is to bail out immediately when meeting this threshold.
This may sometimes lead to migrations bailing out on a very short spike that, while impacting production and worth investigating, isn't reason enough to kill a 10 hour migration.
### critical-load-interval-millis
When `--critical-load-interval-millis` is specified (e.g. `--critical-load-interval-millis=2500`), `gh-ost` gives a second chance: when it meets `critical-load` threshold, it doesn't bail out. Instead, it starts a timer (in this example: `2.5` seconds) and re-checks `critical-load` when the timer expires. If `critical-load` is met again, `gh-ost` panics and bails out. If not, execution continues.
This is somewhat similar to a Nagios `n`-times test, where `n` in our case is always `2`.
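For illustration (the threshold and interval values here are arbitrary), the two flags combine like so:

```shell
# panic only if Threads_running still exceeds 1000 when re-checked 2.5 seconds after the first hit
--critical-load='Threads_running=1000' --critical-load-interval-millis=2500
```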
### cut-over
Optional. Default is `safe`. See more discussion in [`cut-over`](cut-over.md)
### discard-foreign-keys
@ -74,7 +82,7 @@ The `--dml-batch-size` flag controls the size of the batched write. Allowed valu
Why is this behavior configurable? Different workloads have different characteristics. Some workloads have very large writes, such that aggregating even `50` writes into a transaction makes for a significant transaction size. On other workloads write rate is high such that one just can't allow for a hundred more syncs to disk per second. The default value of `10` is a modest compromise that should probably work very well for most workloads. Your mileage may vary.
Noteworthy is that setting `--dml-batch-size` to a higher value _does not_ mean `gh-ost` blocks or waits on writes. The batch size is an upper limit on transaction size, not a minimal one. If `gh-ost` doesn't have "enough" events in the pipe, it does not wait on the binary log, it just writes what it already has. This conveniently suggests that if write load is light enough for `gh-ost` to only see a few events in the binary log at a given time, then it is also light enough for `gh-ost` to apply a fraction of the batch size.
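As a sketch (the socket path and values are illustrative), the batch size can be set at invocation time and changed while the migration runs, via the [interactive commands](interactive-commands.md) interface:

```shell
# at invocation time
--dml-batch-size=100 --serve-socket-file=/tmp/gh-ost.mytable.sock

# later, while the migration is running
echo "dml-batch-size=10" | nc -U /tmp/gh-ost.mytable.sock
```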
### exact-rowcount
@ -84,8 +92,8 @@ A `gh-ost` execution need to copy whatever rows you have in your existing table
`gh-ost` also supports the `--exact-rowcount` flag. When this flag is given, two things happen (see the sketch after this list):
- An initial, authoritative `select count(*) from your_table`.
  This query may take a long time to complete, but is performed before we begin the massive operations.
  When [`--concurrent-rowcount`](#concurrent-rowcount) is also specified, this runs in parallel to row copy.
  Note: [`--concurrent-rowcount`](#concurrent-rowcount) now defaults to `true`.
- A continuous update to the estimate as we make progress applying events.
  We heuristically update the number of rows based on the queries we process from the binlogs.
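A flag-only sketch to add to your usual invocation:

```shell
# --concurrent-rowcount defaults to true, so this alone gives an authoritative, concurrently-refreshed count
--exact-rowcount
```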
@ -95,6 +103,10 @@ While the ongoing estimated number of rows is still heuristic, it's almost exact
Without this parameter, migration is a _noop_: testing table creation and validity of migration, but not touching data.
### heartbeat-interval-millis
Default 100. See [`subsecond-lag`](subsecond-lag.md) for details.
### initially-drop-ghost-table
`gh-ost` maintains two tables while migrating: the _ghost_ table (which is synced from your original table and finally replaces it) and a changelog table, which is used internally for bookkeeping. By default, it panics and aborts if it sees those tables upon startup. Provide `--initially-drop-ghost-table` and `--initially-drop-old-table` to let `gh-ost` know it's OK to drop them beforehand.
@ -103,37 +115,55 @@ We think `gh-ost` should not take chances or make assumptions about the user's t
### initially-drop-old-table
See [`initially-drop-ghost-table`](#initially-drop-ghost-table)
### max-lag-millis
On a replication topology, this is perhaps the most important migration throttling factor: the maximum lag allowed for migration to work. If lag exceeds this value, migration throttles.
When using [Connect to replica, migrate on master](cheatsheet.md#a-connect-to-replica-migrate-on-master), this lag is primarily tested on the very replica `gh-ost` operates on. Lag is measured by checking the heartbeat events injected by `gh-ost` itself on the utility changelog table. That is, to measure this replica's lag, `gh-ost` doesn't need to issue `show slave status` nor have any external heartbeat mechanism.
When [`--throttle-control-replicas`](#throttle-control-replicas) is provided, throttling also considers lag on specified hosts. Lag measurements on listed hosts are done by querying `gh-ost`'s _changelog_ table, where `gh-ost` injects a heartbeat.
See also: [Sub-second replication lag throttling](subsecond-lag.md)
### max-load
List of metrics and threshold values; topping the threshold of any will cause the throttler to kick in. See also: [`throttling`](throttle.md#status-thresholds)
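For example (the thresholds are illustrative), throttle whenever either status variable tops its threshold:

```shell
--max-load='Threads_running=25,Threads_connected=1000'
```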
### migrate-on-replica
Typically `gh-ost` is used to migrate tables on a master. If you wish to only perform the migration in full on a replica, connect `gh-ost` to said replica and pass `--migrate-on-replica`. `gh-ost` will briefly connect to the master but otherwise issue no changes on the master. Migration will be fully executed on the replica, while making sure to maintain a small replication lag.
### postpone-cut-over-flag-file
Indicate a file name, such that the final [cut-over](cut-over.md) step does not take place as long as the file exists.
When this flag is set, `gh-ost` expects the file to exist on startup, or else tries to create it. `gh-ost` exits with error if the file does not exist and `gh-ost` is unable to create it.
With this flag set, the migration will cut-over upon deletion of the file or upon `cut-over` [interactive command](interactive-commands.md).
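A sketch of the typical flow (host, schema and file names are illustrative):

```shell
# start the migration; cut-over is held back for as long as the flag file exists
gh-ost \
  --host=replica.example.com --database=mydb --table=mytable \
  --alter='engine=innodb' \
  --postpone-cut-over-flag-file=/tmp/ghost.postpone.flag \
  --execute

# when you are ready to cut over, remove the file (or use the interactive command mentioned above)
rm /tmp/ghost.postpone.flag
```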
### replica-server-id
Defaults to 99999. If you run multiple migrations then you must provide a different, unique `--replica-server-id` for each `gh-ost` process.
Optionally involve the process ID, for example: `--replica-server-id=$((1000000000+$$))`.
It's on you to choose a number that does not collide with another `gh-ost` or another running replica.
See also: [`concurrent-migrations`](cheatsheet.md#concurrent-migrations) on the cheatsheet.
### skip-foreign-key-checks
By default `gh-ost` verifies no foreign keys exist on the migrated table. On servers with a large number of tables this check can take a long time. If you're absolutely certain no foreign keys exist (the table neither references nor is referenced by other tables) and wish to save the check time, provide `--skip-foreign-key-checks`.
### skip-renamed-columns
See [`approve-renamed-columns`](#approve-renamed-columns)
### test-on-replica
Issue the migration on a replica; do not modify data on master. Useful for validating, testing and benchmarking. See [`testing-on-replica`](testing-on-replica.md)
### throttle-control-replicas
Provide a comma delimited list of replicas; `gh-ost` will throttle when any of the given replicas lag beyond [`--max-lag-millis`](#max-lag-millis). The list can be queried and updated dynamically via [interactive commands](interactive-commands.md)
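For example (hostnames are illustrative; a port may be omitted and defaults to `3306`):

```shell
--throttle-control-replicas='replica1.example.com:3306,replica2.example.com'
```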
### throttle-http
@ -142,3 +172,7 @@ Provide a HTTP endpoint; `gh-ost` will issue `HEAD` requests on given URL and th
### timestamp-old-table
Makes the _old_ table include a timestamp value. The _old_ table is what the original table is renamed to at the end of a successful migration. For example, if the table is `gh_ost_test`, then the _old_ table would normally be `_gh_ost_test_del`. With `--timestamp-old-table` it would be, for example, `_gh_ost_test_20170221103147_del`.
### tungsten
See [`tungsten`](cheatsheet.md#tungsten) on the cheatsheet.

View File

@ -43,7 +43,7 @@ Both interfaces may serve at the same time. Both respond to simple text command,
### Querying for data
For commands that accept an argument as value, pass `?` (question mark) to _get_ current value rather than _set_ a new one.
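For example (the socket path is whatever was passed via `--serve-socket-file`), query the current chunk size instead of setting it:

```shell
echo "chunk-size=?" | nc -U /tmp/gh-ost.test.sock
```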
### Examples

View File

@ -24,15 +24,15 @@ Initial output lines may look like this:
2016-05-19 17:57:11 INFO connection validated on 127.0.0.1:3306
2016-05-19 17:57:11 INFO rotate to next log name: mysql-bin.002587
2016-05-19 17:57:11 INFO connection validated on 127.0.0.1:3306
2016-05-19 17:57:11 INFO Dropping table `mydb`.`_mytable_gst`
2016-05-19 17:57:11 INFO Table dropped
2016-05-19 17:57:11 INFO Dropping table `mydb`.`_mytable_old`
2016-05-19 17:57:11 INFO Table dropped
2016-05-19 17:57:11 INFO Creating ghost table `mydb`.`_mytable_gst`
2016-05-19 17:57:11 INFO Ghost table created
2016-05-19 17:57:11 INFO Altering ghost table `mydb`.`_mytable_gst`
2016-05-19 17:57:11 INFO Ghost table altered
2016-05-19 17:57:11 INFO Dropping table `mydb`.`_mytable_osc`
2016-05-19 17:57:11 INFO Table dropped
2016-05-19 17:57:11 INFO Creating changelog table `mydb`.`_mytable_osc`
2016-05-19 17:57:11 INFO Changelog table created

View File

@ -16,7 +16,7 @@ Use of triggers simplifies a lot of the flow in doing a live table migration, bu
Triggers are stored routines which are invoked on a per-row operation upon `INSERT`, `DELETE`, `UPDATE` on a table.
They were introduced in MySQL `5.0`.
A trigger may contain a set of queries, and these queries run in the same transaction space as the query that manipulates the table. This makes for atomicity of both the original operation on the table and the trigger-invoked operations.
### Triggers, overhead ### Triggers, overhead

View File

@ -11,6 +11,10 @@ import (
"regexp" "regexp"
"strings" "strings"
"time" "time"
gosql "database/sql"
"github.com/github/gh-ost/go/mysql"
"github.com/outbrain/golib/log"
) )
var ( var (
@ -33,6 +37,15 @@ func FileExists(fileName string) bool {
	return false
}
// TouchFile creates an empty file with the given name, if such a file does not already exist
func TouchFile(fileName string) error {
	f, err := os.OpenFile(fileName, os.O_APPEND|os.O_CREATE, 0755)
	if err != nil {
		return err
	}
	defer f.Close()
	return nil
}
// StringContainsAll returns true if `s` contains all non empty given `substrings`
// The function returns `false` if no non-empty arguments are given.
func StringContainsAll(s string, substrings ...string) bool {
@ -50,3 +63,25 @@ func StringContainsAll(s string, substrings ...string) bool {
	}
	return nonEmptyStringsFound
}
// ValidateConnection issues a simple can-connect to MySQL and returns the server version.
// It verifies that the server reports the port (or extra_port) we connected on.
func ValidateConnection(db *gosql.DB, connectionConfig *mysql.ConnectionConfig) (string, error) {
	query := `select @@global.port, @@global.version`
	var port, extraPort int
	var version string
	if err := db.QueryRow(query).Scan(&port, &version); err != nil {
		return "", err
	}
	extraPortQuery := `select @@global.extra_port`
	if err := db.QueryRow(extraPortQuery).Scan(&extraPort); err != nil {
		// swallow this error. not all servers support extra_port
	}
	if connectionConfig.Key.Port == port || (extraPort > 0 && connectionConfig.Key.Port == extraPort) {
		log.Infof("connection validated on %+v", connectionConfig.Key)
		return version, nil
	} else if extraPort == 0 {
		return "", fmt.Errorf("Unexpected database port reported: %+v", port)
	} else {
		return "", fmt.Errorf("Unexpected database port reported: %+v / extra_port: %+v", port, extraPort)
	}
}

View File

@ -55,12 +55,12 @@ func NewGoMySQLReader(migrationContext *base.MigrationContext) (binlogReader *Go
// ConnectBinlogStreamer
func (this *GoMySQLReader) ConnectBinlogStreamer(coordinates mysql.BinlogCoordinates) (err error) {
	if coordinates.IsEmpty() {
		return log.Errorf("Empty coordinates at ConnectBinlogStreamer()")
	}
	this.currentCoordinates = coordinates
	log.Infof("Connecting binlog streamer at %+v", this.currentCoordinates)
	// Start sync with specified binlog file and position
	this.binlogStreamer, err = this.binlogSyncer.StartSync(gomysql.Position{this.currentCoordinates.LogFile, uint32(this.currentCoordinates.LogPos)})
	return err
@ -112,7 +112,7 @@ func (this *GoMySQLReader) handleRowsEvent(ev *replication.BinlogEvent, rowsEven
			}
		}
		// The channel will do the throttling. Whoever is reading from the channel
		// decides whether action is taken synchronously (meaning we wait before
		// next iteration) or asynchronously (we keep pushing more events)
		// In reality, reads will be synchronous
		entriesChannel <- binlogEntry

View File

@ -46,7 +46,7 @@ func main() {
	migrationContext := base.NewMigrationContext()
	flag.StringVar(&migrationContext.InspectorConnectionConfig.Key.Hostname, "host", "127.0.0.1", "MySQL hostname (preferably a replica, not the master)")
	flag.StringVar(&migrationContext.AssumeMasterHostname, "assume-master-host", "", "(optional) explicitly tell gh-ost the identity of the master. Format: some.host.com[:port] This is useful in master-master setups where you wish to pick an explicit master, or in a tungsten-replicator where gh-ost is unable to determine the master")
	flag.IntVar(&migrationContext.InspectorConnectionConfig.Key.Port, "port", 3306, "MySQL port (preferably a replica, not the master)")
	flag.StringVar(&migrationContext.CliUser, "user", "", "MySQL user")
	flag.StringVar(&migrationContext.CliPassword, "password", "", "MySQL password")
@ -121,6 +121,7 @@ func main() {
	version := flag.Bool("version", false, "Print version & exit")
	checkFlag := flag.Bool("check-flag", false, "Check if another flag exists/supported. This allows for cross-version scripting. Exits with 0 when all additional provided flags exist, nonzero otherwise. You must provide (dummy) values for flags that require a value. Example: gh-ost --check-flag --cut-over-lock-timeout-seconds --nice-ratio 0")
	flag.StringVar(&migrationContext.ForceTmpTableName, "force-table-names", "", "table name prefix to be used on the temporary tables")
	flag.CommandLine.SetOutput(os.Stdout)
	flag.Parse()
@ -128,7 +129,7 @@ func main() {
		return
	}
	if *help {
		fmt.Fprintf(os.Stdout, "Usage of gh-ost:\n")
		flag.PrintDefaults()
		return
	}

View File

@ -55,12 +55,14 @@ func (this *Applier) InitDBConnections() (err error) {
		return err
	}
	this.singletonDB.SetMaxOpenConns(1)
	version, err := base.ValidateConnection(this.db, this.connectionConfig)
	if err != nil {
		return err
	}
	if _, err := base.ValidateConnection(this.singletonDB, this.connectionConfig); err != nil {
		return err
	}
	this.migrationContext.ApplierMySQLVersion = version
	if err := this.validateAndReadTimeZone(); err != nil {
		return err
	}
@ -76,20 +78,6 @@ func (this *Applier) InitDBConnections() (err error) {
	return nil
}
// validateConnection issues a simple can-connect to MySQL
func (this *Applier) validateConnection(db *gosql.DB) error {
query := `select @@global.port, @@global.version`
var port int
if err := db.QueryRow(query).Scan(&port, &this.migrationContext.ApplierMySQLVersion); err != nil {
return err
}
if port != this.connectionConfig.Key.Port {
return fmt.Errorf("Unexpected database port reported: %+v", port)
}
log.Infof("connection validated on %+v", this.connectionConfig.Key)
return nil
}
// validateAndReadTimeZone potentially reads server time-zone
func (this *Applier) validateAndReadTimeZone() error {
	query := `select @@global.time_zone`
@ -227,7 +215,7 @@ func (this *Applier) dropTable(tableName string) error {
		sql.EscapeName(this.migrationContext.DatabaseName),
		sql.EscapeName(tableName),
	)
	log.Infof("Dropping table %s.%s",
		sql.EscapeName(this.migrationContext.DatabaseName),
		sql.EscapeName(tableName),
	)
@ -397,7 +385,7 @@ func (this *Applier) ReadMigrationRangeValues() error {
// CalculateNextIterationRangeEndValues reads the next-iteration-range-end unique key values,
// which will be used for copying the next chunk of rows. It returns "false" if there is
// no further chunk to work through, i.e. we're past the last chunk and are done with
// iterating the range (and thus done with copying row chunks)
func (this *Applier) CalculateNextIterationRangeEndValues() (hasFurtherRange bool, err error) {
	this.migrationContext.MigrationIterationRangeMinValues = this.migrationContext.MigrationIterationRangeMaxValues
	if this.migrationContext.MigrationIterationRangeMinValues == nil {

View File

@ -202,20 +202,14 @@ func (this *Inspector) validateConnection() error {
	if len(this.connectionConfig.Password) > mysql.MaxReplicationPasswordLength {
		return fmt.Errorf("MySQL replication length limited to 32 characters. See https://dev.mysql.com/doc/refman/5.7/en/assigning-passwords.html")
	}
	query := `select @@global.port, @@global.version`
	var port int
	if err := this.db.QueryRow(query).Scan(&port, &this.migrationContext.InspectorMySQLVersion); err != nil {
		return err
	}
	if port != this.connectionConfig.Key.Port {
		return fmt.Errorf("Unexpected database port reported: %+v", port)
	}
	log.Infof("connection validated on %+v", this.connectionConfig.Key)
	return nil
	version, err := base.ValidateConnection(this.db, this.connectionConfig)
	this.migrationContext.InspectorMySQLVersion = version
	return err
}
// validateGrants verifies the user by which we're executing has necessary grants
// to do its thing.
func (this *Inspector) validateGrants() error {
	query := `show /* gh-ost */ grants for current_user()`
	foundAll := false
@ -274,7 +268,7 @@ func (this *Inspector) validateGrants() error {
// restartReplication is required so that we are _certain_ the binlog format and
// row image settings have actually been applied to the replication thread.
// It is entirely possible, for example, that the replication is using 'STATEMENT'
// binlog format even as the variable says 'ROW'
func (this *Inspector) restartReplication() error {
	log.Infof("Restarting replication on %s:%d to make sure binlog settings apply to replication thread", this.connectionConfig.Key.Hostname, this.connectionConfig.Key.Port)
@ -390,7 +384,7 @@ func (this *Inspector) validateLogSlaveUpdates() error {
	}
	if this.migrationContext.InspectorIsAlsoApplier() {
		log.Warningf("log_slave_updates not found on %s:%d, but executing directly on master, so I'm proceeding", this.connectionConfig.Key.Hostname, this.connectionConfig.Key.Port)
		return nil
	}

View File

@ -182,7 +182,7 @@ func (this *Migrator) canStopStreaming() bool {
// onChangelogStateEvent is called when a binlog event operation on the changelog table is intercepted.
func (this *Migrator) onChangelogStateEvent(dmlEvent *binlog.BinlogDMLEvent) (err error) {
	// Hey, I created the changelog table, I know the type of columns it has!
	if hint := dmlEvent.NewColumnValues.StringColumn(2); hint != "state" {
		return nil
	}
@ -268,6 +268,18 @@ func (this *Migrator) countTableRows() (err error) {
	return countRowsFunc()
}
func (this *Migrator) createFlagFiles() (err error) {
	if this.migrationContext.PostponeCutOverFlagFile != "" {
		if !base.FileExists(this.migrationContext.PostponeCutOverFlagFile) {
			if err := base.TouchFile(this.migrationContext.PostponeCutOverFlagFile); err != nil {
				return log.Errorf("--postpone-cut-over-flag-file indicated by gh-ost is unable to create said file: %s", err.Error())
			}
			log.Infof("Created postpone-cut-over-flag-file: %s", this.migrationContext.PostponeCutOverFlagFile)
		}
	}
	return nil
}
// Migrate executes the complete migration logic. This is *the* major gh-ost function.
func (this *Migrator) Migrate() (err error) {
	log.Infof("Migrating %s.%s", sql.EscapeName(this.migrationContext.DatabaseName), sql.EscapeName(this.migrationContext.OriginalTableName))
@ -304,6 +316,9 @@ func (this *Migrator) Migrate() (err error) {
	if err := this.initiateApplier(); err != nil {
		return err
	}
	if err := this.createFlagFiles(); err != nil {
		return err
	}
	initialLag, _ := this.inspector.getReplicationLag()
	log.Infof("Waiting for ghost table to be migrated. Current lag is %+v", initialLag)
@ -380,7 +395,7 @@ func (this *Migrator) ExecOnFailureHook() (err error) {
func (this *Migrator) handleCutOverResult(cutOverError error) (err error) {
	if this.migrationContext.TestOnReplica {
		// We're merely testing, we don't want to keep this state. Rollback the renames as much as possible
		this.applier.RenameTablesRollback()
	}
	if cutOverError == nil {
@ -738,7 +753,7 @@ func (this *Migrator) initiateStatus() error {
// printMigrationStatusHint prints a detailed configuration dump, that is useful
// to keep in mind; such as the name of migrated table, throttle params etc.
// This gets printed at beginning and end of migration, every 10 minutes throughout
// migration, and as response to the "status" interactive command.
func (this *Migrator) printMigrationStatusHint(writers ...io.Writer) {
	w := io.MultiWriter(writers...)
	fmt.Fprintln(w, fmt.Sprintf("# Migrating %s.%s; Ghost table is %s.%s",
@ -816,7 +831,7 @@ func (this *Migrator) printMigrationStatusHint(writers ...io.Writer) {
	}
}
// printStatus prints the progress status, and optionally additionally detailed
// dump of configuration.
// `rule` indicates the type of output expected.
// By default the status is written to standard output, but other writers can

View File

@ -147,7 +147,7 @@ sup # Print a short status message
coordinates # Print the currently inspected coordinates
chunk-size=<newsize> # Set a new chunk-size
dml-batch-size=<newsize> # Set a new dml-batch-size
nice-ratio=<ratio> # Set a new nice-ratio, immediate sleep after each row-copy operation, float (examples: 0 is aggressive, 0.7 adds 70% runtime, 1.0 doubles runtime, 2.0 triples runtime, ...)
critical-load=<load> # Set a new set of max-load thresholds
max-lag-millis=<max-lag> # Set a new replication lag threshold
replication-lag-query=<query> # Set a new query that determines replication lag (no quotes)
@ -305,8 +305,8 @@ help # This message
		return NoPrintStatusRule, err
	}
	if arg != "" && arg != this.migrationContext.OriginalTableName {
		// User explicitly provided table name. This is a courtesy protection mechanism
		err := fmt.Errorf("User commanded 'unpostpone' on %s, but migrated table is %s; ignoring request.", arg, this.migrationContext.OriginalTableName)
		return NoPrintStatusRule, err
	}
	if atomic.LoadInt64(&this.migrationContext.IsPostponingCutOver) > 0 {

View File

@ -107,7 +107,7 @@ func (this *EventsStreamer) InitDBConnections() (err error) {
	if this.db, err = mysql.GetDB(EventsStreamerUri); err != nil {
		return err
	}
	if _, err := base.ValidateConnection(this.db, this.connectionConfig); err != nil {
		return err
	}
	if err := this.readCurrentBinlogCoordinates(); err != nil {
@ -133,20 +133,6 @@ func (this *EventsStreamer) initBinlogReader(binlogCoordinates *mysql.BinlogCoor
	return nil
}
// validateConnection issues a simple can-connect to MySQL
func (this *EventsStreamer) validateConnection() error {
query := `select @@global.port`
var port int
if err := this.db.QueryRow(query).Scan(&port); err != nil {
return err
}
if port != this.connectionConfig.Key.Port {
return fmt.Errorf("Unexpected database port reported: %+v", port)
}
log.Infof("connection validated on %+v", this.connectionConfig.Key)
return nil
}
func (this *EventsStreamer) GetCurrentBinlogCoordinates() *mysql.BinlogCoordinates {
	return this.binlogReader.GetCurrentBinlogCoordinates()
}

View File

@ -38,7 +38,7 @@ var (
const frenoMagicHint = "freno"
// Throttler collects metrics related to throttling and makes an informed decision
// whether throttling should take place.
type Throttler struct {
	migrationContext *base.MigrationContext

View File

@ -57,7 +57,7 @@ func (this BinlogCoordinates) String() string {
	return this.DisplayString()
}
// Equals tests equality of this coordinate and another one.
func (this *BinlogCoordinates) Equals(other *BinlogCoordinates) bool {
	if other == nil {
		return false
@ -95,8 +95,8 @@ func (this *BinlogCoordinates) FileSmallerThan(other *BinlogCoordinates) bool {
	return this.LogFile < other.LogFile
}
// FileNumberDistance returns the numeric distance between this coordinate's file number and the other's.
// Effectively it means "how many rotates/FLUSHes would make this coordinate's file reach the other's"
func (this *BinlogCoordinates) FileNumberDistance(other *BinlogCoordinates) int {
	thisNumber, _ := this.FileNumber()
	otherNumber, _ := other.FileNumber()

View File

@ -15,7 +15,7 @@ const (
	DefaultInstancePort = 3306
)
// InstanceKey is an instance indicator, identified by hostname and port
type InstanceKey struct {
	Hostname string
	Port     int
@ -83,7 +83,7 @@ func (this *InstanceKey) IsValid() bool {
	return len(this.Hostname) > 0 && this.Port > 0
}
// DetachedKey returns an instance key whose hostname is detached: invalid, but recoverable
func (this *InstanceKey) DetachedKey() *InstanceKey {
	if this.IsDetached() {
		return this
@ -91,7 +91,7 @@ func (this *InstanceKey) DetachedKey() *InstanceKey {
	return &InstanceKey{Hostname: fmt.Sprintf("%s%s", detachHint, this.Hostname), Port: this.Port}
}
// ReattachedKey returns an instance key whose hostname is detached: invalid, but recoverable
func (this *InstanceKey) ReattachedKey() *InstanceKey {
	if !this.IsDetached() {
		return this

View File

@ -26,7 +26,7 @@ const (
const maxMediumintUnsigned int32 = 16777215
type TimezoneConversion struct {
	ToTimezone string
}
@ -35,7 +35,7 @@ type Column struct {
	IsUnsigned         bool
	Charset            string
	Type               ColumnType
	timezoneConversion *TimezoneConversion
}
func (this *Column) convertArg(arg interface{}) interface{} {
@ -172,7 +172,7 @@ func (this *ColumnList) GetColumnType(columnName string) ColumnType {
}
func (this *ColumnList) SetConvertDatetimeToTimestamp(columnName string, toTimezone string) {
	this.GetColumn(columnName).timezoneConversion = &TimezoneConversion{ToTimezone: toTimezone}
}
func (this *ColumnList) HasTimezoneConversion(columnName string) bool {

View File

@ -11,7 +11,8 @@ tests_path=$(dirname $0)
test_logfile=/tmp/gh-ost-test.log
ghost_binary=/tmp/gh-ost-test
exec_command_file=/tmp/gh-ost-test.bash
orig_content_output_file=/gh-ost-test.orig.content.csv
ghost_content_output_file=/gh-ost-test.ghost.content.csv
test_pattern="${1:-.}"
master_host=
@ -29,6 +30,10 @@ verify_master_and_replica() {
echo "Cannot verify gh-ost-test-mysql-replica" echo "Cannot verify gh-ost-test-mysql-replica"
exit 1 exit 1
fi fi
if [ "$(gh-ost-test-mysql-replica -e "select @@global.binlog_format" -ss)" != "ROW" ] ; then
echo "Expecting test replica to have binlog_format=ROW"
exit 1
fi
read replica_host replica_port <<< $(gh-ost-test-mysql-replica -e "select @@hostname, @@port" -ss) read replica_host replica_port <<< $(gh-ost-test-mysql-replica -e "select @@hostname, @@port" -ss)
} }
@ -42,6 +47,21 @@ echo_dot() {
echo -n "." echo -n "."
} }
start_replication() {
  gh-ost-test-mysql-replica -e "stop slave; start slave;"
  num_attempts=0
  while gh-ost-test-mysql-replica -e "show slave status\G" | grep Seconds_Behind_Master | grep -q NULL ; do
    ((num_attempts=num_attempts+1))
    if [ $num_attempts -gt 10 ] ; then
      echo
      echo "ERROR replication failure"
      exit 1
    fi
    echo_dot
    sleep 1
  done
}
test_single() {
  local test_name
  test_name="$1"
@ -49,7 +69,7 @@ test_single() {
echo -n "Testing: $test_name" echo -n "Testing: $test_name"
echo_dot echo_dot
gh-ost-test-mysql-replica -e "stop slave; start slave; do sleep(1)" start_replication
echo_dot echo_dot
gh-ost-test-mysql-master --default-character-set=utf8mb4 test < $tests_path/$test_name/create.sql gh-ost-test-mysql-master --default-character-set=utf8mb4 test < $tests_path/$test_name/create.sql
@ -82,13 +102,12 @@ test_single() {
    --table=gh_ost_test \
    --alter='engine=innodb' \
    --exact-rowcount \
    --assume-rbr \
    --initially-drop-old-table \
    --initially-drop-ghost-table \
    --throttle-query='select timestampdiff(second, min(last_update), now()) < 5 from _gh_ost_test_ghc' \
    --serve-socket-file=/tmp/gh-ost.test.sock \
    --initially-drop-socket-file \
    --postpone-cut-over-flag-file=/tmp/gh-ost.test.postpone.flag \
    --test-on-replica \
    --default-retries=1 \
    --chunk-size=10 \
@ -134,15 +153,17 @@ test_single() {
  fi
  echo_dot
  gh-ost-test-mysql-replica --default-character-set=utf8mb4 test -e "select ${orig_columns} from gh_ost_test ${order_by}" -ss > $orig_content_output_file
  gh-ost-test-mysql-replica --default-character-set=utf8mb4 test -e "select ${ghost_columns} from _gh_ost_test_gho ${order_by}" -ss > $ghost_content_output_file
  orig_checksum=$(cat $orig_content_output_file | md5sum)
  ghost_checksum=$(cat $ghost_content_output_file | md5sum)
  if [ "$orig_checksum" != "$ghost_checksum" ] ; then
    echo "ERROR $test_name: checksum mismatch"
    echo "---"
    diff $orig_content_output_file $ghost_content_output_file
    echo "---"
    echo "diff $orig_content_output_file $ghost_content_output_file"
    return 1
  fi
}