resolved conflict

Shlomi Noach 2016-07-26 11:57:01 +02:00
commit 6dbf5c31a2
12 changed files with 100 additions and 25 deletions

View File

@@ -0,0 +1,9 @@
> This is the place to report a bug, ask a question, suggest an enhancement.
> This is the place to make a discussion before creating a PR.
> Please label your Issue
> Please understand if this Issue is not addressed immediately or in a timeframe you were expecting.
> Thank you!

View File

@@ -0,0 +1,19 @@
## A Pull Request should be associated with an Issue.
> We wish to have discussions in Issues. A single issue may be targeted by multiple PRs.
> If you're offering a new feature or fixing anything, we'd like to know beforehand in Issues,
> and potentially we'll be able to point development in a particular direction.
Related issue: https://github.com/github/gh-ost/issues/0123456789
> Thank you! We are open to PRs, but please understand if for technical reasons we are unable to accept each and every PR.
### Description
This PR [briefly explain what it does]
> In case this PR introduced Go code changes:
- [ ] contributed code is using same conventions as original code
- [ ] code is formatted via `gofmt` (please avoid `goimports`)
- [ ] code is built via `bash build.sh`

View File

@@ -1,14 +1,14 @@
# gh-ost
#### GitHub's online schema migration for MySQL
#### GitHub's online schema migration for MySQL <img src="doc/images/gh-ost-logo-light-160.png" align="right">
`gh-ost` is a triggerless online schema migration solution for MySQL. It is testable and provides with pausability, dynamic control/reconfiguration, auditing, and many operational perks.
`gh-ost` is a triggerless online schema migration solution for MySQL. It is testable and provides pausability, dynamic control/reconfiguration, auditing, and many operational perks.
`gh-ost` produces a light workload on the master throughout the migration, decoupled from the existing workload on the migrated table.
It has been designed based on years of experience with existing solutions, and changes the paradigm of table migrations.
![gh-ost logo](doc/images/gh-ost-logo-light-160.png)
## How?
@@ -44,7 +44,7 @@ The [cheatsheet](doc/cheatsheet.md) has it all. You may be interested in invokin
Our tips:
- [Testing above all](testing-on-replica.md), try out `--test-on-replica` first few times. Better yet, make it continuous. We have multiple replicas where we iterate our entire fleet of production tables, migrating them one by one, checksumming the results, verifying migration is good.
- [Testing above all](doc/testing-on-replica.md), try out `--test-on-replica` first few times. Better yet, make it continuous. We have multiple replicas where we iterate our entire fleet of production tables, migrating them one by one, checksumming the results, verifying migration is good.
- For each master migration, first issue a _noop_
- Then issue the real thing via `--execute` (see the sketch below).
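A minimal sketch of this noop-then-execute flow (host, schema, table and `--alter` statement are illustrative; without `--execute`, `gh-ost` performs a dry run, as the tip above implies):

```bash
# dry run (noop): gh-ost connects, validates and simulates the migration without changing data
gh-ost --host=replica.example.com --database=mydb --table=mytable --alter="ADD COLUMN c1 INT NOT NULL DEFAULT 0"

# the real thing: the identical invocation, plus --execute
gh-ost --host=replica.example.com --database=mydb --table=mytable --alter="ADD COLUMN c1 INT NOT NULL DEFAULT 0" --execute
```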
@@ -63,11 +63,9 @@ Originally this was named `gh-osc`: GitHub Online Schema Change, in the likes of
But then a rare genetic mutation happened, and the `s` transformed into `t`. And that sent us down the path of trying to figure out a new acronym. Right now, `gh-ost` (pronounce: _Ghost_), stands for:
- GitHub Online Schema Transmogrifier/Translator/Transformer/Transfigurator
Pronounce: _ghost_
## License
`gh-ost` is licensed under the [MIT license](https://github.com/github/gh-ost/blob/documentation/LICENSE)
`gh-ost` is licensed under the [MIT license](https://github.com/github/gh-ost/blob/master/LICENSE)
`gh-ost` uses 3rd party libraries, each with their own license. These are found [here](https://github.com/github/gh-ost/tree/master/vendor).
@@ -75,7 +73,19 @@ Pronounce: _ghost_
`gh-ost` is released at a stable state, but with mileage to go. We are [open to pull requests](https://github.com/github/gh-ost/blob/master/.github/CONTRIBUTING.md). Please first discuss your intentions via [Issues](https://github.com/github/gh-ost/issues).
We develop `gh-ost` at GitHub and for the community. We may have different priorities than others. From time to time we may suggest a contribution that is not on our immediate roadmap but which may appeal to others.
## Download/binaries/source
`gh-ost` is now GA and stable.
`gh-ost` is available in binary format for Linux and Mac OS/X
[Download latest release here](https://github.com/github/gh-ost/releases/latest)
`gh-ost` is a Go project; it is built with Go 1.5 with "experimental vendor". Soon to migrate to Go 1.6. See and use [build file](https://github.com/github/gh-ost/blob/master/build.sh) for compiling it on your own.
Generally speaking, `master` branch is stable, but only [releases](https://github.com/github/gh-ost/releases) are to be used in production.
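A sketch of building from source under those constraints (the repository URL and `build.sh` are from this project; the `GO15VENDOREXPERIMENT` export is only needed if your Go 1.5 toolchain does not already enable vendoring):

```bash
git clone https://github.com/github/gh-ost.git
cd gh-ost
# Go 1.5 keeps vendored dependencies behind a feature flag; Go 1.6+ enables it by default
export GO15VENDOREXPERIMENT=1
bash build.sh
# binaries land under the buildpath defined in build.sh (/tmp/gh-ost by default)
```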
## Authors

View File

@@ -1,7 +1,7 @@
#!/bin/bash
#
#
RELEASE_VERSION="1.0.3"
RELEASE_VERSION="1.0.5"
buildpath=/tmp/gh-ost
target=gh-ost

View File

@@ -24,7 +24,7 @@ replication lag on to determine throttling
- `critical-load=<load>`: change critical load setting (exceeding given thresholds causes panic and abort)
- `nice-ratio=<ratio>`: change _nice_ ratio: 0 for aggressive, positive integer `n`: for any unit of time spent copying rows, spend `n` units of time sleeping.
- `throttle-query`: change throttle query
- `throttle-control-replicas`: change list of throttle-control replicas, these are replicas `gh-ost` will cehck
- `throttle-control-replicas`: change list of throttle-control replicas, these are replicas `gh-ost` will check
- `throttle`: force migration suspend
- `no-throttle`: cancel forced suspension (though other throttling reasons may still apply)
- `unpostpone`: when `gh-ost` is postponing the [cut-over](cut-over.md) phase, instruct it to stop postponing and proceed immediately to cut-over (see the example below).
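These commands can be sent to the unix socket (or TCP port) that `gh-ost` serves and advertises at startup. A hedged example, assuming the socket path is `/tmp/gh-ost.mydb.mytable.sock` and a netcat build with unix-socket support (`socat` works as well):

```bash
# force a suspend
echo throttle | nc -U /tmp/gh-ost.mydb.mytable.sock

# cancel the forced suspend
echo no-throttle | nc -U /tmp/gh-ost.mydb.mytable.sock

# stop postponing and proceed to cut-over
echo unpostpone | nc -U /tmp/gh-ost.mydb.mytable.sock
```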

View File

@@ -30,3 +30,13 @@ There is a `lock_wait_timeout` explicitly associated with the cut-over operation
This is where `gh-ost` shines. There is no need to kill it as you may be used to with other tools. You can reconfigure `gh-ost` [on the fly](https://github.com/github/gh-ost/blob/master/doc/interactive-commands.md) to be nicer.
You're always able to actively begin [throttling](throttle.md): just touch the `throttle-file` or `echo throttle` into `gh-ost`. Alternatively, reconfigure `max-load`, `nice-ratio` or `throttle-query` to thresholds that better suit your needs.
### What if my replicas don't use binary logs?
If the master is running Row Based Replication (RBR), point `gh-ost` at the master and specify `--allow-on-master`. See the [cheatsheet](cheatsheet.md).
If the master is running Statement Based Replication (SBR), you have no alternative but to reconfigure a replica with the following (see the sketch after this list):
- `log_bin`
- `log_slave_updates`
- `binlog_format=ROW`
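A sketch of checking and adjusting these on the replica (hostname is illustrative; `log_bin` and `log_slave_updates` are read-only and must be set in `my.cnf` followed by a restart, while `binlog_format` can also be changed at runtime):

```bash
# inspect the replica's current settings
mysql -h replica.example.com -e "SELECT @@global.log_bin, @@global.log_slave_updates, @@global.binlog_format"

# binlog_format may be switched dynamically (affects new sessions only); the other two require a restart
mysql -h replica.example.com -e "SET GLOBAL binlog_format = 'ROW'"
```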

View File

@@ -7,6 +7,8 @@ package base
import (
"fmt"
"os"
"regexp"
"strings"
"sync"
"sync/atomic"
@@ -34,6 +36,10 @@ const (
CutOverTwoStep = iota
)
var (
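// envVariableRegexp matches a value of the form "${SOME_ENV_VARIABLE}" and captures the variable name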
envVariableRegexp = regexp.MustCompile("[$][{](.*)[}]")
)
// MigrationContext has the general, global state of migration. It is used by
// all components throughout the migration process.
type MigrationContext struct {
@@ -71,6 +77,7 @@ type MigrationContext struct {
CutOverLockTimeoutSeconds int64
PanicFlagFile string
DropServeSocket bool
ServeSocketFile string
ServeTCPPort int64
@@ -439,8 +446,19 @@ func (this *MigrationContext) ReadConfigFile() error {
if this.ConfigFile == "" {
return nil
}
gcfg.RelaxedParserMode = true
if err := gcfg.ReadFileInto(&this.config, this.ConfigFile); err != nil {
return err
}
// We accept user & password in the form "${SOME_ENV_VARIABLE}" in which case we pull
// the given variable from os env
if submatch := envVariableRegexp.FindStringSubmatch(this.config.Client.User); len(submatch) > 1 {
this.config.Client.User = os.Getenv(submatch[1])
}
if submatch := envVariableRegexp.FindStringSubmatch(this.config.Client.Password); len(submatch) > 1 {
this.config.Client.Password = os.Getenv(submatch[1])
}
return nil
}
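For illustration, a hypothetical config file and invocation exercising this substitution (file path, variable names and remaining flags are examples; the quoted heredoc keeps `${...}` literal so that `gh-ost`, not the shell, resolves it):

```bash
cat > ./gh-ost.cnf <<'EOF'
[client]
user=${GHOST_MIGRATION_USER}
password=${GHOST_MIGRATION_PASSWORD}
EOF

export GHOST_MIGRATION_USER=gh-ost
export GHOST_MIGRATION_PASSWORD=s3cr3t

gh-ost --conf=./gh-ost.cnf ...   # other flags omitted
```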

View File

@@ -63,15 +63,6 @@ func (this *GoMySQLReader) ConnectBinlogStreamer(coordinates mysql.BinlogCoordin
return err
}
func (this *GoMySQLReader) Reconnect() error {
this.binlogSyncer.Close()
connectCoordinates := &mysql.BinlogCoordinates{LogFile: this.currentCoordinates.LogFile, LogPos: 4}
if err := this.ConnectBinlogStreamer(*connectCoordinates); err != nil {
return err
}
return nil
}
func (this *GoMySQLReader) GetCurrentBinlogCoordinates() *mysql.BinlogCoordinates {
this.currentCoordinatesMutex.Lock()
defer this.currentCoordinatesMutex.Unlock()

View File

@@ -83,6 +83,7 @@ func main() {
flag.StringVar(&migrationContext.PostponeCutOverFlagFile, "postpone-cut-over-flag-file", "", "while this file exists, migration will postpone the final stage of swapping tables, and will keep on syncing the ghost table. Cut-over/swapping would be ready to perform the moment the file is deleted.")
flag.StringVar(&migrationContext.PanicFlagFile, "panic-flag-file", "", "when this file is created, gh-ost will immediately terminate, without cleanup")
flag.BoolVar(&migrationContext.DropServeSocket, "initially-drop-socket-file", false, "Should gh-ost forcibly delete an existing socket file. Be careful: this might drop the socket file of a running migration!")
flag.StringVar(&migrationContext.ServeSocketFile, "serve-socket-file", "", "Unix socket file to serve on. Default: auto-determined and advertised upon startup")
flag.Int64Var(&migrationContext.ServeTCPPort, "serve-tcp-port", 0, "TCP port to serve on. Default: disabled")

View File

@@ -36,7 +36,7 @@ func (this *Server) BindSocketFile() (err error) {
if this.migrationContext.ServeSocketFile == "" {
return nil
}
if base.FileExists(this.migrationContext.ServeSocketFile) {
if this.migrationContext.DropServeSocket && base.FileExists(this.migrationContext.ServeSocketFile) {
os.Remove(this.migrationContext.ServeSocketFile)
}
this.unixListener, err = net.Listen("unix", this.migrationContext.ServeSocketFile)
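With this change, an existing socket file is removed only when `--initially-drop-socket-file` was given; otherwise the `net.Listen` call fails on the leftover file. A hedged invocation combining the two flags added in this commit (the path and remaining flags are illustrative):

```bash
gh-ost --serve-socket-file=/tmp/gh-ost.mydb.mytable.sock --initially-drop-socket-file ...
```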

View File

@@ -189,18 +189,27 @@ func (this *EventsStreamer) StreamEvents(canStopStreaming func() bool) error {
}
}()
// The next should block and execute forever, unless there's a serious error
var successiveFailures int64
var lastAppliedRowsEventHint mysql.BinlogCoordinates
for {
if err := this.binlogReader.StreamEvents(canStopStreaming, this.eventsChannel); err != nil {
log.Infof("StreamEvents encountered unexpected error: %+v", err)
this.migrationContext.MarkPointOfInterest()
time.Sleep(ReconnectStreamerSleepSeconds * time.Second)
// Reposition at same binlog file. Single attempt (TODO: make multiple attempts?)
lastAppliedRowsEventHint := this.binlogReader.LastAppliedRowsEventHint
// See if there's retry overflow
if this.binlogReader.LastAppliedRowsEventHint.Equals(&lastAppliedRowsEventHint) {
successiveFailures += 1
} else {
successiveFailures = 0
}
if successiveFailures > this.migrationContext.MaxRetries() {
return fmt.Errorf("%d successive failures in streamer reconnect at coordinates %+v", lastAppliedRowsEventHint)
}
// Reposition at same binlog file.
lastAppliedRowsEventHint = this.binlogReader.LastAppliedRowsEventHint
log.Infof("Reconnecting... Will resume at %+v", lastAppliedRowsEventHint)
// if err := this.binlogReader.Reconnect(); err != nil {
// return err
// }
if err := this.initBinlogReader(this.GetReconnectBinlogCoordinates()); err != nil {
return err
}

vendor/gopkg.in/gcfg.v1/set.go (generated, vendored)
View File

@@ -16,6 +2,8 @@ type tag struct {
intMode string
}
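// RelaxedParserMode, when true, makes the parser silently skip unknown sections and variables instead of returning an error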
var RelaxedParserMode = false
func newTag(ts string) tag {
t := tag{}
s := strings.Split(ts, ",")
@@ -197,6 +199,9 @@ func set(cfg interface{}, sect, sub, name string, blank bool, value string) erro
vCfg := vPCfg.Elem()
vSect, _ := fieldFold(vCfg, sect)
if !vSect.IsValid() {
if RelaxedParserMode {
return nil
}
return fmt.Errorf("invalid section: section %q", sect)
}
if vSect.Kind() == reflect.Map {
@@ -232,6 +237,9 @@ func set(cfg interface{}, sect, sub, name string, blank bool, value string) erro
}
vVar, t := fieldFold(vSect, name)
if !vVar.IsValid() {
if RelaxedParserMode {
return nil
}
return fmt.Errorf("invalid variable: "+
"section %q subsection %q variable %q", sect, sub, name)
}