Merge branch 'master' into eta-ewma
This commit is contained in:
commit 3f3a69c708

.gitignore (vendored, new file, 4 lines)
@@ -0,0 +1,4 @@
+/.gopath/
+/bin/
+/libexec/
+/.vendor/
@@ -58,8 +58,10 @@ More tips:

Also see:

- [requirements and limitations](doc/requirements-and-limitations.md)
- [common questions](doc/questions.md)
- [what if?](doc/what-if.md)
- [the fine print](doc/the-fine-print.md)
- [Community questions](https://github.com/github/gh-ost/issues?q=label%3Aquestion)

## What's in a name?
RELEASE_VERSION (new file, 1 line)
@@ -0,0 +1 @@
+1.0.35
build.sh (2 lines changed)
@@ -2,7 +2,7 @@
#
#

-RELEASE_VERSION="1.0.23"
+RELEASE_VERSION=$(cat RELEASE_VERSION)

function build {
	osname=$1
@@ -146,8 +146,10 @@ gh-ost --allow-master-master --assume-master-host=a.specific.master.com

Topologies using _tungsten replicator_ are peculiar in that the participating servers are not actually aware they are replicating. The _tungsten replicator_ looks just like another app issuing queries on those hosts. `gh-ost` is unable to identify that a server participates in a _tungsten_ topology.

-If you choose to migrate directly on master (see above), there's nothing special you need to do. If you choose to migrate via replica, then you must supply the identity of the master, and indicate this is a tungsten setup, as follows:
+If you choose to migrate directly on master (see above), there's nothing special you need to do.
+
+If you choose to migrate via replica, then you need to make sure Tungsten is configured with the log-slave-updates parameter (note this is different from MySQL's own log-slave-updates parameter), otherwise changes will not be in the replica's binlog, causing data to be corrupted after the table swap. You must also supply the identity of the master, and indicate this is a tungsten setup, as follows:

```
gh-ost --tungsten --assume-master-host=the.topology.master.com
```

Also note that `--switch-to-rbr` does not work for a Tungsten setup as the replication process is external, so you need to make sure `binlog_format` is set to ROW before Tungsten Replicator connects to the server and starts applying events from the master.
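As a quick sanity check for the ROW requirement above, you can query the server directly before Tungsten Replicator connects; a minimal sketch, assuming a standard `mysql` client and a hypothetical host name:

```shell
$ mysql -h replica.example.com -e "SELECT @@global.binlog_format"
ROW
```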
@@ -63,6 +63,19 @@ Optional. Default is `safe`. See more discussion in [cut-over](cut-over.md)

At this time (10-2016) `gh-ost` does not support foreign keys on migrated tables (it bails out when it notices a FK on the migrated table). However, it is able to support _dropping_ of foreign keys via this flag. If you're trying to get rid of foreign keys in your environment, this is a useful flag.

See also: [`skip-foreign-key-checks`](#skip-foreign-key-checks)
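For illustration, a migration that intentionally leaves foreign keys behind might be invoked as follows; a sketch only, with hypothetical connection and table names, and a no-op `engine=innodb` alter standing in for your real statement:

```shell
gh-ost \
  --host=replica.example.com \
  --database=my_schema \
  --table=child_table \
  --alter="engine=innodb" \
  --discard-foreign-keys \
  --execute
```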
### dml-batch-size

`gh-ost` reads events from the binary log and applies them onto the _ghost_ table. It does so in batched writes: grouping multiple events to apply in a single transaction. This gives better write throughput as we don't need to sync the transaction log to disk for each event.

The `--dml-batch-size` flag controls the size of the batched write. Allowed values are `1 - 100`, where `1` means no batching (every event from the binary log is applied onto the _ghost_ table in its own transaction). Default value is `10`.

Why is this behavior configurable? Different workloads have different characteristics. Some workloads have very large writes, such that aggregating even `50` writes into a transaction makes for a significant transaction size. On other workloads the write rate is so high that one just can't allow for a hundred more syncs to disk per second. The default value of `10` is a modest compromise that should work very well for most workloads. Your mileage may vary.

Noteworthy is that setting `--dml-batch-size` to a higher value _does not_ mean `gh-ost` blocks or waits on writes. The batch size is an upper limit on transaction size, not a minimal one. If `gh-ost` doesn't have "enough" events in the pipe, it does not wait on the binary log; it just writes what it already has. This conveniently suggests that if the write load is light enough for `gh-ost` to only see a few events in the binary log at a given time, then it is also light enough for `gh-ost` to apply a fraction of the batch size.
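For example, raising the batch size for a workload of many small writes; a sketch with hypothetical connection details:

```shell
gh-ost \
  --host=replica.example.com \
  --database=my_schema \
  --table=my_table \
  --alter="add column ts timestamp" \
  --dml-batch-size=50 \
  --execute
```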
### exact-rowcount

A `gh-ost` execution needs to copy whatever rows you have in your existing table onto the ghost table. This can be, and often is, a large number. Exactly what is that number?

@@ -71,7 +84,8 @@ A `gh-ost` execution needs to copy whatever rows you have in your existing table

`gh-ost` also supports the `--exact-rowcount` flag. When this flag is given, two things happen:
- An initial, authoritative `select count(*) from your_table`.
  This query may take a long time to complete, but is performed before we begin the massive operations.
-  When `--concurrent-rowcount` is also specified, this runs in paralell to row copy.
+  When `--concurrent-rowcount` is also specified, this runs in parallel to row copy.
+  Note: `--concurrent-rowcount` now defaults to `true`.
- A continuous update to the estimate as we make progress applying events.
  We heuristically update the number of rows based on the queries we process from the binlogs.
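A usage sketch combining the two flags (other required flags elided):

```shell
gh-ost --exact-rowcount --concurrent-rowcount ...
```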
@@ -97,10 +111,7 @@ On a replication topology, this is perhaps the most important migration throttling

When using [Connect to replica, migrate on master](cheatsheet.md), this lag is primarily tested on the very replica `gh-ost` operates on. Lag is measured by checking the heartbeat events injected by `gh-ost` itself on the utility changelog table. That is, to measure this replica's lag, `gh-ost` doesn't need to issue `show slave status` nor have any external heartbeat mechanism.

-When `--throttle-control-replicas` is provided, throttling also considers lag on specified hosts. Measuring lag on these hosts works as follows:
-
-- If `--replication-lag-query` is provided, use the query, trust its result to indicate lag seconds (fraction, i.e. float, allowed)
-- Otherwise, issue `show slave status` and read `Seconds_behind_master` (`1sec` granularity)
+When `--throttle-control-replicas` is provided, throttling also considers lag on the specified hosts. Lag measurement on the listed hosts is done by querying `gh-ost`'s _changelog_ table, where `gh-ost` injects a heartbeat.
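A sketch of a throttling setup along these lines, with hypothetical replica host names (other required flags elided):

```shell
gh-ost ... --max-lag-millis=1500 --throttle-control-replicas=replica1.example.com:3306,replica2.example.com
```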
See also: [Sub-second replication lag throttling](subsecond-lag.md)

@@ -108,6 +119,10 @@ See also: [Sub-second replication lag throttling](subsecond-lag.md)

Typically `gh-ost` is used to migrate tables on a master. If you wish to only perform the migration in full on a replica, connect `gh-ost` to said replica and pass `--migrate-on-replica`. `gh-ost` will briefly connect to the master but will otherwise issue no changes on the master. Migration will be fully executed on the replica, while making sure to maintain a small replication lag.
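A sketch, with a hypothetical replica host name:

```shell
gh-ost --host=replica.example.com --migrate-on-replica ...
```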
### skip-foreign-key-checks

By default `gh-ost` verifies no foreign keys exist on the migrated table. On servers with a large number of tables this check can take a long time. If you're absolutely certain no foreign keys exist (the table does not reference other tables nor is referenced by other tables) and wish to save the check time, provide `--skip-foreign-key-checks`.
### skip-renamed-columns

See `approve-renamed-columns`

@@ -115,3 +130,7 @@ See `approve-renamed-columns`

### test-on-replica

Issue the migration on a replica; do not modify data on the master. Useful for validating, testing and benchmarking. See [testing-on-replica](testing-on-replica.md)

### timestamp-old-table

Makes the _old_ table include a timestamp value. The _old_ table is what the original table is renamed to at the end of a successful migration. For example, if the table is `gh_ost_test`, then the _old_ table would normally be `_gh_ost_test_del`. With `--timestamp-old-table` it would be, for example, `_gh_ost_test_20170221103147_del`.
@@ -44,6 +44,7 @@ The full list of supported hooks is best found in code: [hooks.go](https://githu

- `gh-ost-on-interactive-command`
- `gh-ost-on-row-copy-complete`
- `gh-ost-on-stop-replication`
+- `gh-ost-on-start-replication`
- `gh-ost-on-begin-postponed`
- `gh-ost-on-before-cut-over`
- `gh-ost-on-success`
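Hooks are executables invoked by `gh-ost`, which passes context via environment variables; `GH_OST_HOOKS_HINT` is referenced elsewhere in this commit, and the file name follows the naming convention above. A minimal sketch of a success hook:

```shell
#!/bin/bash
# gh-ost-on-success: a hypothetical hook that records migration completion.
# GH_OST_HOOKS_HINT carries the value of --hooks-hint (per this commit).
echo "$(date) migration succeeded; hint: ${GH_OST_HOOKS_HINT}" >> /tmp/gh-ost-migrations.log
```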
@@ -26,9 +26,9 @@ Both interfaces may serve at the same time. Both respond to simple text command,

- The `critical-load` format must be: `some_status=<numeric-threshold>[,some_status=<numeric-threshold>...]`
  - For example: `Threads_running=1000,threads_connected=5000`, and you would then write/echo `critical-load=Threads_running=1000,threads_connected=5000` to the socket.
- `nice-ratio=<ratio>`: change _nice_ ratio: 0 for aggressive (not nice, not sleeping), positive number `n`:
  - For any `1ms` spent copying rows, spend `n*1ms` units of time sleeping.
  - Examples: assume a single rows chunk copy takes `100ms` to complete.
    - `nice-ratio=0.5` will cause `gh-ost` to sleep for `50ms` immediately following.
    - `nice-ratio=1` will cause `gh-ost` to sleep for `100ms`, effectively doubling runtime
    - a value of `2` will effectively triple the runtime; etc.
- `throttle-query`: change throttle query
@@ -38,6 +38,10 @@ Both interfaces may serve at the same time. Both respond to simple text command,

- `unpostpone`: at a time where `gh-ost` is postponing the [cut-over](cut-over.md) phase, instruct `gh-ost` to stop postponing and proceed immediately to cut-over.
- `panic`: immediately panic and abort operation

### Querying for data

For commands that accept an argument as value, pass `?` (question mark) to _get_ the current value rather than _set_ a new one.

### Examples

While migration is running:
@@ -63,6 +67,11 @@ $ echo "chunk-size=250" | nc -U /tmp/gh-ost.test.sample_data_0.sock

# Serving on TCP port: 10001
```

```shell
$ echo "chunk-size=?" | nc -U /tmp/gh-ost.test.sample_data_0.sock
250
```

```shell
$ echo throttle | nc -U /tmp/gh-ost.test.sample_data_0.sock
```
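In the same spirit, other settable values work over the socket too; for example, adjusting the nice ratio at runtime (a sketch, reusing the sample socket path above):

```shell
$ echo "nice-ratio=0.5" | nc -U /tmp/gh-ost.test.sample_data_0.sock
```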
doc/questions.md (new file, 26 lines)
@@ -0,0 +1,26 @@
# How?

### How does the cut-over work? Is it really atomic?

The cut-over phase, where the original table is swapped away and the _ghost_ table takes its place, is an atomic, blocking, controlled operation.

- Atomic: the tables are swapped together. There is no gap where your table does not exist.
- Blocking: app queries involving the migrated (original) table either operate on the original table, are blocked, or proceed to operate on the _new_ table (formerly the _ghost_ table, now swapped in).
- Controlled: the cut-over times out at a pre-defined threshold, and is atomically aborted, then re-attempted. Cut-over only takes place when no lag is present and no other throttling reason is found. The cut-over step itself gets high priority and is never throttled.

Read more on [cut-over](cut-over.md) and on the [cut-over design Issue](https://github.com/github/gh-ost/issues/82)

# Is it possible to?
### Is it possible to add a UNIQUE KEY?

Adding a `UNIQUE KEY` is possible, on the condition that no violations occur. That is, you must make sure there aren't any violating rows on your table before, and during, the migration.

At this time there is no equivalent to `ALTER IGNORE`, where duplicates are implicitly and silently thrown away. The MySQL `5.7` docs say:

> As of MySQL 5.7.4, the IGNORE clause for ALTER TABLE is removed and its use produces an error.

It is therefore unlikely that `gh-ost` will support this behavior.
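Before (and while) migrating, you can look for violating rows yourself; a minimal sketch, with hypothetical schema, table and column names:

```shell
$ mysql my_schema -e "SELECT email, COUNT(*) AS cnt FROM my_table GROUP BY email HAVING cnt > 1"
```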

# Why
@@ -4,7 +4,7 @@

- You will need to have one server serving Row Based Replication (RBR) format binary logs. Right now `FULL` row image is supported. `MINIMAL` to be supported in the near future. `gh-ost` prefers to work with replicas. You may [still have your master configured with Statement Based Replication](migrating-with-sbr.md) (SBR).

- If you are using a replica, the table must have an identical schema between the master and replica.

- `gh-ost` requires an account with these privileges:
@@ -28,11 +28,14 @@ The `SUPER` privilege is required for `STOP SLAVE`, `START SLAVE` operations. Th

- MySQL 5.7 generated columns are not supported. They may be supported in the future.

-- The two _before_ & _after_ tables must share some `UNIQUE KEY`. Such key would be used by `gh-ost` to iterate the table.
  - As an example, if your table has a single `UNIQUE KEY` and no `PRIMARY KEY`, and you wish to replace it with a `PRIMARY KEY`, you will need two migrations: one to add the `PRIMARY KEY` (this migration will use the existing `UNIQUE KEY`), another to drop the now redundant `UNIQUE KEY` (this migration will use the `PRIMARY KEY`).
- MySQL 5.7 `JSON` columns are not supported. They are likely to be supported shortly.

-- The chosen migration key must not include columns with `NULL` values.
-  - `gh-ost` will do its best to pick a migration key with non-nullable columns. It will by default refuse a migration where the only possible `UNIQUE KEY` includes nullable columns. You may override this refusal via `--allow-nullable-unique-key` but **you must** be sure there are no actual `NULL` values in those columns. Such `NULL` values would cause a data integrity problem and potentially a corrupted migration.
+- The two _before_ & _after_ tables must share a `PRIMARY KEY` or other `UNIQUE KEY`. This key will be used by `gh-ost` to iterate through the table rows when copying. [Read more](shared-key.md)
+- The migration key must not include columns with NULL values. This means either:
+  1. The columns are `NOT NULL`, or
+  2. The columns are nullable but don't contain any NULL values.
+  - By default, `gh-ost` will not run if the only `UNIQUE KEY` includes nullable columns.
+  - You may override this via `--allow-nullable-unique-key` but make sure there are no actual `NULL` values in those columns. Existing NULL values can't guarantee data integrity on the migrated table.

- It is not allowed to migrate a table where another table exists with the same name and different upper/lower case.
  - For example, you may not migrate `MyTable` if another table called `MYtable` exists in the same schema.
@@ -43,3 +46,7 @@ The `SUPER` privilege is required for `STOP SLAVE`, `START SLAVE` operations. Th

- Multisource is not supported when migrating via replica. It _should_ work (but was never tested) when connecting directly to master (`--allow-on-master`)

- Master-master setup is only supported in active-passive setup. Active-active (where the table is being written to on both masters concurrently) is unsupported. It may be supported in the future.

- If you have an `enum` field as part of your migration key (typically the `PRIMARY KEY`), migration performance will be degraded and potentially bad. [Read more](https://github.com/github/gh-ost/pull/277#issuecomment-254811520)

- Migrating a `FEDERATED` table is unsupported and is irrelevant to the problem `gh-ost` tackles.
doc/shared-key.md (new file, 68 lines)
@@ -0,0 +1,68 @@
# Shared key

A requirement for a migration to run is that the two _before_ and _after_ tables have a shared unique key. This page elaborates on and illustrates the matter.

### Introduction

Consider a classic, simple migration. The table is any normal:

```
CREATE TABLE tbl (
  id bigint unsigned not null auto_increment,
  data varchar(255),
  more_data int,
  PRIMARY KEY(id)
)
```

And the migration is a simple `add column ts timestamp`.

In such a migration there is no change in indexes, and in particular no change to any unique key, and specifically no change to the `PRIMARY KEY`. To run this migration, `gh-ost` would iterate the `tbl` table using the primary key, copy rows from `tbl` to the _ghost_ table `_tbl_gho` in order of `id`, and then apply binlog events onto `_tbl_gho`.

Applying the binlog events assumes the existence of a shared unique key. For example, an `UPDATE` statement in the binary log translates to a `REPLACE` statement which `gh-ost` applies to the _ghost_ table. Such a statement expects to add or replace an existing row based on the given row data. In particular, it would _replace_ an existing row if a unique key violation is met.

So `gh-ost` correlates `tbl` and `_tbl_gho` rows using a unique key. In the above example that would be the `PRIMARY KEY`.
### Rules

There must be a shared set of not-null columns for which there is a unique constraint in both the original table and the migration (_ghost_) table.

### Interpreting the rules

The same columns must be covered by a unique key in both tables. This doesn't have to be the `PRIMARY KEY`. This doesn't have to be a key of the same name.

Upon migration, `gh-ost` inspects both the original and _ghost_ table and attempts to find at least one such unique key (or rather, a set of columns) that is shared between the two. Typically this would just be the `PRIMARY KEY`, but sometimes you may change the `PRIMARY KEY` itself, in which case `gh-ost` will look for other options.

`gh-ost` expects unique keys where no `NULL` values are found, i.e. all columns covered by the unique key are defined as `NOT NULL`. This is implicitly true for `PRIMARY KEY`s. If no such key can be found, `gh-ost` bails out. In the event there is no such key, but you happen to _know_ your columns have no `NULL` values even though they're `NULL`-able, you may take responsibility and pass `--allow-nullable-unique-key`. The migration will run well as long as no `NULL` values are found in the unique key's columns. Any actual `NULL`s may corrupt the migration.
### Examples: allowed and not allowed

```
create table some_table (
  id int auto_increment,
  ts timestamp,
  name varchar(128) not null,
  owner_id int not null,
  loc_id int,
  primary key(id),
  unique key name_uidx(name)
)
```

Following are examples of migrations that are _good to run_:

- `add column i int`
- `add key owner_idx(owner_id)`
- `add unique key owner_name_idx(owner_id, name)` - though you need to make sure not to write conflicting rows while this migration runs
- `drop key name_uidx` - the `primary key` is shared between the tables
- `drop primary key, add primary key(owner_id, loc_id)` - `name_uidx` is shared between the tables and is used for migration
- `change id bigint unsigned` - the `primary key` is used. The change of type still makes the `primary key` workable.
- `drop primary key, drop key name_uidx, create primary key(name), create unique key id_uidx(id)` - swapping the two keys. `gh-ost` is still happy because `id` is still unique in both tables. So is `name`.

Following are examples of migrations that _cannot run_:

- `drop primary key, drop key name_uidx` - no unique key on the _ghost_ table, so clearly cannot run
- `drop primary key, drop key name_uidx, create primary key(name, owner_id)` - no shared columns to both tables. Even though `name` exists in the _ghost_ table's `primary key`, it is only part of the key and in itself does not guarantee uniqueness in the _ghost_ table.

Also, you cannot run a migration on a table that doesn't have some form of `unique key` in the first place, such as `some_table (id int, ts timestamp)`
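To check ahead of time which unique keys on a given table cover only `NOT NULL` columns (and are therefore candidates), you can inspect `information_schema`; a sketch, with hypothetical schema and table names:

```shell
$ mysql -e "SELECT INDEX_NAME,
    GROUP_CONCAT(COLUMN_NAME ORDER BY SEQ_IN_INDEX) AS columns,
    SUM(NULLABLE = 'YES') AS nullable_columns
  FROM information_schema.STATISTICS
  WHERE NON_UNIQUE = 0 AND TABLE_SCHEMA = 'my_schema' AND TABLE_NAME = 'some_table'
  GROUP BY INDEX_NAME"
```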
@@ -2,7 +2,7 @@

`gh-ost` is able to utilize sub-second replication lag measurements.

-At GitHub, small replication lag is crucial, and we like to keep it below `1s` at all times. If you have similar concern, we strongly urge you to proceed to implement sub-second lag throttling.
+At GitHub, small replication lag is crucial, and we like to keep it below `1s` at all times.

`gh-ost` will do sub-second throttling when `--max-lag-millis` is smaller than `1000`, i.e. smaller than `1sec`.
Replication lag is measured on:
@@ -10,24 +10,10 @@ Replication lag is measured on:

- The "inspected" server (the server `gh-ost` connects to; a replica is desired but not mandatory)
- The `throttle-control-replicas` list

-For the inspected server, `gh-ost` uses an internal heartbeat mechanism. It injects heartbeat events onto the utility changelog table, then reads those events in the binary log, and compares times. This measurement is by default and by definition sub-second enabled.
+In both cases, `gh-ost` uses an internal heartbeat mechanism. It injects heartbeat events onto the utility changelog table, then reads those entries on replicas, and compares times. This measurement is on by default and by definition supports sub-second resolution.

You can explicitly define how frequently `gh-ost` will inject heartbeat events, via `heartbeat-interval-millis`. You should set `heartbeat-interval-millis <= max-lag-millis`. It still works if not, but loses granularity and effect.

-On the `throttle-control-replicas`, `gh-ost` only issues SQL queries, and does not attempt to read the binary log stream. Perhaps those other replicas don't have binary logs in the first place.
+In earlier versions, the `--throttle-control-replicas` list was subjected to `1` second resolution or to 3rd party heartbeat injections such as `pt-heartbeat`. This is no longer the case. The argument `--replication-lag-query` has been deprecated and is no longer needed.

-The standard way of getting replication lag on a replica is to issue `SHOW SLAVE STATUS`, then read the `Seconds_behind_master` value. But that value has a `1sec` granularity.
-
-To be able to throttle on your production replicas fleet when replication lag exceeds a sub-second threshold, you must provide a `replication-lag-query` that returns a sub-second resolution lag.
-
-As a common example, many use [pt-heartbeat](https://www.percona.com/doc/percona-toolkit/2.2/pt-heartbeat.html) to inject heartbeat events on the master. You would issue something like:
-
-    /usr/bin/pt-heartbeat -- -D your_schema --create-table --update --replace --interval=0.1 --daemonize --pid ...
-
-Note `--interval=0.1` to indicate `10` heartbeats per second.
-
-You would then provide
-
-    gh-ost ... --replication-lag-query="select unix_timestamp(now(6)) - unix_timestamp(ts) as ghost_lag_check from your_schema.heartbeat order by ts desc limit 1"

-Our production migrations use sub-second lag throttling and are able to keep our entire fleet of replicas well below `1sec` lag.
+Our production migrations use sub-second lag throttling and are able to keep our entire fleet of replicas well below `1sec` lag. We use `--heartbeat-interval-millis=100` on our production migrations with a `--max-lag-millis` value of between `300` and `500`.
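Putting the above together, a sub-second throttling invocation might look like this (other required flags elided):

```shell
gh-ost ... --heartbeat-interval-millis=100 --max-lag-millis=500
```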
@@ -28,15 +28,7 @@ Otherwise you may specify your own list of replica servers you wish it to observe

- `--max-lag-millis`: maximum allowed lag; any controlled replica lagging more than this value will cause throttling to kick in. When all control replicas have smaller lag than indicated, operation resumes.

-- `--replication-lag-query`: `gh-ost` will, by default, issue a `show slave status` query to find replication lag. However, this is a notoriously flaky value. If you're using your own `heartbeat` mechanism, e.g. via [`pt-heartbeat`](https://www.percona.com/doc/percona-toolkit/2.2/pt-heartbeat.html), you may provide your own custom query to return a single decimal (floating point) value indicating replication lag.
-
-  Example: `--replication-lag-query="SELECT UNIX_TIMESTAMP() - MAX(UNIX_TIMESTAMP(ts)) AS lag FROM mydb.heartbeat"`
-
-  We encourage you to use [sub-second replication lag throttling](subsecond-lag.md). Your query may then look like:
-
-  `--replication-lag-query="SELECT UNIX_TIMESTAMP(6) - MAX(UNIX_TIMESTAMP(ts)) AS lag FROM mydb.heartbeat"`
-
-Note that you may dynamically change both `replication-lag-query` and the `throttle-control-replicas` list via [interactive commands](interactive-commands.md)
+Note that you may dynamically change both `--max-lag-millis` and the `throttle-control-replicas` list via [interactive commands](interactive-commands.md)
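Since `--max-lag-millis` is dynamically changeable, you can relax or tighten the threshold mid-migration over the socket; a sketch with a hypothetical socket path:

```shell
$ echo "max-lag-millis=500" | nc -U /tmp/gh-ost.my_schema.my_table.sock
```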
#### Status thresholds

@@ -47,7 +47,7 @@ Initial setup is a no-concurrency operation

- Applying `alter` on ghost table
- Comparing structure of original & ghost table. Looking for shared columns, shared unique keys, validating foreign keys. Choosing shared unique key, the key by which we chunk the table and process it.
- Setting up the binlog listener; begin listening on changelog events
-- Injecting a "good to go" ebtry onto the changelog table (to be intercepted via binary logs)
+- Injecting a "good to go" entry onto the changelog table (to be intercepted via binary logs)
- Begin listening on binlog events for original table DMLs
- Reading original table's chosen key min/max values
@@ -37,6 +37,13 @@ const (

	CutOverTwoStep = iota
)

+type ThrottleReasonHint string
+
+const (
+	NoThrottleReasonHint ThrottleReasonHint = "NoThrottleReasonHint"
+	UserCommandThrottleReasonHint           = "UserCommandThrottleReasonHint"
+)

var (
	envVariableRegexp = regexp.MustCompile("[$][{](.*)[}]")
)
@@ -44,12 +51,14 @@ var (

type ThrottleCheckResult struct {
	ShouldThrottle bool
	Reason         string
+	ReasonHint     ThrottleReasonHint
}

-func NewThrottleCheckResult(throttle bool, reason string) *ThrottleCheckResult {
+func NewThrottleCheckResult(throttle bool, reason string, reasonHint ThrottleReasonHint) *ThrottleCheckResult {
	return &ThrottleCheckResult{
		ShouldThrottle: throttle,
		Reason:         reason,
+		ReasonHint:     reasonHint,
	}
}
@@ -66,24 +75,26 @@ type MigrationContext struct {

	AllowedMasterMaster      bool
	SwitchToRowBinlogFormat  bool
	AssumeRBR                bool
	SkipForeignKeyChecks     bool
	NullableUniqueKeyAllowed bool
	ApproveRenamedColumns    bool
	SkipRenamedColumns       bool
	IsTungsten               bool
	DiscardForeignKeys       bool

-	config      ContextConfig
-	configMutex *sync.Mutex
-	ConfigFile  string
-	CliUser     string
-	CliPassword string
+	config            ContextConfig
+	configMutex       *sync.Mutex
+	ConfigFile        string
+	CliUser           string
+	CliPassword       string
+	CliMasterUser     string
+	CliMasterPassword string

	HeartbeatIntervalMilliseconds       int64
	defaultNumRetries                   int64
	ChunkSize                           int64
	niceRatio                           float64
	MaxLagMillisecondsThrottleThreshold int64
	replicationLagQuery                 string
	throttleControlReplicaKeys          *mysql.InstanceKeyMap
	ThrottleFlagFile                    string
	ThrottleAdditionalFlagFile          string
@@ -110,7 +121,9 @@ type MigrationContext struct {

	OkToDropTable           bool
	InitiallyDropOldTable   bool
	InitiallyDropGhostTable bool
+	TimestampOldTable       bool // Should old table name include a timestamp
	CutOverType             CutOver
+	ReplicaServerId         uint

	Hostname             string
	AssumeMasterHostname string
@@ -123,7 +136,9 @@ type MigrationContext struct {

	OriginalBinlogFormat      string
	OriginalBinlogRowImage    string
	InspectorConnectionConfig *mysql.ConnectionConfig
+	InspectorMySQLVersion     string
	ApplierConnectionConfig   *mysql.ConnectionConfig
+	ApplierMySQLVersion       string
	StartTime                 time.Time
	RowCopyStartTime          time.Time
	RowCopyEndTime            time.Time

@@ -136,8 +151,10 @@ type MigrationContext struct {

	controlReplicasLagResult   mysql.ReplicationLagResult
	TotalRowsCopied            int64
	TotalDMLEventsApplied      int64
+	DMLBatchSize               int64
	isThrottled                bool
	throttleReason             string
+	throttleReasonHint         ThrottleReasonHint
	throttleGeneralCheckResult ThrottleCheckResult
	throttleMutex              *sync.Mutex
	IsPostponingCutOver        int64
@@ -145,8 +162,11 @@ type MigrationContext struct {

	AllEventsUpToLockProcessedInjectedFlag int64
	CleanupImminentFlag                    int64
	UserCommandedUnpostponeFlag            int64
+	CutOverCompleteFlag                    int64
+	InCutOverCriticalSectionFlag           int64
	PanicAbort                             chan error

+	OriginalTableColumnsOnApplier *sql.ColumnList
	OriginalTableColumns          *sql.ColumnList
	OriginalTableUniqueKeys       [](*sql.UniqueKey)
	GhostTableColumns             *sql.ColumnList
@@ -191,6 +211,7 @@ func newMigrationContext() *MigrationContext {

		ApplierConnectionConfig:             mysql.NewConnectionConfig(),
		MaxLagMillisecondsThrottleThreshold: 1500,
		CutOverLockTimeoutSeconds:           3,
+		DMLBatchSize:                        10,
		maxLoad:                             NewLoadMap(),
		criticalLoad:                        NewLoadMap(),
		throttleMutex:                       &sync.Mutex{},
@@ -214,11 +235,12 @@ func (this *MigrationContext) GetGhostTableName() string {

// GetOldTableName generates the name of the "old" table, into which the original table is renamed.
func (this *MigrationContext) GetOldTableName() string {
-	if this.TestOnReplica {
-		return fmt.Sprintf("_%s_ght", this.OriginalTableName)
-	}
-	if this.MigrateOnReplica {
-		return fmt.Sprintf("_%s_ghr", this.OriginalTableName)
-	}
+	if this.TimestampOldTable {
+		t := this.StartTime
+		timestamp := fmt.Sprintf("%d%02d%02d%02d%02d%02d",
+			t.Year(), t.Month(), t.Day(),
+			t.Hour(), t.Minute(), t.Second())
+		return fmt.Sprintf("_%s_%s_del", this.OriginalTableName, timestamp)
+	}
	return fmt.Sprintf("_%s_del", this.OriginalTableName)
}
@@ -401,6 +423,16 @@ func (this *MigrationContext) SetChunkSize(chunkSize int64) {

	atomic.StoreInt64(&this.ChunkSize, chunkSize)
}

+func (this *MigrationContext) SetDMLBatchSize(batchSize int64) {
+	if batchSize < 1 {
+		batchSize = 1
+	}
+	if batchSize > 100 {
+		batchSize = 100
+	}
+	atomic.StoreInt64(&this.DMLBatchSize, batchSize)
+}

func (this *MigrationContext) SetThrottleGeneralCheckResult(checkResult *ThrottleCheckResult) *ThrottleCheckResult {
	this.throttleMutex.Lock()
	defer this.throttleMutex.Unlock()
@@ -415,34 +447,28 @@ func (this *MigrationContext) GetThrottleGeneralCheckResult() *ThrottleCheckResult {

	return &result
}

-func (this *MigrationContext) SetThrottled(throttle bool, reason string) {
+func (this *MigrationContext) SetThrottled(throttle bool, reason string, reasonHint ThrottleReasonHint) {
	this.throttleMutex.Lock()
	defer this.throttleMutex.Unlock()
	this.isThrottled = throttle
	this.throttleReason = reason
+	this.throttleReasonHint = reasonHint
}

-func (this *MigrationContext) IsThrottled() (bool, string) {
-	this.throttleMutex.Lock()
-	defer this.throttleMutex.Unlock()
-	return this.isThrottled, this.throttleReason
-}
-
-func (this *MigrationContext) GetReplicationLagQuery() string {
-	var query string
-
-	this.throttleMutex.Lock()
-	defer this.throttleMutex.Unlock()
-
-	query = this.replicationLagQuery
-	return query
-}
+func (this *MigrationContext) IsThrottled() (bool, string, ThrottleReasonHint) {
+	this.throttleMutex.Lock()
+	defer this.throttleMutex.Unlock()
+	// we don't throttle when cutting over. We _do_ throttle:
+	// - during copy phase
+	// - just before cut-over
+	// - in between cut-over retries
+	// When cutting over, we need to be aggressive. Cut-over holds table locks.
+	// We need to release those asap.
+	if atomic.LoadInt64(&this.InCutOverCriticalSectionFlag) > 0 {
+		return false, "critical section", NoThrottleReasonHint
+	}
+	return this.isThrottled, this.throttleReason, this.throttleReasonHint
+}

func (this *MigrationContext) SetReplicationLagQuery(newQuery string) {
	this.throttleMutex.Lock()
	defer this.throttleMutex.Unlock()

	this.replicationLagQuery = newQuery
}

func (this *MigrationContext) GetThrottleQuery() string {

@@ -537,7 +563,11 @@ func (this *MigrationContext) GetControlReplicasLagResult() mysql.ReplicationLagResult {

func (this *MigrationContext) SetControlReplicasLagResult(lagResult *mysql.ReplicationLagResult) {
	this.throttleMutex.Lock()
	defer this.throttleMutex.Unlock()
-	this.controlReplicasLagResult = *lagResult
+	if lagResult == nil {
+		this.controlReplicasLagResult = *mysql.NewNoReplicationLagResult()
+	} else {
+		this.controlReplicasLagResult = *lagResult
+	}
}

func (this *MigrationContext) GetThrottleControlReplicaKeys() *mysql.InstanceKeyMap {
@@ -5,8 +5,6 @@

package binlog

-import ()
-
// BinlogReader is a general interface whose implementations can choose their methods of reading
// a binary log file and parsing it into binlog entries
type BinlogReader interface {
@@ -9,18 +9,14 @@ import (

	"fmt"
	"sync"

	"github.com/github/gh-ost/go/base"
	"github.com/github/gh-ost/go/mysql"
	"github.com/github/gh-ost/go/sql"

	"github.com/outbrain/golib/log"
	gomysql "github.com/siddontang/go-mysql/mysql"
	"github.com/siddontang/go-mysql/replication"
+	"golang.org/x/net/context"
)

-var ()
-
-const (
-	serverId = 99999
-)

type GoMySQLReader struct {
@@ -30,6 +26,7 @@ type GoMySQLReader struct {

	currentCoordinates       mysql.BinlogCoordinates
	currentCoordinatesMutex  *sync.Mutex
	LastAppliedRowsEventHint mysql.BinlogCoordinates
+	MigrationContext         *base.MigrationContext
}

func NewGoMySQLReader(connectionConfig *mysql.ConnectionConfig) (binlogReader *GoMySQLReader, err error) {

@@ -39,8 +36,20 @@ func NewGoMySQLReader(connectionConfig *mysql.ConnectionConfig) (binlogReader *GoMySQLReader, err error) {

		currentCoordinatesMutex: &sync.Mutex{},
		binlogSyncer:            nil,
		binlogStreamer:          nil,
+		MigrationContext:        base.GetMigrationContext(),
	}
-	binlogReader.binlogSyncer = replication.NewBinlogSyncer(serverId, "mysql")
+
+	serverId := uint32(binlogReader.MigrationContext.ReplicaServerId)
+
+	binlogSyncerConfig := &replication.BinlogSyncerConfig{
+		ServerID: serverId,
+		Flavor:   "mysql",
+		Host:     connectionConfig.Key.Hostname,
+		Port:     uint16(connectionConfig.Key.Port),
+		User:     connectionConfig.User,
+		Password: connectionConfig.Password,
+	}
+	binlogReader.binlogSyncer = replication.NewBinlogSyncer(binlogSyncerConfig)

	return binlogReader, err
}
@@ -50,10 +59,6 @@ func (this *GoMySQLReader) ConnectBinlogStreamer(coordinates mysql.BinlogCoordinates) (err error) {

	if coordinates.IsEmpty() {
		return log.Errorf("Empty coordinates at ConnectBinlogStreamer()")
	}
-	log.Infof("Registering replica at %+v:%+v", this.connectionConfig.Key.Hostname, uint16(this.connectionConfig.Key.Port))
-	if err := this.binlogSyncer.RegisterSlave(this.connectionConfig.Key.Hostname, uint16(this.connectionConfig.Key.Port), this.connectionConfig.User, this.connectionConfig.Password); err != nil {
-		return err
-	}

	this.currentCoordinates = coordinates
	log.Infof("Connecting binlog streamer at %+v", this.currentCoordinates)
@@ -120,11 +125,14 @@ func (this *GoMySQLReader) handleRowsEvent(ev *replication.BinlogEvent, rowsEvent

// StreamEvents
func (this *GoMySQLReader) StreamEvents(canStopStreaming func() bool, entriesChannel chan<- *BinlogEntry) error {
+	if canStopStreaming() {
+		return nil
+	}
	for {
		if canStopStreaming() {
			break
		}
-		ev, err := this.binlogStreamer.GetEvent()
+		ev, err := this.binlogStreamer.GetEvent(context.Background())
		if err != nil {
			return err
		}

@@ -150,3 +158,8 @@ func (this *GoMySQLReader) StreamEvents(canStopStreaming func() bool, entriesChannel chan<- *BinlogEntry) error {

	return nil
}

+func (this *GoMySQLReader) Close() error {
+	this.binlogSyncer.Close()
+	return nil
+}
@@ -15,6 +15,8 @@ import (

	"github.com/github/gh-ost/go/base"
	"github.com/github/gh-ost/go/logic"
	"github.com/outbrain/golib/log"
+
+	"golang.org/x/crypto/ssh/terminal"
)

var AppVersion string
@@ -48,13 +50,16 @@ func main() {

	flag.IntVar(&migrationContext.InspectorConnectionConfig.Key.Port, "port", 3306, "MySQL port (preferably a replica, not the master)")
	flag.StringVar(&migrationContext.CliUser, "user", "", "MySQL user")
	flag.StringVar(&migrationContext.CliPassword, "password", "", "MySQL password")
+	flag.StringVar(&migrationContext.CliMasterUser, "master-user", "", "MySQL user on master, if different from that on replica. Requires --assume-master-host")
+	flag.StringVar(&migrationContext.CliMasterPassword, "master-password", "", "MySQL password on master, if different from that on replica. Requires --assume-master-host")
	flag.StringVar(&migrationContext.ConfigFile, "conf", "", "Config file")
+	askPass := flag.Bool("ask-pass", false, "prompt for MySQL password")

	flag.StringVar(&migrationContext.DatabaseName, "database", "", "database name (mandatory)")
	flag.StringVar(&migrationContext.OriginalTableName, "table", "", "table name (mandatory)")
	flag.StringVar(&migrationContext.AlterStatement, "alter", "", "alter statement (mandatory)")
	flag.BoolVar(&migrationContext.CountTableRows, "exact-rowcount", false, "actually count table rows as opposed to estimate them (results in more accurate progress estimation)")
-	flag.BoolVar(&migrationContext.ConcurrentCountTableRows, "concurrent-rowcount", false, "(with --exact-rowcount), when true: count rows after row-copy begins, concurrently, and adjust row estimate later on; defaults false: first count rows, then start row copy")
+	flag.BoolVar(&migrationContext.ConcurrentCountTableRows, "concurrent-rowcount", true, "(with --exact-rowcount), when true (default): count rows after row-copy begins, concurrently, and adjust row estimate later on; when false: first count rows, then start row copy")
	flag.BoolVar(&migrationContext.AllowedRunningOnMaster, "allow-on-master", false, "allow this migration to run directly on master. Preferably it would run on a replica")
	flag.BoolVar(&migrationContext.AllowedMasterMaster, "allow-master-master", false, "explicitly allow running in a master-master setup")
	flag.BoolVar(&migrationContext.NullableUniqueKeyAllowed, "allow-nullable-unique-key", false, "allow gh-ost to migrate based on a unique key with nullable columns. As long as no NULL values exist, this should be OK. If NULL values exist in chosen key, data may be corrupted. Use at your own risk!")
@@ -62,6 +67,7 @@ func main() {

	flag.BoolVar(&migrationContext.SkipRenamedColumns, "skip-renamed-columns", false, "in case your `ALTER` statement renames columns, gh-ost will note that and offer its interpretation of the rename. By default gh-ost does not proceed to execute. This flag tells gh-ost to skip the renamed columns, i.e. to treat what gh-ost thinks are renamed columns as unrelated columns. NOTE: you may lose column data")
	flag.BoolVar(&migrationContext.IsTungsten, "tungsten", false, "explicitly let gh-ost know that you are running on a tungsten-replication based topology (you are likely to also provide --assume-master-host)")
	flag.BoolVar(&migrationContext.DiscardForeignKeys, "discard-foreign-keys", false, "DANGER! This flag will migrate a table that has foreign keys and will NOT create foreign keys on the ghost table, thus your altered table will have NO foreign keys. This is useful for intentional dropping of foreign keys")
+	flag.BoolVar(&migrationContext.SkipForeignKeyChecks, "skip-foreign-key-checks", false, "set to 'true' when you know for certain there are no foreign keys on your table, and wish to skip the time it takes for gh-ost to verify that")

	executeFlag := flag.Bool("execute", false, "actually execute the alter & migrate the table. Default is noop: do some tests and exit")
	flag.BoolVar(&migrationContext.TestOnReplica, "test-on-replica", false, "Have the migration run on a replica, not on the master. At the end of migration replication is stopped, and tables are swapped and immediately swap-revert. Replication remains stopped and you can compare the two tables for building trust")
@@ -71,21 +77,23 @@ func main() {

	flag.BoolVar(&migrationContext.OkToDropTable, "ok-to-drop-table", false, "Shall the tool drop the old table at end of operation. DROPping tables can be a long locking operation, which is why I'm not doing it by default. I'm an online tool, yes?")
	flag.BoolVar(&migrationContext.InitiallyDropOldTable, "initially-drop-old-table", false, "Drop a possibly existing OLD table (remains from a previous run?) before beginning operation. Default is to panic and abort if such table exists")
	flag.BoolVar(&migrationContext.InitiallyDropGhostTable, "initially-drop-ghost-table", false, "Drop a possibly existing Ghost table (remains from a previous run?) before beginning operation. Default is to panic and abort if such table exists")
+	flag.BoolVar(&migrationContext.TimestampOldTable, "timestamp-old-table", false, "Use a timestamp in old table name. This makes old table names unique and non conflicting cross migrations")
	cutOver := flag.String("cut-over", "atomic", "choose cut-over type (default|atomic, two-step)")
	flag.BoolVar(&migrationContext.ForceNamedCutOverCommand, "force-named-cut-over", false, "When true, the 'unpostpone|cut-over' interactive command must name the migrated table")

	flag.BoolVar(&migrationContext.SwitchToRowBinlogFormat, "switch-to-rbr", false, "let this tool automatically switch binary log format to 'ROW' on the replica, if needed. The format will NOT be switched back. I'm too scared to do that, and wish to protect you if you happen to execute another migration while this one is running")
	flag.BoolVar(&migrationContext.AssumeRBR, "assume-rbr", false, "set to 'true' when you know for certain your server uses 'ROW' binlog_format. gh-ost is unable to tell, even after reading binlog_format, whether the replication process does indeed use 'ROW', and restarts replication to be certain RBR setting is applied. Such operation requires SUPER privileges which you might not have. Setting this flag avoids restarting replication and you can proceed to use gh-ost without SUPER privileges")
	chunkSize := flag.Int64("chunk-size", 1000, "amount of rows to handle in each iteration (allowed range: 100-100,000)")
+	dmlBatchSize := flag.Int64("dml-batch-size", 10, "batch size for DML events to apply in a single transaction (range 1-100)")
	defaultRetries := flag.Int64("default-retries", 60, "Default number of retries for various operations before panicking")
	cutOverLockTimeoutSeconds := flag.Int64("cut-over-lock-timeout-seconds", 3, "Max number of seconds to hold locks on tables while attempting to cut-over (retry attempted when lock exceeds timeout)")
	niceRatio := flag.Float64("nice-ratio", 0, "force being 'nice', imply sleep time per chunk time; range: [0.0..100.0]. Example values: 0 is aggressive. 1: for every 1ms spent copying rows, sleep additional 1ms (effectively doubling runtime); 0.7: for every 10ms spend in a rowcopy chunk, spend 7ms sleeping immediately after")

	maxLagMillis := flag.Int64("max-lag-millis", 1500, "replication lag at which to throttle operation")
-	replicationLagQuery := flag.String("replication-lag-query", "", "Query that detects replication lag in seconds. Result can be a floating point (by default gh-ost issues SHOW SLAVE STATUS and reads Seconds_behind_master). If you're using pt-heartbeat, query would be something like: SELECT ROUND(UNIX_TIMESTAMP() - MAX(UNIX_TIMESTAMP(ts))) AS delay FROM my_schema.heartbeat")
+	replicationLagQuery := flag.String("replication-lag-query", "", "Deprecated. gh-ost uses an internal, subsecond resolution query")
	throttleControlReplicas := flag.String("throttle-control-replicas", "", "List of replicas on which to check for lag; comma delimited. Example: myhost1.com:3306,myhost2.com,myhost3.com:3307")
	throttleQuery := flag.String("throttle-query", "", "when given, issued (every second) to check if operation should throttle. Expecting to return zero for no-throttle, >0 for throttle. Query is issued on the migrated server. Make sure this query is lightweight")
-	heartbeatIntervalMillis := flag.Int64("heartbeat-interval-millis", 500, "how frequently would gh-ost inject a heartbeat value")
+	heartbeatIntervalMillis := flag.Int64("heartbeat-interval-millis", 100, "how frequently would gh-ost inject a heartbeat value")
	flag.StringVar(&migrationContext.ThrottleFlagFile, "throttle-flag-file", "", "operation pauses when this file exists; hint: use a file that is specific to the table being altered")
	flag.StringVar(&migrationContext.ThrottleAdditionalFlagFile, "throttle-additional-flag-file", "/tmp/gh-ost.throttle", "operation pauses when this file exists; hint: keep default, use for throttling multiple gh-ost operations")
	flag.StringVar(&migrationContext.PostponeCutOverFlagFile, "postpone-cut-over-flag-file", "", "while this file exists, migration will postpone the final stage of swapping tables, and will keep on syncing the ghost table. Cut-over/swapping would be ready to perform the moment the file is deleted.")
@@ -98,6 +106,8 @@ func main() {

	flag.StringVar(&migrationContext.HooksPath, "hooks-path", "", "directory where hook files are found (default: empty, ie. hooks disabled). Hook files found on this path, and conforming to hook naming conventions will be executed")
	flag.StringVar(&migrationContext.HooksHintMessage, "hooks-hint", "", "arbitrary message to be injected to hooks via GH_OST_HOOKS_HINT, for your convenience")

+	flag.UintVar(&migrationContext.ReplicaServerId, "replica-server-id", 99999, "server id used by gh-ost process. Default: 99999")

	maxLoad := flag.String("max-load", "", "Comma delimited status-name=threshold. e.g: 'Threads_running=100,Threads_connected=500'. When status exceeds threshold, app throttles writes")
	criticalLoad := flag.String("critical-load", "", "Comma delimited status-name=threshold, same format as --max-load. When status exceeds threshold, app panics and quits")
	flag.Int64Var(&migrationContext.CriticalLoadIntervalMilliseconds, "critical-load-interval-millis", 0, "When 0, migration immediately bails out upon meeting critical-load. When non-zero, a second check is done after given interval, and migration only bails out if 2nd check still meets critical load")
@@ -107,8 +117,13 @@ func main() {

	stack := flag.Bool("stack", false, "add stack trace upon error")
	help := flag.Bool("help", false, "Display usage")
	version := flag.Bool("version", false, "Print version & exit")
+	checkFlag := flag.Bool("check-flag", false, "Check if another flag exists/supported. This allows for cross-version scripting. Exits with 0 when all additional provided flags exist, nonzero otherwise. You must provide (dummy) values for flags that require a value. Example: gh-ost --check-flag --cut-over-lock-timeout-seconds --nice-ratio 0")

	flag.Parse()

+	if *checkFlag {
+		return
+	}
	if *help {
		fmt.Fprintf(os.Stderr, "Usage of gh-ost:\n")
		flag.PrintDefaults()
@@ -166,8 +181,14 @@ func main() {

		}
		log.Warning("--test-on-replica-skip-replica-stop enabled. We will not stop replication before cut-over. Ensure you have a plugin that does this.")
	}
-	if migrationContext.AssumeMasterHostname != "" && !migrationContext.AllowedMasterMaster && !migrationContext.IsTungsten {
-		log.Fatalf("--assume-master-host requires either --allow-master-master or --tungsten")
-	}
+	if migrationContext.CliMasterUser != "" && migrationContext.AssumeMasterHostname == "" {
+		log.Fatalf("--master-user requires --assume-master-host")
+	}
+	if migrationContext.CliMasterPassword != "" && migrationContext.AssumeMasterHostname == "" {
+		log.Fatalf("--master-password requires --assume-master-host")
+	}
+	if *replicationLagQuery != "" {
+		log.Warningf("--replication-lag-query is deprecated")
+	}

	switch *cutOver {
@@ -193,11 +214,19 @@ func main() {

	if migrationContext.ServeSocketFile == "" {
		migrationContext.ServeSocketFile = fmt.Sprintf("/tmp/gh-ost.%s.%s.sock", migrationContext.DatabaseName, migrationContext.OriginalTableName)
	}
+	if *askPass {
+		fmt.Println("Password:")
+		bytePassword, err := terminal.ReadPassword(int(syscall.Stdin))
+		if err != nil {
+			log.Fatale(err)
+		}
+		migrationContext.CliPassword = string(bytePassword)
+	}
	migrationContext.SetHeartbeatIntervalMilliseconds(*heartbeatIntervalMillis)
	migrationContext.SetNiceRatio(*niceRatio)
	migrationContext.SetChunkSize(*chunkSize)
+	migrationContext.SetDMLBatchSize(*dmlBatchSize)
	migrationContext.SetMaxLagMillisecondsThrottleThreshold(*maxLagMillis)
	migrationContext.SetReplicationLagQuery(*replicationLagQuery)
	migrationContext.SetThrottleQuery(*throttleQuery)
	migrationContext.SetDefaultNumRetries(*defaultRetries)
	migrationContext.ApplyCredentials()
@@ -214,5 +243,5 @@ func main() {

		migrator.ExecOnFailureHook()
		log.Fatale(err)
	}
-	log.Info("Done")
+	fmt.Fprintf(os.Stdout, "# Done\n")
}
@@ -67,14 +67,18 @@ func (this *Applier) InitDBConnections() (err error) {

	} else {
		this.connectionConfig.ImpliedKey = impliedKey
	}
+	if err := this.readTableColumns(); err != nil {
+		return err
+	}
	log.Infof("Applier initiated on %+v, version %+v", this.connectionConfig.ImpliedKey, this.migrationContext.ApplierMySQLVersion)
	return nil
}

// validateConnection issues a simple can-connect to MySQL
func (this *Applier) validateConnection(db *gosql.DB) error {
-	query := `select @@global.port`
+	query := `select @@global.port, @@global.version`
	var port int
-	if err := db.QueryRow(query).Scan(&port); err != nil {
+	if err := db.QueryRow(query).Scan(&port, &this.migrationContext.ApplierMySQLVersion); err != nil {
		return err
	}
	if port != this.connectionConfig.Key.Port {
@@ -95,6 +99,16 @@ func (this *Applier) validateAndReadTimeZone() error {

	return nil
}

+// readTableColumns reads table columns on applier
+func (this *Applier) readTableColumns() (err error) {
+	log.Infof("Examining table structure on applier")
+	this.migrationContext.OriginalTableColumnsOnApplier, err = mysql.GetTableColumns(this.db, this.migrationContext.DatabaseName, this.migrationContext.OriginalTableName)
+	if err != nil {
+		return err
+	}
+	return nil
+}

// showTableStatus returns the output of `show table status like '...'` command
func (this *Applier) showTableStatus(tableName string) (rowMap sqlutils.RowMap) {
	rowMap = nil
@@ -128,6 +142,10 @@ func (this *Applier) ValidateOrDropExistingTables() error {

			return err
		}
	}
+	if len(this.migrationContext.GetOldTableName()) > mysql.MaxTableNameLength {
+		log.Fatalf("--timestamp-old-table defined, but resulting table name (%s) is too long (only %d characters allowed)", this.migrationContext.GetOldTableName(), mysql.MaxTableNameLength)
+	}

	if this.tableExists(this.migrationContext.GetOldTableName()) {
		return fmt.Errorf("Table %s already exists. Panicking. Use --initially-drop-old-table to force dropping it, though I really prefer that you drop it or rename it away", sql.EscapeName(this.migrationContext.GetOldTableName()))
	}
@@ -292,6 +310,9 @@ func (this *Applier) InitiateHeartbeat() {

		// Generally speaking, we would issue a goroutine, but I'd actually rather
		// have this block the loop rather than spam the master in the event something
		// goes wrong
+		if throttle, _, reasonHint := this.migrationContext.IsThrottled(); throttle && (reasonHint == base.UserCommandThrottleReasonHint) {
+			continue
+		}
		if err := injectHeartbeat(); err != nil {
			return
		}
@@ -315,7 +336,7 @@ func (this *Applier) ExecuteThrottleQuery() (int64, error) {

// ReadMigrationMinValues returns the minimum values to be iterated on rowcopy
func (this *Applier) ReadMigrationMinValues(uniqueKey *sql.UniqueKey) error {
	log.Debugf("Reading migration range according to key: %s", uniqueKey.Name)
-	query, err := sql.BuildUniqueKeyMinValuesPreparedQuery(this.migrationContext.DatabaseName, this.migrationContext.OriginalTableName, uniqueKey.Columns.Names())
+	query, err := sql.BuildUniqueKeyMinValuesPreparedQuery(this.migrationContext.DatabaseName, this.migrationContext.OriginalTableName, &uniqueKey.Columns)
	if err != nil {
		return err
	}

@@ -336,7 +357,7 @@ func (this *Applier) ReadMigrationMinValues(uniqueKey *sql.UniqueKey) error {

// ReadMigrationMaxValues returns the maximum values to be iterated on rowcopy
func (this *Applier) ReadMigrationMaxValues(uniqueKey *sql.UniqueKey) error {
	log.Debugf("Reading migration range according to key: %s", uniqueKey.Name)
-	query, err := sql.BuildUniqueKeyMaxValuesPreparedQuery(this.migrationContext.DatabaseName, this.migrationContext.OriginalTableName, uniqueKey.Columns.Names())
+	query, err := sql.BuildUniqueKeyMaxValuesPreparedQuery(this.migrationContext.DatabaseName, this.migrationContext.OriginalTableName, &uniqueKey.Columns)
	if err != nil {
		return err
	}
@@ -377,7 +398,7 @@ func (this *Applier) CalculateNextIterationRangeEndValues() (hasFurtherRange bool

	query, explodedArgs, err := sql.BuildUniqueKeyRangeEndPreparedQuery(
		this.migrationContext.DatabaseName,
		this.migrationContext.OriginalTableName,
-		this.migrationContext.UniqueKey.Columns.Names(),
+		&this.migrationContext.UniqueKey.Columns,
		this.migrationContext.MigrationIterationRangeMinValues.AbstractValues(),
		this.migrationContext.MigrationRangeMaxValues.AbstractValues(),
		atomic.LoadInt64(&this.migrationContext.ChunkSize),

@@ -419,7 +440,7 @@ func (this *Applier) ApplyIterationInsertQuery() (chunkSize int64, rowsAffected

		this.migrationContext.SharedColumns.Names(),
		this.migrationContext.MappedSharedColumns.Names(),
		this.migrationContext.UniqueKey.Name,
-		this.migrationContext.UniqueKey.Columns.Names(),
+		&this.migrationContext.UniqueKey.Columns,
		this.migrationContext.MigrationIterationRangeMinValues.AbstractValues(),
		this.migrationContext.MigrationIterationRangeMaxValues.AbstractValues(),
		this.migrationContext.GetIteration() == 0,
@ -572,11 +593,22 @@ func (this *Applier) RenameTablesRollback() (renameError error) {
|
||||
// and have them written to the binary log, so that we can then read them via streamer.
|
||||
func (this *Applier) StopSlaveIOThread() error {
|
||||
query := `stop /* gh-ost */ slave io_thread`
|
||||
log.Infof("Stopping replication")
|
||||
log.Infof("Stopping replication IO thread")
|
||||
if _, err := sqlutils.ExecNoPrepare(this.db, query); err != nil {
|
||||
return err
|
||||
}
|
||||
log.Infof("Replication stopped")
|
||||
log.Infof("Replication IO thread stopped")
|
||||
return nil
|
||||
}
|
||||
|
||||
// StartSlaveIOThread is applicable with --test-on-replica
|
||||
func (this *Applier) StartSlaveIOThread() error {
|
||||
query := `start /* gh-ost */ slave io_thread`
|
||||
log.Infof("Starting replication IO thread")
|
||||
if _, err := sqlutils.ExecNoPrepare(this.db, query); err != nil {
|
||||
return err
|
||||
}
|
||||
log.Infof("Replication IO thread started")
|
||||
return nil
|
||||
}
|
||||
|
||||
@ -619,6 +651,18 @@ func (this *Applier) StopReplication() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// StartReplication is used by `--test-on-replica` on cut-over failure
|
||||
func (this *Applier) StartReplication() error {
|
||||
if err := this.StartSlaveIOThread(); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := this.StartSlaveSQLThread(); err != nil {
|
||||
return err
|
||||
}
|
||||
log.Infof("Replication started")
|
||||
return nil
|
||||
}
|
||||
|
||||
// GetSessionLockName returns a name for the special hint session voluntary lock
|
||||
func (this *Applier) GetSessionLockName(sessionId int64) string {
|
||||
return fmt.Sprintf("gh-ost.%d.lock", sessionId)
|
||||
@ -678,8 +722,7 @@ func (this *Applier) DropAtomicCutOverSentryTableIfExists() error {
|
||||
return this.dropTable(tableName)
|
||||
}
|
||||
|
||||
// DropAtomicCutOverSentryTableIfExists checks if the "old" table name
|
||||
// happens to be a cut-over magic table; if so, it drops it.
|
||||
// CreateAtomicCutOverSentryTable
|
||||
func (this *Applier) CreateAtomicCutOverSentryTable() error {
|
||||
if err := this.DropAtomicCutOverSentryTableIfExists(); err != nil {
|
||||
return err
|
||||
@ -688,10 +731,11 @@ func (this *Applier) CreateAtomicCutOverSentryTable() error {
|
||||
|
||||
query := fmt.Sprintf(`create /* gh-ost */ table %s.%s (
|
||||
id int auto_increment primary key
|
||||
) comment='%s'
|
||||
) engine=%s comment='%s'
|
||||
`,
|
||||
sql.EscapeName(this.migrationContext.DatabaseName),
|
||||
sql.EscapeName(tableName),
|
||||
this.migrationContext.TableEngine,
|
||||
atomicCutOverMagicHint,
|
||||
)
|
||||
log.Infof("Creating magic cut-over table %s.%s",
|
||||
@ -934,3 +978,55 @@ func (this *Applier) ApplyDMLEventQuery(dmlEvent *binlog.BinlogDMLEvent) error {
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// ApplyDMLEventQueries applies multiple DML queries onto the _ghost_ table
|
||||
func (this *Applier) ApplyDMLEventQueries(dmlEvents [](*binlog.BinlogDMLEvent)) error {
|
||||
|
||||
var totalDelta int64
|
||||
|
||||
err := func() error {
|
||||
tx, err := this.db.Begin()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
rollback := func(err error) error {
|
||||
tx.Rollback()
|
||||
return err
|
||||
}
|
||||
|
||||
sessionQuery := `SET
|
||||
SESSION time_zone = '+00:00',
|
||||
sql_mode = CONCAT(@@session.sql_mode, ',STRICT_ALL_TABLES')
|
||||
`
|
||||
if _, err := tx.Exec(sessionQuery); err != nil {
|
||||
return rollback(err)
|
||||
}
|
||||
for _, dmlEvent := range dmlEvents {
|
||||
query, args, rowDelta, err := this.buildDMLEventQuery(dmlEvent)
|
||||
if err != nil {
|
||||
return rollback(err)
|
||||
}
|
||||
if _, err := tx.Exec(query, args...); err != nil {
|
||||
err = fmt.Errorf("%s; query=%s; args=%+v", err.Error(), query, args)
|
||||
return rollback(err)
|
||||
}
|
||||
totalDelta += rowDelta
|
||||
}
|
||||
if err := tx.Commit(); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}()
|
||||
|
||||
if err != nil {
|
||||
return log.Errore(err)
|
||||
}
|
||||
// no error
|
||||
atomic.AddInt64(&this.migrationContext.TotalDMLEventsApplied, int64(len(dmlEvents)))
|
||||
if this.migrationContext.CountTableRows {
|
||||
atomic.AddInt64(&this.migrationContext.RowsDeltaEstimate, totalDelta)
|
||||
}
|
||||
log.Debugf("ApplyDMLEventQueries() applied %d events in one transaction", len(dmlEvents))
|
||||
return nil
|
||||
}
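
This is the applier-side half of `--dml-batch-size`: one transaction (and thus one transaction-log sync) per group of events instead of one per event. A minimal, self-contained sketch of the same pattern using plain `database/sql`; the DSN, table and statements here are hypothetical, not gh-ost's own:

```go
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/go-sql-driver/mysql"
)

// applyBatch applies all statements in a single transaction: one commit
// (one fsync of the transaction log) for the whole batch, which is the
// throughput win batching is after.
func applyBatch(db *sql.DB, statements []string) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	for _, statement := range statements {
		if _, err := tx.Exec(statement); err != nil {
			tx.Rollback() // any failure rolls back the entire batch
			return fmt.Errorf("%s; query=%s", err, statement)
		}
	}
	return tx.Commit()
}

func main() {
	// hypothetical DSN and table names, for illustration only
	db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/test")
	if err != nil {
		panic(err)
	}
	defer db.Close()
	_ = applyBatch(db, []string{
		"insert into _mytable_gho (id, val) values (1, 'a')",
		"update _mytable_gho set val = 'b' where id = 1",
	})
}
```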
@@ -30,6 +30,7 @@ const (
	onFailure = "gh-ost-on-failure"
	onStatus = "gh-ost-on-status"
	onStopReplication = "gh-ost-on-stop-replication"
	onStartReplication = "gh-ost-on-start-replication"
)

type HooksExecutor struct {
@@ -152,3 +153,7 @@ func (this *HooksExecutor) onStatus(statusMessage string) error {
func (this *HooksExecutor) onStopReplication() error {
	return this.executeHooks(onStopReplication)
}

func (this *HooksExecutor) onStartReplication() error {
	return this.executeHooks(onStartReplication)
}

@@ -8,8 +8,10 @@ package logic
import (
	gosql "database/sql"
	"fmt"
	"reflect"
	"strings"
	"sync/atomic"
	"time"

	"github.com/github/gh-ost/go/base"
	"github.com/github/gh-ost/go/mysql"
@@ -19,6 +21,8 @@ import (
	"github.com/outbrain/golib/sqlutils"
)

const startSlavePostWaitMilliseconds = 500 * time.Millisecond

// Inspector reads data from the read-MySQL-server (typically a replica, but can be the master)
// It is used for gaining initial status and structure, and later also follow up on progress and changelog
type Inspector struct {
@@ -56,6 +60,7 @@ func (this *Inspector) InitDBConnections() (err error) {
	if err := this.applyBinlogFormat(); err != nil {
		return err
	}
	log.Infof("Inspector initiated on %+v, version %+v", this.connectionConfig.ImpliedKey, this.migrationContext.InspectorMySQLVersion)
	return nil
}

@@ -83,7 +88,7 @@ func (this *Inspector) InspectTableColumnsAndUniqueKeys(tableName string) (colum
	if len(uniqueKeys) == 0 {
		return columns, uniqueKeys, fmt.Errorf("No PRIMARY nor UNIQUE key found in table! Bailing out")
	}
	columns, err = this.getTableColumns(this.migrationContext.DatabaseName, tableName)
	columns, err = mysql.GetTableColumns(this.db, this.migrationContext.DatabaseName, tableName)
	if err != nil {
		return columns, uniqueKeys, err
	}
@@ -93,15 +98,21 @@ func (this *Inspector) InspectTableColumnsAndUniqueKeys(tableName string) (colum

func (this *Inspector) InspectOriginalTable() (err error) {
	this.migrationContext.OriginalTableColumns, this.migrationContext.OriginalTableUniqueKeys, err = this.InspectTableColumnsAndUniqueKeys(this.migrationContext.OriginalTableName)
	if err == nil {
	if err != nil {
		return err
	}
	return nil
}

// InspectOriginalAndGhostTables compares original and ghost tables to see whether the migration
// inspectOriginalAndGhostTables compares original and ghost tables to see whether the migration
// makes sense and is valid. It extracts the list of shared columns and the chosen migration unique key
func (this *Inspector) InspectOriginalAndGhostTables() (err error) {
func (this *Inspector) inspectOriginalAndGhostTables() (err error) {
	originalNamesOnApplier := this.migrationContext.OriginalTableColumnsOnApplier.Names()
	originalNames := this.migrationContext.OriginalTableColumns.Names()
	if !reflect.DeepEqual(originalNames, originalNamesOnApplier) {
		return fmt.Errorf("It seems like table structure is not identical between master and replica. This scenario is not supported.")
	}

	this.migrationContext.GhostTableColumns, this.migrationContext.GhostTableUniqueKeys, err = this.InspectTableColumnsAndUniqueKeys(this.migrationContext.GetGhostTableName())
	if err != nil {
		return err
@@ -136,6 +147,7 @@ func (this *Inspector) InspectOriginalAndGhostTables() (err error) {
	// the `getTableColumns()` function, but it's a later patch and introduces some complexity; I feel
	// comfortable in doing this as a separate step.
	this.applyColumnTypes(this.migrationContext.DatabaseName, this.migrationContext.OriginalTableName, this.migrationContext.OriginalTableColumns, this.migrationContext.SharedColumns)
	this.applyColumnTypes(this.migrationContext.DatabaseName, this.migrationContext.OriginalTableName, &this.migrationContext.UniqueKey.Columns)
	this.applyColumnTypes(this.migrationContext.DatabaseName, this.migrationContext.GetGhostTableName(), this.migrationContext.GhostTableColumns, this.migrationContext.MappedSharedColumns)

	for i := range this.migrationContext.SharedColumns.Columns() {
@@ -157,9 +169,9 @@ func (this *Inspector) InspectOriginalAndGhostTables() (err error) {

// validateConnection issues a simple can-connect to MySQL
func (this *Inspector) validateConnection() error {
	query := `select @@global.port`
	query := `select @@global.port, @@global.version`
	var port int
	if err := this.db.QueryRow(query).Scan(&port); err != nil {
	if err := this.db.QueryRow(query).Scan(&port, &this.migrationContext.InspectorMySQLVersion); err != nil {
		return err
	}
	if port != this.connectionConfig.Key.Port {
@@ -249,6 +261,8 @@ func (this *Inspector) restartReplication() error {
	if startError != nil {
		return startError
	}
	time.Sleep(startSlavePostWaitMilliseconds)

	log.Debugf("Replication restarted")
	return nil
}
@@ -327,12 +341,27 @@ func (this *Inspector) validateLogSlaveUpdates() error {
	if err := this.db.QueryRow(query).Scan(&logSlaveUpdates); err != nil {
		return err
	}
	if !logSlaveUpdates && !this.migrationContext.InspectorIsAlsoApplier() && !this.migrationContext.IsTungsten {
		return fmt.Errorf("%s:%d must have log_slave_updates enabled", this.connectionConfig.Key.Hostname, this.connectionConfig.Key.Port)

	if logSlaveUpdates {
		log.Infof("log_slave_updates validated on %s:%d", this.connectionConfig.Key.Hostname, this.connectionConfig.Key.Port)
		return nil
	}

	log.Infof("binary logs updates validated on %s:%d", this.connectionConfig.Key.Hostname, this.connectionConfig.Key.Port)
	return nil
	if this.migrationContext.IsTungsten {
		log.Warningf("log_slave_updates not found on %s:%d, but --tungsten provided, so I'm proceeding", this.connectionConfig.Key.Hostname, this.connectionConfig.Key.Port)
		return nil
	}

	if this.migrationContext.TestOnReplica || this.migrationContext.MigrateOnReplica {
		return fmt.Errorf("%s:%d must have log_slave_updates enabled for testing/migrating on replica", this.connectionConfig.Key.Hostname, this.connectionConfig.Key.Port)
	}

	if this.migrationContext.InspectorIsAlsoApplier() {
		log.Warningf("log_slave_updates not found on %s:%d, but executing directly on master, so I'm proceeding", this.connectionConfig.Key.Hostname, this.connectionConfig.Key.Port)
		return nil
	}

	return fmt.Errorf("%s:%d must have log_slave_updates enabled for executing migration", this.connectionConfig.Key.Hostname, this.connectionConfig.Key.Port)
}
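
The order of these checks is the interesting part: an explicit `log_slave_updates` wins, `--tungsten` is trusted next, and replica-based operation is refused without the setting. An illustrative (non-authoritative) restatement of that precedence, with plain booleans standing in for the migration-context fields:

```go
package main

import "fmt"

// logSlaveUpdatesOK sketches the validation cascade above; this is a
// hypothetical helper for illustration, not gh-ost's actual API.
func logSlaveUpdatesOK(logSlaveUpdates, isTungsten, onReplica, inspectorIsApplier bool) (bool, string) {
	switch {
	case logSlaveUpdates:
		return true, "log_slave_updates enabled"
	case isTungsten:
		return true, "--tungsten provided; proceeding anyway"
	case onReplica:
		return false, "testing/migrating on replica requires log_slave_updates"
	case inspectorIsApplier:
		return true, "executing directly on master; binlog is written locally"
	default:
		return false, "migrating via replica requires log_slave_updates"
	}
}

func main() {
	ok, reason := logSlaveUpdatesOK(false, false, true, false)
	fmt.Println(ok, reason) // false testing/migrating on replica requires log_slave_updates
}
```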

// validateTable makes sure the table we need to operate on actually exists
@@ -364,6 +393,10 @@ func (this *Inspector) validateTable() error {

// validateTableForeignKeys makes sure no foreign keys exist on the migrated table
func (this *Inspector) validateTableForeignKeys(allowChildForeignKeys bool) error {
	if this.migrationContext.SkipForeignKeyChecks {
		log.Warning("--skip-foreign-key-checks provided: will not check for foreign keys")
		return nil
	}
	query := `
		SELECT
			SUM(REFERENCED_TABLE_NAME IS NOT NULL AND TABLE_SCHEMA=? AND TABLE_NAME=?) as num_child_side_fk,
@@ -478,31 +511,6 @@ func (this *Inspector) CountTableRows() error {
	return nil
}

// getTableColumns reads column list from given table
func (this *Inspector) getTableColumns(databaseName, tableName string) (*sql.ColumnList, error) {
	query := fmt.Sprintf(`
		show columns from %s.%s
		`,
		sql.EscapeName(databaseName),
		sql.EscapeName(tableName),
	)
	columnNames := []string{}
	err := sqlutils.QueryRowsMap(this.db, query, func(rowMap sqlutils.RowMap) error {
		columnNames = append(columnNames, rowMap.GetString("Field"))
		return nil
	})
	if err != nil {
		return nil, err
	}
	if len(columnNames) == 0 {
		return nil, log.Errorf("Found 0 columns on %s.%s. Bailing out",
			sql.EscapeName(databaseName),
			sql.EscapeName(tableName),
		)
	}
	return sql.NewColumnList(columnNames), nil
}

// applyColumnTypes
func (this *Inspector) applyColumnTypes(databaseName, tableName string, columnsLists ...*sql.ColumnList) error {
	query := `
@@ -522,6 +530,11 @@ func (this *Inspector) applyColumnTypes(databaseName, tableName string, columnsL
			columnsList.SetUnsigned(columnName)
		}
	}
	if strings.Contains(columnType, "mediumint") {
		for _, columnsList := range columnsLists {
			columnsList.GetColumn(columnName).Type = sql.MediumIntColumnType
		}
	}
	if strings.Contains(columnType, "timestamp") {
		for _, columnsList := range columnsLists {
			columnsList.GetColumn(columnName).Type = sql.TimestampColumnType
@@ -532,6 +545,11 @@ func (this *Inspector) applyColumnTypes(databaseName, tableName string, columnsL
			columnsList.GetColumn(columnName).Type = sql.DateTimeColumnType
		}
	}
	if strings.HasPrefix(columnType, "enum") {
		for _, columnsList := range columnsLists {
			columnsList.GetColumn(columnName).Type = sql.EnumColumnType
		}
	}
	if charset := m.GetString("CHARACTER_SET_NAME"); charset != "" {
		for _, columnsList := range columnsLists {
			columnsList.SetCharset(columnName, charset)
@@ -668,22 +686,30 @@ func (this *Inspector) showCreateTable(tableName string) (createTableStatement s
}

// readChangelogState reads changelog hints
func (this *Inspector) readChangelogState() (map[string]string, error) {
func (this *Inspector) readChangelogState(hint string) (string, error) {
	query := fmt.Sprintf(`
		select hint, value from %s.%s where id <= 255
		select hint, value from %s.%s where hint = ? and id <= 255
		`,
		sql.EscapeName(this.migrationContext.DatabaseName),
		sql.EscapeName(this.migrationContext.GetChangelogTableName()),
	)
	result := make(map[string]string)
	result := ""
	err := sqlutils.QueryRowsMap(this.db, query, func(m sqlutils.RowMap) error {
		result[m.GetString("hint")] = m.GetString("value")
		result = m.GetString("value")
		return nil
	})
	}, hint)
	return result, err
}

func (this *Inspector) getMasterConnectionConfig() (applierConfig *mysql.ConnectionConfig, err error) {
	log.Infof("Recursively searching for replication master")
	visitedKeys := mysql.NewInstanceKeyMap()
	return mysql.GetMasterConnectionConfigSafe(this.connectionConfig, visitedKeys, this.migrationContext.AllowedMasterMaster)
}

func (this *Inspector) getReplicationLag() (replicationLag time.Duration, err error) {
	replicationLag, err = mysql.GetReplicationLag(
		this.migrationContext.InspectorConnectionConfig,
	)
	return replicationLag, err
}

@@ -11,6 +11,7 @@ import (
	"math"
	"os"
	"os/signal"
	"strings"
	"sync/atomic"
	"syscall"
	"time"
@@ -26,12 +27,31 @@ import (
type ChangelogState string

const (
	TablesInPlace ChangelogState = "TablesInPlace"
	GhostTableMigrated ChangelogState = "GhostTableMigrated"
	AllEventsUpToLockProcessed = "AllEventsUpToLockProcessed"
)

func ReadChangelogState(s string) ChangelogState {
	return ChangelogState(strings.Split(s, ":")[0])
}
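
`ReadChangelogState` tolerates an optional suffix after a colon; this is what later allows the cut-over code to attach a per-attempt challenge token to the state hint while older readers still recognize the state. A small usage sketch (the numeric suffix is a hypothetical nanosecond timestamp):

```go
package main

import (
	"fmt"
	"strings"
)

type ChangelogState string

// same parsing rule as above: everything before the first colon is the state
func ReadChangelogState(s string) ChangelogState {
	return ChangelogState(strings.Split(s, ":")[0])
}

func main() {
	fmt.Println(ReadChangelogState("AllEventsUpToLockProcessed:1478954321000000000"))
	// Output: AllEventsUpToLockProcessed
}
```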

type tableWriteFunc func() error

type applyEventStruct struct {
	writeFunc *tableWriteFunc
	dmlEvent *binlog.BinlogDMLEvent
}

func newApplyEventStructByFunc(writeFunc *tableWriteFunc) *applyEventStruct {
	result := &applyEventStruct{writeFunc: writeFunc}
	return result
}

func newApplyEventStructByDML(dmlEvent *binlog.BinlogDMLEvent) *applyEventStruct {
	result := &applyEventStruct{dmlEvent: dmlEvent}
	return result
}

const (
	applyEventsQueueBuffer = 100
)
@@ -58,16 +78,15 @@ type Migrator struct {
	migrationContext *base.MigrationContext

	firstThrottlingCollected chan bool
	tablesInPlace chan bool
	ghostTableMigrated chan bool
	rowCopyComplete chan bool
	allEventsUpToLockProcessed chan bool
	allEventsUpToLockProcessed chan string

	rowCopyCompleteFlag int64
	inCutOverCriticalActionFlag int64
	rowCopyCompleteFlag int64
	// copyRowsQueue should not be buffered; if buffered some non-damaging but
	// excessive work happens at the end of the iteration as new copy-jobs arrive before realizing the copy is complete
	copyRowsQueue chan tableWriteFunc
	applyEventsQueue chan tableWriteFunc
	applyEventsQueue chan *applyEventStruct

	handledChangelogStates map[string]bool

@@ -78,13 +97,13 @@ func NewMigrator() *Migrator {
	migrator := &Migrator{
		migrationContext: base.GetMigrationContext(),
		parser: sql.NewParser(),
		tablesInPlace: make(chan bool),
		ghostTableMigrated: make(chan bool),
		firstThrottlingCollected: make(chan bool, 1),
		rowCopyComplete: make(chan bool),
		allEventsUpToLockProcessed: make(chan bool),
		allEventsUpToLockProcessed: make(chan string),

		copyRowsQueue: make(chan tableWriteFunc),
		applyEventsQueue: make(chan tableWriteFunc, applyEventsQueueBuffer),
		applyEventsQueue: make(chan *applyEventStruct, applyEventsQueueBuffer),
		handledChangelogStates: make(map[string]bool),
		progressHistory: NewProgressHistory(),
	}
@@ -174,7 +193,7 @@ func (this *Migrator) consumeRowCopyComplete() {
}

func (this *Migrator) canStopStreaming() bool {
	return false
	return atomic.LoadInt64(&this.migrationContext.CutOverCompleteFlag) != 0
}

// onChangelogStateEvent is called when a binlog event operation on the changelog table is intercepted.
@@ -183,16 +202,18 @@ func (this *Migrator) onChangelogStateEvent(dmlEvent *binlog.BinlogDMLEvent) (er
	if hint := dmlEvent.NewColumnValues.StringColumn(2); hint != "state" {
		return nil
	}
	changelogState := ChangelogState(dmlEvent.NewColumnValues.StringColumn(3))
	changelogStateString := dmlEvent.NewColumnValues.StringColumn(3)
	changelogState := ReadChangelogState(changelogStateString)
	log.Infof("Intercepted changelog state %s", changelogState)
	switch changelogState {
	case TablesInPlace:
	case GhostTableMigrated:
		{
			this.tablesInPlace <- true
			this.ghostTableMigrated <- true
		}
	case AllEventsUpToLockProcessed:
		{
			applyEventFunc := func() error {
				this.allEventsUpToLockProcessed <- true
			var applyEventFunc tableWriteFunc = func() error {
				this.allEventsUpToLockProcessed <- changelogStateString
				return nil
			}
			// at this point we know all events up to lock have been read from the streamer,
@@ -201,7 +222,7 @@ func (this *Migrator) onChangelogStateEvent(dmlEvent *binlog.BinlogDMLEvent) (er
			// So as not to create a potential deadlock, we write this func to applyEventsQueue
			// asynchronously, understanding it doesn't really matter.
			go func() {
				this.applyEventsQueue <- applyEventFunc
				this.applyEventsQueue <- newApplyEventStructByFunc(&applyEventFunc)
			}()
		}
	default:
@@ -209,7 +230,7 @@ func (this *Migrator) onChangelogStateEvent(dmlEvent *binlog.BinlogDMLEvent) (er
		return fmt.Errorf("Unknown changelog state: %+v", changelogState)
	}
	log.Debugf("Received state %+v", changelogState)
	log.Infof("Handled changelog state %s", changelogState)
	return nil
}

@@ -226,7 +247,7 @@ func (this *Migrator) validateStatement() (err error) {
	if this.parser.HasNonTrivialRenames() && !this.migrationContext.SkipRenamedColumns {
		this.migrationContext.ColumnRenameMap = this.parser.GetNonTrivialRenames()
		if !this.migrationContext.ApproveRenamedColumns {
			return fmt.Errorf("gh-ost believes the ALTER statement renames columns, as follows: %v; as precation, you are asked to confirm gh-ost is correct, and provide with `--approve-renamed-columns`, and we're all happy. Or you can skip renamed columns via `--skip-renamed-columns`, in which case column data may be lost", this.parser.GetNonTrivialRenames())
			return fmt.Errorf("gh-ost believes the ALTER statement renames columns, as follows: %v; as precaution, you are asked to confirm gh-ost is correct, and provide with `--approve-renamed-columns`, and we're all happy. Or you can skip renamed columns via `--skip-renamed-columns`, in which case column data may be lost", this.parser.GetNonTrivialRenames())
		}
		log.Infof("Alter statement has column(s) renamed. gh-ost finds the following renames: %v; --approve-renamed-columns is given and so migration proceeds.", this.parser.GetNonTrivialRenames())
	}
@@ -294,14 +315,15 @@ func (this *Migrator) Migrate() (err error) {
		return err
	}

	log.Infof("Waiting for tables to be in place")
	<-this.tablesInPlace
	log.Debugf("Tables are in place")
	initialLag, _ := this.inspector.getReplicationLag()
	log.Infof("Waiting for ghost table to be migrated. Current lag is %+v", initialLag)
	<-this.ghostTableMigrated
	log.Debugf("ghost table migrated")
	// Yay! We now know the Ghost and Changelog tables are good to examine!
	// When running on replica, this means the replica has those tables. When running
	// on master this is always true, of course, and yet it also implies this knowledge
	// is in the binlogs.
	if err := this.inspector.InspectOriginalAndGhostTables(); err != nil {
	if err := this.inspector.inspectOriginalAndGhostTables(); err != nil {
		return err
	}
	// Validation complete! We're good to execute this migration
@@ -345,9 +367,10 @@ func (this *Migrator) Migrate() (err error) {
	if err := this.hooksExecutor.onBeforeCutOver(); err != nil {
		return err
	}
	if err := this.cutOver(); err != nil {
	if err := this.retryOperation(this.cutOver); err != nil {
		return err
	}
	atomic.StoreInt64(&this.migrationContext.CutOverCompleteFlag, 1)

	if err := this.finalCleanup(); err != nil {
		return nil
@@ -365,8 +388,38 @@ func (this *Migrator) ExecOnFailureHook() (err error) {
	return this.hooksExecutor.onFailure()
}

func (this *Migrator) handleCutOverResult(cutOverError error) (err error) {
	if this.migrationContext.TestOnReplica {
		// We're merely testing, we don't want to keep this state. Rollback the renames as possible
		this.applier.RenameTablesRollback()
	}
	if cutOverError == nil {
		return nil
	}
	// Only on error:

	if this.migrationContext.TestOnReplica {
		// With `--test-on-replica` we stop replication thread, and then proceed to use
		// the same cut-over phase as the master would use. That means we take locks
		// and swap the tables.
		// The difference is that we will later swap the tables back.
		if err := this.hooksExecutor.onStartReplication(); err != nil {
			return log.Errore(err)
		}
		if this.migrationContext.TestOnReplicaSkipReplicaStop {
			log.Warningf("--test-on-replica-skip-replica-stop enabled, we are not starting replication.")
		} else {
			log.Debugf("testing on replica. Starting replication IO thread after cut-over failure")
			if err := this.retryOperation(this.applier.StartReplication); err != nil {
				return log.Errore(err)
			}
		}
	}
	return nil
}

// cutOver performs the final step of migration, based on migration
// type (on replica? bumpy? safe?)
// type (on replica? atomic? safe?)
func (this *Migrator) cutOver() (err error) {
	if this.migrationContext.Noop {
		log.Debugf("Noop operation; not really swapping tables")
@@ -378,16 +431,18 @@ func (this *Migrator) cutOver() (err error) {
	})

	this.migrationContext.MarkPointOfInterest()
	log.Debugf("checking for cut-over postpone")
	this.sleepWhileTrue(
		func() (bool, error) {
			if this.migrationContext.PostponeCutOverFlagFile == "" {
				return false, nil
			}
			if atomic.LoadInt64(&this.migrationContext.UserCommandedUnpostponeFlag) > 0 {
				atomic.StoreInt64(&this.migrationContext.UserCommandedUnpostponeFlag, 0)
				return false, nil
			}
			if base.FileExists(this.migrationContext.PostponeCutOverFlagFile) {
				// Throttle file defined and exists!
				// Postpone file defined and exists!
				if atomic.LoadInt64(&this.migrationContext.IsPostponingCutOver) == 0 {
					if err := this.hooksExecutor.onBeginPostponed(); err != nil {
						return true, err
@@ -401,6 +456,7 @@ func (this *Migrator) cutOver() (err error) {
	)
	atomic.StoreInt64(&this.migrationContext.IsPostponingCutOver, 0)
	this.migrationContext.MarkPointOfInterest()
	log.Debugf("checking for cut-over postpone: complete")

	if this.migrationContext.TestOnReplica {
		// With `--test-on-replica` we stop replication thread, and then proceed to use
@@ -418,26 +474,17 @@ func (this *Migrator) cutOver() (err error) {
				return err
			}
		}
		// We're merely testing, we don't want to keep this state. Rollback the renames as possible
		defer this.applier.RenameTablesRollback()
		// We further proceed to do the cutover by normal means; the 'defer' above will rollback the swap
	}
	if this.migrationContext.CutOverType == base.CutOverAtomic {
		// Atomic solution: we use low timeout and multiple attempts. But for
		// each failed attempt, we throttle until replication lag is back to normal
		err := this.retryOperation(
			func() error {
				return this.executeAndThrottleOnError(this.atomicCutOver)
			},
		)
		err := this.atomicCutOver()
		this.handleCutOverResult(err)
		return err
	}
	if this.migrationContext.CutOverType == base.CutOverTwoStep {
		err := this.retryOperation(
			func() error {
				return this.executeAndThrottleOnError(this.cutOverTwoStep)
			},
		)
		err := this.cutOverTwoStep()
		this.handleCutOverResult(err)
		return err
	}
	return log.Fatalf("Unknown cut-over type: %d; should never get here!", this.migrationContext.CutOverType)
@@ -446,16 +493,35 @@ func (this *Migrator) cutOver() (err error) {
// Inject the "AllEventsUpToLockProcessed" state hint, wait for it to appear in the binary logs,
// make sure the queue is drained.
func (this *Migrator) waitForEventsUpToLock() (err error) {
	timeout := time.NewTimer(time.Second * time.Duration(this.migrationContext.CutOverLockTimeoutSeconds))

	this.migrationContext.MarkPointOfInterest()
	waitForEventsUpToLockStartTime := time.Now()

	log.Infof("Writing changelog state: %+v", AllEventsUpToLockProcessed)
	if _, err := this.applier.WriteChangelogState(string(AllEventsUpToLockProcessed)); err != nil {
	allEventsUpToLockProcessedChallenge := fmt.Sprintf("%s:%d", string(AllEventsUpToLockProcessed), waitForEventsUpToLockStartTime.UnixNano())
	log.Infof("Writing changelog state: %+v", allEventsUpToLockProcessedChallenge)
	if _, err := this.applier.WriteChangelogState(allEventsUpToLockProcessedChallenge); err != nil {
		return err
	}
	log.Infof("Waiting for events up to lock")
	atomic.StoreInt64(&this.migrationContext.AllEventsUpToLockProcessedInjectedFlag, 1)
	<-this.allEventsUpToLockProcessed
	for found := false; !found; {
		select {
		case <-timeout.C:
			{
				return log.Errorf("Timeout while waiting for events up to lock")
			}
		case state := <-this.allEventsUpToLockProcessed:
			{
				if state == allEventsUpToLockProcessedChallenge {
					log.Infof("Waiting for events up to lock: got %s", state)
					found = true
				} else {
					log.Infof("Waiting for events up to lock: skipping %s", state)
				}
			}
		}
	}
	waitForEventsUpToLockDuration := time.Since(waitForEventsUpToLockStartTime)

	log.Infof("Done waiting for events up to lock; duration=%+v", waitForEventsUpToLockDuration)
@@ -469,8 +535,8 @@ func (this *Migrator) waitForEventsUpToLock() (err error) {
// There is a point in time where the "original" table does not exist and queries are non-blocked
// and failing.
func (this *Migrator) cutOverTwoStep() (err error) {
	atomic.StoreInt64(&this.inCutOverCriticalActionFlag, 1)
	defer atomic.StoreInt64(&this.inCutOverCriticalActionFlag, 0)
	atomic.StoreInt64(&this.migrationContext.InCutOverCriticalSectionFlag, 1)
	defer atomic.StoreInt64(&this.migrationContext.InCutOverCriticalSectionFlag, 0)
	atomic.StoreInt64(&this.migrationContext.AllEventsUpToLockProcessedInjectedFlag, 0)

	if err := this.retryOperation(this.applier.LockOriginalTable); err != nil {
@@ -495,10 +561,12 @@ func (this *Migrator) cutOverTwoStep() (err error) {

// atomicCutOver
func (this *Migrator) atomicCutOver() (err error) {
	atomic.StoreInt64(&this.inCutOverCriticalActionFlag, 1)
	defer atomic.StoreInt64(&this.inCutOverCriticalActionFlag, 0)
	atomic.StoreInt64(&this.migrationContext.InCutOverCriticalSectionFlag, 1)
	defer atomic.StoreInt64(&this.migrationContext.InCutOverCriticalSectionFlag, 0)

	okToUnlockTable := make(chan bool, 4)
	defer func() {
		okToUnlockTable <- true
		this.applier.DropAtomicCutOverSentryTableIfExists()
	}()

@@ -506,7 +574,6 @@ func (this *Migrator) atomicCutOver() (err error) {

	lockOriginalSessionIdChan := make(chan int64, 2)
	tableLocked := make(chan error, 2)
	okToUnlockTable := make(chan bool, 3)
	tableUnlocked := make(chan error, 2)
	go func() {
		if err := this.applier.AtomicCutOverMagicLock(lockOriginalSessionIdChan, tableLocked, okToUnlockTable, tableUnlocked); err != nil {
@@ -520,7 +587,9 @@ func (this *Migrator) atomicCutOver() (err error) {
	log.Infof("Session locking original & magic tables is %+v", lockOriginalSessionId)
	// At this point we know the original table is locked.
	// We know any newly incoming DML on original table is blocked.
	this.waitForEventsUpToLock()
	if err := this.waitForEventsUpToLock(); err != nil {
		return log.Errore(err)
	}

	// Step 2
	// We now attempt an atomic RENAME on original & ghost tables, and expect it to block.
@@ -632,6 +701,12 @@ func (this *Migrator) initiateInspector() (err error) {
			return err
		}
		this.migrationContext.ApplierConnectionConfig = this.migrationContext.InspectorConnectionConfig.DuplicateCredentials(*key)
		if this.migrationContext.CliMasterUser != "" {
			this.migrationContext.ApplierConnectionConfig.User = this.migrationContext.CliMasterUser
		}
		if this.migrationContext.CliMasterPassword != "" {
			this.migrationContext.ApplierConnectionConfig.Password = this.migrationContext.CliMasterPassword
		}
		log.Infof("Master forced to be %+v", *this.migrationContext.ApplierConnectionConfig.ImpliedKey)
	}
	// validate configs
@@ -697,11 +772,6 @@ func (this *Migrator) printMigrationStatusHint(writers ...io.Writer) {
		criticalLoad.String(),
		this.migrationContext.GetNiceRatio(),
	))
	if replicationLagQuery := this.migrationContext.GetReplicationLagQuery(); replicationLagQuery != "" {
		fmt.Fprintln(w, fmt.Sprintf("# replication-lag-query: %+v",
			replicationLagQuery,
		))
	}
	if this.migrationContext.ThrottleFlagFile != "" {
		setIndicator := ""
		if base.FileExists(this.migrationContext.ThrottleFlagFile) {
@@ -725,6 +795,12 @@ func (this *Migrator) printMigrationStatusHint(writers ...io.Writer) {
			throttleQuery,
		))
	}
	if throttleControlReplicaKeys := this.migrationContext.GetThrottleControlReplicaKeys(); throttleControlReplicaKeys.Len() > 0 {
		fmt.Fprintln(w, fmt.Sprintf("# throttle-control-replicas count: %+v",
			throttleControlReplicaKeys.Len(),
		))
	}

	if this.migrationContext.PostponeCutOverFlagFile != "" {
		setIndicator := ""
		if base.FileExists(this.migrationContext.PostponeCutOverFlagFile) {
@@ -806,7 +882,7 @@ func (this *Migrator) printStatus(rule PrintStatusRule, writers ...io.Writer) {
	} else if atomic.LoadInt64(&this.migrationContext.IsPostponingCutOver) > 0 {
		visualETA = "due"
		state = "postponing cut-over"
	} else if isThrottled, throttleReason := this.migrationContext.IsThrottled(); isThrottled {
	} else if isThrottled, throttleReason, _ := this.migrationContext.IsThrottled(); isThrottled {
		state = fmt.Sprintf("throttled, %s", throttleReason)
	}

@@ -891,11 +967,7 @@ func (this *Migrator) addDMLEventsListener() error {
		this.migrationContext.DatabaseName,
		this.migrationContext.OriginalTableName,
		func(dmlEvent *binlog.BinlogDMLEvent) error {
			// Create a task to apply the DML event; this will be executed by executeWriteFuncs()
			applyEventFunc := func() error {
				return this.applier.ApplyDMLEventQuery(dmlEvent)
			}
			this.applyEventsQueue <- applyEventFunc
			this.applyEventsQueue <- newApplyEventStructByDML(dmlEvent)
			return nil
		},
	)
@@ -908,7 +980,9 @@ func (this *Migrator) initiateThrottler() error {

	go this.throttler.initiateThrottlerCollection(this.firstThrottlingCollected)
	log.Infof("Waiting for first throttle metrics to be collected")
	<-this.firstThrottlingCollected
	<-this.firstThrottlingCollected // replication lag
	<-this.firstThrottlingCollected // other metrics
	log.Infof("First throttle metrics collected")
	go this.throttler.initiateThrottlerChecks()

	return nil
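
The double receive is deliberate: the collection routine now signals readiness twice, once for replication lag and once for the remaining metrics, so the migrator only proceeds when both have produced a first reading. A toy sketch of that handshake:

```go
package main

func main() {
	firstCollected := make(chan bool, 2)
	go func() { firstCollected <- true }() // e.g. replication lag collector reports once
	go func() { firstCollected <- true }() // e.g. general metrics collector reports once
	<-firstCollected
	<-firstCollected // both collectors have reported; safe to start throttler checks
}
```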
@@ -930,12 +1004,13 @@ func (this *Migrator) initiateApplier() error {
		log.Errorf("Unable to create ghost table, see further error details. Perhaps a previous migration failed without dropping the table? Bailing out")
		return err
	}

	if err := this.applier.AlterGhost(); err != nil {
		log.Errorf("Unable to ALTER ghost table, see further error details. Bailing out")
		return err
	}

	this.applier.WriteChangelogState(string(TablesInPlace))
	this.applier.WriteChangelogState(string(GhostTableMigrated))
	go this.applier.InitiateHeartbeat()
	return nil
}
@@ -959,11 +1034,13 @@ func (this *Migrator) iterateChunks() error {
	for {
		if atomic.LoadInt64(&this.rowCopyCompleteFlag) == 1 {
			// Done
			// There's another such check down the line
			return nil
		}
		copyRowsFunc := func() error {
			if atomic.LoadInt64(&this.rowCopyCompleteFlag) == 1 {
				// Done
				// Done.
				// There's another such check down the line
				return nil
			}
			hasFurtherRange, err := this.applier.CalculateNextIterationRangeEndValues()
@@ -975,6 +1052,17 @@ func (this *Migrator) iterateChunks() error {
			}
			// Copy task:
			applyCopyRowsFunc := func() error {
				if atomic.LoadInt64(&this.rowCopyCompleteFlag) == 1 {
					// No need for more writes.
					// This is the de-facto place where we avoid writing in the event of completed cut-over.
					// There could _still_ be a race condition, but that's as close as we can get.
					// What about the race condition? Well, there's actually no data integrity issue.
					// when rowCopyCompleteFlag==1 that means **guaranteed** all necessary rows have been copied.
					// But some are still then collected at the binary log, and these are the ones we're trying to
					// not apply here. If the race condition wins over us, then we just attempt to apply onto the
					// _ghost_ table, which no longer exists. So, bothersome error messages and all, but no damage.
					return nil
				}
				_, rowsAffected, _, err := this.applier.ApplyIterationInsertQuery()
				if err != nil {
					return terminateRowIteration(err)
@@ -991,6 +1079,57 @@ func (this *Migrator) iterateChunks() error {
	return nil
}

func (this *Migrator) onApplyEventStruct(eventStruct *applyEventStruct) error {
	handleNonDMLEventStruct := func(eventStruct *applyEventStruct) error {
		if eventStruct.writeFunc != nil {
			if err := this.retryOperation(*eventStruct.writeFunc); err != nil {
				return log.Errore(err)
			}
		}
		return nil
	}
	if eventStruct.dmlEvent == nil {
		return handleNonDMLEventStruct(eventStruct)
	}
	if eventStruct.dmlEvent != nil {
		dmlEvents := [](*binlog.BinlogDMLEvent){}
		dmlEvents = append(dmlEvents, eventStruct.dmlEvent)
		var nonDmlStructToApply *applyEventStruct

		availableEvents := len(this.applyEventsQueue)
		batchSize := int(atomic.LoadInt64(&this.migrationContext.DMLBatchSize))
		if availableEvents > batchSize-1 {
			// The "- 1" is because we already consumed one event: the original event that led to this function getting called.
			// So, if DMLBatchSize==1 we wish to not process any further events
			availableEvents = batchSize - 1
		}
		for i := 0; i < availableEvents; i++ {
			additionalStruct := <-this.applyEventsQueue
			if additionalStruct.dmlEvent == nil {
				// Not a DML. We don't group this, and we don't batch any further
				nonDmlStructToApply = additionalStruct
				break
			}
			dmlEvents = append(dmlEvents, additionalStruct.dmlEvent)
		}
		// Create a task to apply the DML event; this will be executed by executeWriteFuncs()
		var applyEventFunc tableWriteFunc = func() error {
			return this.applier.ApplyDMLEventQueries(dmlEvents)
		}
		if err := this.retryOperation(applyEventFunc); err != nil {
			return log.Errore(err)
		}
		if nonDmlStructToApply != nil {
			// We pulled DML events from the queue, and then we hit a non-DML event. Wait!
			// We need to handle it!
			if err := handleNonDMLEventStruct(nonDmlStructToApply); err != nil {
				return log.Errore(err)
			}
		}
	}
	return nil
}
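
The batching trick above is that `len(this.applyEventsQueue)` bounds the loop, so grouping never blocks waiting for events that have not arrived: a batch is simply whatever is already queued, capped by `--dml-batch-size`. A generic sketch of the same drain pattern (strings stand in for DML events; names are illustrative):

```go
package main

import "fmt"

// drainBatch takes one already-received item, then opportunistically takes up
// to batchSize-1 more without ever blocking, by bounding the loop with len(ch).
func drainBatch(first string, ch chan string, batchSize int) []string {
	batch := []string{first}
	available := len(ch)
	if available > batchSize-1 {
		available = batchSize - 1 // the first event already counts toward the batch
	}
	for i := 0; i < available; i++ {
		batch = append(batch, <-ch)
	}
	return batch
}

func main() {
	ch := make(chan string, 10)
	for _, e := range []string{"e2", "e3", "e4", "e5"} {
		ch <- e
	}
	fmt.Println(drainBatch("e1", ch, 3)) // [e1 e2 e3]
}
```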

// executeWriteFuncs writes data via applier: both the rowcopy and the events backlog.
// This is where the ghost table gets the data. The function fills the data single-threaded.
// Both event backlog and rowcopy events are polled; the backlog events have precedence.
@@ -1000,22 +1139,15 @@ func (this *Migrator) executeWriteFuncs() error {
		return nil
	}
	for {
		if atomic.LoadInt64(&this.inCutOverCriticalActionFlag) == 0 {
			// we don't throttle when cutting over. We _do_ throttle:
			// - during copy phase
			// - just before cut-over
			// - in between cut-over retries
			this.throttler.throttle(nil)
			// When cutting over, we need to be aggressive. Cut-over holds table locks.
			// We need to release those asap.
		}
		this.throttler.throttle(nil)

		// We give higher priority to event processing, then secondary priority to
		// rowcopy
		select {
		case applyEventFunc := <-this.applyEventsQueue:
		case eventStruct := <-this.applyEventsQueue:
			{
				if err := this.retryOperation(applyEventFunc); err != nil {
					return log.Errore(err)
				if err := this.onApplyEventStruct(eventStruct); err != nil {
					return err
				}
			}
		default:
@@ -1061,6 +1193,9 @@ func (this *Migrator) finalCleanup() error {
			log.Errore(err)
		}
	}
	if err := this.eventsStreamer.Close(); err != nil {
		log.Errore(err)
	}

	if err := this.retryOperation(this.applier.DropChangelogTable); err != nil {
		return err

@@ -126,7 +126,7 @@ func (this *Server) applyServerCommand(command string, writer *bufio.Writer) (pr
	if len(tokens) > 1 {
		arg = strings.TrimSpace(tokens[1])
	}

	argIsQuestion := (arg == "?")
	throttleHint := "# Note: you may only throttle for as long as your binary logs are not purged\n"

	if err := this.hooksExecutor.onInteractiveCommand(command); err != nil {
@@ -152,6 +152,7 @@ no-throttle # End forced throttling (other throttling m
unpostpone # Bail out a cut-over postpone; proceed to cut-over
panic # panic and quit without cleanup
help # This message
- use '?' (question mark) as argument to get info rather than set. e.g. "max-load=?" will just print out current max-load.
`)
	}
	case "sup":
@@ -160,6 +161,10 @@ help # This message
		return ForcePrintStatusAndHintRule, nil
	case "chunk-size":
		{
			if argIsQuestion {
				fmt.Fprintf(writer, "%+v\n", atomic.LoadInt64(&this.migrationContext.ChunkSize))
				return NoPrintStatusRule, nil
			}
			if chunkSize, err := strconv.Atoi(arg); err != nil {
				return NoPrintStatusRule, err
			} else {
@@ -169,6 +174,10 @@ help # This message
		}
	case "max-lag-millis":
		{
			if argIsQuestion {
				fmt.Fprintf(writer, "%+v\n", atomic.LoadInt64(&this.migrationContext.MaxLagMillisecondsThrottleThreshold))
				return NoPrintStatusRule, nil
			}
			if maxLagMillis, err := strconv.Atoi(arg); err != nil {
				return NoPrintStatusRule, err
			} else {
@@ -178,11 +187,14 @@ help # This message
		}
	case "replication-lag-query":
		{
			this.migrationContext.SetReplicationLagQuery(arg)
			return ForcePrintStatusAndHintRule, nil
			return NoPrintStatusRule, fmt.Errorf("replication-lag-query is deprecated. gh-ost uses an internal, subsecond resolution query")
		}
	case "nice-ratio":
		{
			if argIsQuestion {
				fmt.Fprintf(writer, "%+v\n", this.migrationContext.GetNiceRatio())
				return NoPrintStatusRule, nil
			}
			if niceRatio, err := strconv.ParseFloat(arg, 64); err != nil {
				return NoPrintStatusRule, err
			} else {
@@ -192,6 +204,11 @@ help # This message
		}
	case "max-load":
		{
			if argIsQuestion {
				maxLoad := this.migrationContext.GetMaxLoad()
				fmt.Fprintf(writer, "%s\n", maxLoad.String())
				return NoPrintStatusRule, nil
			}
			if err := this.migrationContext.ReadMaxLoad(arg); err != nil {
				return NoPrintStatusRule, err
			}
@@ -199,6 +216,11 @@ help # This message
		}
	case "critical-load":
		{
			if argIsQuestion {
				criticalLoad := this.migrationContext.GetCriticalLoad()
				fmt.Fprintf(writer, "%s\n", criticalLoad.String())
				return NoPrintStatusRule, nil
			}
			if err := this.migrationContext.ReadCriticalLoad(arg); err != nil {
				return NoPrintStatusRule, err
			}
@@ -206,12 +228,20 @@ help # This message
		}
	case "throttle-query":
		{
			if argIsQuestion {
				fmt.Fprintf(writer, "%+v\n", this.migrationContext.GetThrottleQuery())
				return NoPrintStatusRule, nil
			}
			this.migrationContext.SetThrottleQuery(arg)
			fmt.Fprintf(writer, throttleHint)
			return ForcePrintStatusAndHintRule, nil
		}
	case "throttle-control-replicas":
		{
			if argIsQuestion {
				fmt.Fprintf(writer, "%s\n", this.migrationContext.GetThrottleControlReplicaKeys().ToCommaDelimitedList())
				return NoPrintStatusRule, nil
			}
			if err := this.migrationContext.ReadThrottleControlReplicaKeys(arg); err != nil {
				return NoPrintStatusRule, err
			}

@@ -217,3 +217,9 @@ func (this *EventsStreamer) StreamEvents(canStopStreaming func() bool) error {
		}
	}
}

func (this *EventsStreamer) Close() (err error) {
	err = this.binlogReader.Close()
	log.Infof("Closed streamer connection. err=%+v", err)
	return err
}

@@ -12,7 +12,9 @@ import (

	"github.com/github/gh-ost/go/base"
	"github.com/github/gh-ost/go/mysql"
	"github.com/github/gh-ost/go/sql"
	"github.com/outbrain/golib/log"
	"github.com/outbrain/golib/sqlutils"
)

// Throttler collects metrics related to throttling and makes informed decisions
@@ -34,16 +36,16 @@ func NewThrottler(applier *Applier, inspector *Inspector) *Throttler {
// shouldThrottle performs checks to see whether we should currently be throttling.
// It merely observes the metrics collected by other components, it does not issue
// its own metric collection.
func (this *Throttler) shouldThrottle() (result bool, reason string) {
func (this *Throttler) shouldThrottle() (result bool, reason string, reasonHint base.ThrottleReasonHint) {
	generalCheckResult := this.migrationContext.GetThrottleGeneralCheckResult()
	if generalCheckResult.ShouldThrottle {
		return generalCheckResult.ShouldThrottle, generalCheckResult.Reason
		return generalCheckResult.ShouldThrottle, generalCheckResult.Reason, generalCheckResult.ReasonHint
	}
	// Replication lag throttle
	maxLagMillisecondsThrottleThreshold := atomic.LoadInt64(&this.migrationContext.MaxLagMillisecondsThrottleThreshold)
	lag := atomic.LoadInt64(&this.migrationContext.CurrentLag)
	if time.Duration(lag) > time.Duration(maxLagMillisecondsThrottleThreshold)*time.Millisecond {
		return true, fmt.Sprintf("lag=%fs", time.Duration(lag).Seconds())
		return true, fmt.Sprintf("lag=%fs", time.Duration(lag).Seconds()), base.NoThrottleReasonHint
	}
	checkThrottleControlReplicas := true
	if (this.migrationContext.TestOnReplica || this.migrationContext.MigrateOnReplica) && (atomic.LoadInt64(&this.migrationContext.AllEventsUpToLockProcessedInjectedFlag) > 0) {
@@ -52,66 +54,134 @@ func (this *Throttler) shouldThrottle() (result bool, reason string) {
	if checkThrottleControlReplicas {
		lagResult := this.migrationContext.GetControlReplicasLagResult()
		if lagResult.Err != nil {
			return true, fmt.Sprintf("%+v %+v", lagResult.Key, lagResult.Err)
			return true, fmt.Sprintf("%+v %+v", lagResult.Key, lagResult.Err), base.NoThrottleReasonHint
		}
		if lagResult.Lag > time.Duration(maxLagMillisecondsThrottleThreshold)*time.Millisecond {
			return true, fmt.Sprintf("%+v replica-lag=%fs", lagResult.Key, lagResult.Lag.Seconds())
			return true, fmt.Sprintf("%+v replica-lag=%fs", lagResult.Key, lagResult.Lag.Seconds()), base.NoThrottleReasonHint
		}
	}
	// Got here? No metric indicates we need throttling.
	return false, ""
	return false, "", base.NoThrottleReasonHint
}

// parseChangelogHeartbeat is called when a heartbeat event is intercepted
func (this *Throttler) parseChangelogHeartbeat(heartbeatValue string) (err error) {
// parseChangelogHeartbeat parses a string timestamp and deduces replication lag
func parseChangelogHeartbeat(heartbeatValue string) (lag time.Duration, err error) {
	heartbeatTime, err := time.Parse(time.RFC3339Nano, heartbeatValue)
	if err != nil {
		return log.Errore(err)
		return lag, err
	}
	lag := time.Since(heartbeatTime)
	atomic.StoreInt64(&this.migrationContext.CurrentLag, int64(lag))
	return nil
	lag = time.Since(heartbeatTime)
	return lag, nil
}
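
Extracting `parseChangelogHeartbeat` as a pure function makes the lag arithmetic reusable; it is also used below when reading control replicas. The contract, sketched with a synthetic heartbeat value:

```go
package main

import (
	"fmt"
	"time"
)

// Same contract as parseChangelogHeartbeat above: the heartbeat is an
// RFC3339Nano timestamp written on the master; lag is simply "now minus then".
func main() {
	heartbeat := time.Now().Add(-1500 * time.Millisecond).Format(time.RFC3339Nano)
	heartbeatTime, err := time.Parse(time.RFC3339Nano, heartbeat)
	if err != nil {
		panic(err)
	}
	fmt.Printf("lag=%v\n", time.Since(heartbeatTime)) // roughly 1.5s
}
```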

// collectHeartbeat reads the latest changelog heartbeat value
func (this *Throttler) collectHeartbeat() {
	ticker := time.Tick(time.Duration(this.migrationContext.HeartbeatIntervalMilliseconds) * time.Millisecond)
	for range ticker {
		go func() error {
			if atomic.LoadInt64(&this.migrationContext.CleanupImminentFlag) > 0 {
				return nil
			}
			changelogState, err := this.inspector.readChangelogState()
			if err != nil {
// parseChangelogHeartbeat parses a string timestamp and deduces replication lag
func (this *Throttler) parseChangelogHeartbeat(heartbeatValue string) (err error) {
	if lag, err := parseChangelogHeartbeat(heartbeatValue); err != nil {
		return log.Errore(err)
	} else {
		atomic.StoreInt64(&this.migrationContext.CurrentLag, int64(lag))
		return nil
	}
}

// collectReplicationLag reads the latest changelog heartbeat value
func (this *Throttler) collectReplicationLag(firstThrottlingCollected chan<- bool) {
	collectFunc := func() error {
		if atomic.LoadInt64(&this.migrationContext.CleanupImminentFlag) > 0 {
			return nil
		}

		if this.migrationContext.TestOnReplica || this.migrationContext.MigrateOnReplica {
			// when running on replica, the heartbeat injection is also done on the replica.
			// This means we will always get a good heartbeat value.
			// When running on replica, we should instead check the `SHOW SLAVE STATUS` output.
			if lag, err := mysql.GetReplicationLag(this.inspector.connectionConfig); err != nil {
				return log.Errore(err)
			} else {
				atomic.StoreInt64(&this.migrationContext.CurrentLag, int64(lag))
			}
			if heartbeatValue, ok := changelogState["heartbeat"]; ok {
		} else {
			if heartbeatValue, err := this.inspector.readChangelogState("heartbeat"); err != nil {
				return log.Errore(err)
			} else {
				this.parseChangelogHeartbeat(heartbeatValue)
			}
			return nil
		}()
		}
		return nil
	}

	collectFunc()
	firstThrottlingCollected <- true

	ticker := time.Tick(time.Duration(this.migrationContext.HeartbeatIntervalMilliseconds) * time.Millisecond)
	for range ticker {
		go collectFunc()
	}
}

// collectControlReplicasLag polls all the control replicas to get maximum lag value
func (this *Throttler) collectControlReplicasLag() {
	readControlReplicasLag := func(replicationLagQuery string) error {
	if (this.migrationContext.TestOnReplica || this.migrationContext.MigrateOnReplica) && (atomic.LoadInt64(&this.migrationContext.AllEventsUpToLockProcessedInjectedFlag) > 0) {
		return nil

	replicationLagQuery := fmt.Sprintf(`
		select value from %s.%s where hint = 'heartbeat' and id <= 255
		`,
		sql.EscapeName(this.migrationContext.DatabaseName),
		sql.EscapeName(this.migrationContext.GetChangelogTableName()),
	)

	readReplicaLag := func(connectionConfig *mysql.ConnectionConfig) (lag time.Duration, err error) {
		dbUri := connectionConfig.GetDBUri("information_schema")

		var heartbeatValue string
		if db, _, err := sqlutils.GetDB(dbUri); err != nil {
			return lag, err
		} else if err = db.QueryRow(replicationLagQuery).Scan(&heartbeatValue); err != nil {
			return lag, err
		}
		lagResult := mysql.GetMaxReplicationLag(
			this.migrationContext.InspectorConnectionConfig,
			this.migrationContext.GetThrottleControlReplicaKeys(),
			replicationLagQuery,
		)
		this.migrationContext.SetControlReplicasLagResult(lagResult)
		return nil
		lag, err = parseChangelogHeartbeat(heartbeatValue)
		return lag, err
	}

	readControlReplicasLag := func() (result *mysql.ReplicationLagResult) {
		instanceKeyMap := this.migrationContext.GetThrottleControlReplicaKeys()
		if instanceKeyMap.Len() == 0 {
			return result
		}
		lagResults := make(chan *mysql.ReplicationLagResult, instanceKeyMap.Len())
		for replicaKey := range *instanceKeyMap {
			connectionConfig := this.migrationContext.InspectorConnectionConfig.Duplicate()
			connectionConfig.Key = replicaKey

			lagResult := &mysql.ReplicationLagResult{Key: connectionConfig.Key}
			go func() {
				lagResult.Lag, lagResult.Err = readReplicaLag(connectionConfig)
				lagResults <- lagResult
			}()
		}
		for range *instanceKeyMap {
			lagResult := <-lagResults
			if result == nil {
				result = lagResult
			} else if lagResult.Err != nil {
				result = lagResult
			} else if lagResult.Lag.Nanoseconds() > result.Lag.Nanoseconds() {
				result = lagResult
			}
		}
		return result
	}

	checkControlReplicasLag := func() {
		if (this.migrationContext.TestOnReplica || this.migrationContext.MigrateOnReplica) && (atomic.LoadInt64(&this.migrationContext.AllEventsUpToLockProcessedInjectedFlag) > 0) {
			// No need to read lag
			return
		}
		this.migrationContext.SetControlReplicasLagResult(readControlReplicasLag())
	}
	aggressiveTicker := time.Tick(100 * time.Millisecond)
	relaxedFactor := 10
	counter := 0
	shouldReadLagAggressively := false
	replicationLagQuery := ""

	for range aggressiveTicker {
		if counter%relaxedFactor == 0 {
@@ -119,12 +189,11 @@ func (this *Throttler) collectControlReplicasLag() {
			// do not typically change at all throughout the migration, but nonetheless we check them.
			counter = 0
			maxLagMillisecondsThrottleThreshold := atomic.LoadInt64(&this.migrationContext.MaxLagMillisecondsThrottleThreshold)
			replicationLagQuery = this.migrationContext.GetReplicationLagQuery()
			shouldReadLagAggressively = (replicationLagQuery != "" && maxLagMillisecondsThrottleThreshold < 1000)
			shouldReadLagAggressively = (maxLagMillisecondsThrottleThreshold < 1000)
		}
		if counter == 0 || shouldReadLagAggressively {
			// We check replication lag every so often, or if we wish to be aggressive
			readControlReplicasLag(replicationLagQuery)
			checkControlReplicasLag()
		}
		counter++
	}
@@ -147,8 +216,8 @@ func (this *Throttler) criticalLoadIsMet() (met bool, variableName string, value
// collectGeneralThrottleMetrics reads the once-per-sec metrics, and stores them onto this.migrationContext
func (this *Throttler) collectGeneralThrottleMetrics() error {

	setThrottle := func(throttle bool, reason string) error {
		this.migrationContext.SetThrottleGeneralCheckResult(base.NewThrottleCheckResult(throttle, reason))
	setThrottle := func(throttle bool, reason string, reasonHint base.ThrottleReasonHint) error {
		this.migrationContext.SetThrottleGeneralCheckResult(base.NewThrottleCheckResult(throttle, reason, reasonHint))
		return nil
	}

@@ -161,7 +230,7 @@ func (this *Throttler) collectGeneralThrottleMetrics() error {

	criticalLoadMet, variableName, value, threshold, err := this.criticalLoadIsMet()
	if err != nil {
		return setThrottle(true, fmt.Sprintf("%s %s", variableName, err))
		return setThrottle(true, fmt.Sprintf("%s %s", variableName, err), base.NoThrottleReasonHint)
	}
	if criticalLoadMet && this.migrationContext.CriticalLoadIntervalMilliseconds == 0 {
		this.migrationContext.PanicAbort <- fmt.Errorf("critical-load met: %s=%d, >=%d", variableName, value, threshold)
@@ -181,18 +250,18 @@ func (this *Throttler) collectGeneralThrottleMetrics() error {

	// User-based throttle
	if atomic.LoadInt64(&this.migrationContext.ThrottleCommandedByUser) > 0 {
		return setThrottle(true, "commanded by user")
		return setThrottle(true, "commanded by user", base.UserCommandThrottleReasonHint)
	}
	if this.migrationContext.ThrottleFlagFile != "" {
		if base.FileExists(this.migrationContext.ThrottleFlagFile) {
			// Throttle file defined and exists!
			return setThrottle(true, "flag-file")
			return setThrottle(true, "flag-file", base.NoThrottleReasonHint)
		}
	}
	if this.migrationContext.ThrottleAdditionalFlagFile != "" {
		if base.FileExists(this.migrationContext.ThrottleAdditionalFlagFile) {
			// 2nd Throttle file defined and exists!
			return setThrottle(true, "flag-file")
			return setThrottle(true, "flag-file", base.NoThrottleReasonHint)
		}
	}

@@ -200,32 +269,33 @@ func (this *Throttler) collectGeneralThrottleMetrics() error {
	for variableName, threshold := range maxLoad {
		value, err := this.applier.ShowStatusVariable(variableName)
		if err != nil {
			return setThrottle(true, fmt.Sprintf("%s %s", variableName, err))
			return setThrottle(true, fmt.Sprintf("%s %s", variableName, err), base.NoThrottleReasonHint)
		}
		if value >= threshold {
			return setThrottle(true, fmt.Sprintf("max-load %s=%d >= %d", variableName, value, threshold))
			return setThrottle(true, fmt.Sprintf("max-load %s=%d >= %d", variableName, value, threshold), base.NoThrottleReasonHint)
		}
	}
	if this.migrationContext.GetThrottleQuery() != "" {
		if res, _ := this.applier.ExecuteThrottleQuery(); res > 0 {
			return setThrottle(true, "throttle-query")
			return setThrottle(true, "throttle-query", base.NoThrottleReasonHint)
		}
	}

	return setThrottle(false, "")
	return setThrottle(false, "", base.NoThrottleReasonHint)
}

// initiateThrottlerMetrics initiates the various processes that collect measurements
// that may affect throttling. There are several components, all running independently,
// that collect such metrics.
func (this *Throttler) initiateThrottlerCollection(firstThrottlingCollected chan<- bool) {
	go this.collectHeartbeat()
	go this.collectReplicationLag(firstThrottlingCollected)
	go this.collectControlReplicasLag()

	go func() {
		throttlerMetricsTick := time.Tick(1 * time.Second)
		this.collectGeneralThrottleMetrics()
		firstThrottlingCollected <- true
|
||||
|
||||
throttlerMetricsTick := time.Tick(1 * time.Second)
|
||||
for range throttlerMetricsTick {
|
||||
this.collectGeneralThrottleMetrics()
|
||||
}
|
||||
@ -237,8 +307,8 @@ func (this *Throttler) initiateThrottlerChecks() error {
|
||||
throttlerTick := time.Tick(100 * time.Millisecond)
|
||||
|
||||
throttlerFunction := func() {
|
||||
alreadyThrottling, currentReason := this.migrationContext.IsThrottled()
|
||||
shouldThrottle, throttleReason := this.shouldThrottle()
|
||||
alreadyThrottling, currentReason, _ := this.migrationContext.IsThrottled()
|
||||
shouldThrottle, throttleReason, throttleReasonHint := this.shouldThrottle()
|
||||
if shouldThrottle && !alreadyThrottling {
|
||||
// New throttling
|
||||
this.applier.WriteAndLogChangelog("throttle", throttleReason)
|
||||
@ -249,7 +319,7 @@ func (this *Throttler) initiateThrottlerChecks() error {
|
||||
// End of throttling
|
||||
this.applier.WriteAndLogChangelog("throttle", "done throttling")
|
||||
}
|
||||
this.migrationContext.SetThrottled(shouldThrottle, throttleReason)
|
||||
this.migrationContext.SetThrottled(shouldThrottle, throttleReason, throttleReasonHint)
|
||||
}
|
||||
throttlerFunction()
|
||||
for range throttlerTick {
|
||||
@ -265,7 +335,7 @@ func (this *Throttler) throttle(onThrottled func()) {
|
||||
for {
|
||||
// IsThrottled() is non-blocking; the throttling decision making takes place asynchronously.
|
||||
// Therefore calling IsThrottled() is cheap
|
||||
if shouldThrottle, _ := this.migrationContext.IsThrottled(); !shouldThrottle {
|
||||
if shouldThrottle, _, _ := this.migrationContext.IsThrottled(); !shouldThrottle {
|
||||
return
|
||||
}
|
||||
if onThrottled != nil {
|
||||
|
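Aside: the `readControlReplicasLag` closure above fans a lag probe out to every control replica and then collapses the answers to a single worst-case result (any error wins, otherwise the largest lag). A minimal standalone sketch of that fan-out/fan-in pattern, with a hypothetical `probe` function standing in for the per-replica heartbeat read:

```go
package main

import (
	"fmt"
	"time"
)

// lagResult mirrors the shape of mysql.ReplicationLagResult: one probed
// host, its measured lag, and any probe error.
type lagResult struct {
	host string
	lag  time.Duration
	err  error
}

// maxLag probes all hosts concurrently and returns the single result that
// should drive throttling: an errored probe takes precedence, otherwise
// the largest observed lag does.
func maxLag(hosts []string, probe func(host string) (time.Duration, error)) (result *lagResult) {
	results := make(chan *lagResult, len(hosts))
	for _, host := range hosts {
		host := host // capture loop variable for the goroutine
		go func() {
			lag, err := probe(host)
			results <- &lagResult{host: host, lag: lag, err: err}
		}()
	}
	for range hosts {
		r := <-results
		if result == nil || r.err != nil || r.lag > result.lag {
			result = r
		}
	}
	return result
}

func main() {
	probe := func(host string) (time.Duration, error) { return 800 * time.Millisecond, nil }
	worst := maxLag([]string{"replica1", "replica2"}, probe)
	fmt.Println(worst.host, worst.lag, worst.err)
}
```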
@ -10,35 +10,43 @@ import (
	"fmt"
	"time"

	"github.com/github/gh-ost/go/sql"

	"github.com/outbrain/golib/log"
	"github.com/outbrain/golib/sqlutils"
)

const MaxTableNameLength = 64

type ReplicationLagResult struct {
	Key InstanceKey
	Lag time.Duration
	Err error
}

func NewNoReplicationLagResult() *ReplicationLagResult {
	return &ReplicationLagResult{Lag: 0, Err: nil}
}

func (this *ReplicationLagResult) HasLag() bool {
	return this.Lag > 0
}

// GetReplicationLag returns replication lag for a given connection config; either by explicit query
// or via SHOW SLAVE STATUS
-func GetReplicationLag(connectionConfig *ConnectionConfig, replicationLagQuery string) (replicationLag time.Duration, err error) {
+func GetReplicationLag(connectionConfig *ConnectionConfig) (replicationLag time.Duration, err error) {
	dbUri := connectionConfig.GetDBUri("information_schema")
	var db *gosql.DB
	if db, _, err = sqlutils.GetDB(dbUri); err != nil {
		return replicationLag, err
	}

-	if replicationLagQuery != "" {
-		var floatLag float64
-		err = db.QueryRow(replicationLagQuery).Scan(&floatLag)
-		return time.Duration(int64(floatLag*1000)) * time.Millisecond, err
-	}
-	// No explicit replication lag query.
	err = sqlutils.QueryRowsMap(db, `show slave status`, func(m sqlutils.RowMap) error {
+		slaveIORunning := m.GetString("Slave_IO_Running")
+		slaveSQLRunning := m.GetString("Slave_SQL_Running")
		secondsBehindMaster := m.GetNullInt64("Seconds_Behind_Master")
		if !secondsBehindMaster.Valid {
-			return fmt.Errorf("replication not running")
+			return fmt.Errorf("replication not running; Slave_IO_Running=%+v, Slave_SQL_Running=%+v", slaveIORunning, slaveSQLRunning)
		}
		replicationLag = time.Duration(secondsBehindMaster.Int64) * time.Second
		return nil
@ -46,34 +54,6 @@ func GetReplicationLag(connectionConfig *ConnectionConfig, replicationLagQuery s
	return replicationLag, err
}

-// GetMaxReplicationLag concurrently checks for replication lag on given list of instance keys,
-// each via GetReplicationLag
-func GetMaxReplicationLag(baseConnectionConfig *ConnectionConfig, instanceKeyMap *InstanceKeyMap, replicationLagQuery string) (result *ReplicationLagResult) {
-	result = &ReplicationLagResult{Lag: 0}
-	if instanceKeyMap.Len() == 0 {
-		return result
-	}
-	lagResults := make(chan *ReplicationLagResult, instanceKeyMap.Len())
-	for key := range *instanceKeyMap {
-		connectionConfig := baseConnectionConfig.Duplicate()
-		connectionConfig.Key = key
-		result := &ReplicationLagResult{Key: connectionConfig.Key}
-		go func() {
-			result.Lag, result.Err = GetReplicationLag(connectionConfig, replicationLagQuery)
-			lagResults <- result
-		}()
-	}
-	for range *instanceKeyMap {
-		lagResult := <-lagResults
-		if lagResult.Err != nil {
-			result = lagResult
-		} else if lagResult.Lag.Nanoseconds() > result.Lag.Nanoseconds() {
-			result = lagResult
-		}
-	}
-	return result
-}
-
func GetMasterKeyFromSlaveStatus(connectionConfig *ConnectionConfig) (masterKey *InstanceKey, err error) {
	currentUri := connectionConfig.GetDBUri("information_schema")
	db, _, err := sqlutils.GetDB(currentUri)
@ -81,12 +61,33 @@ func GetMasterKeyFromSlaveStatus(connectionConfig *ConnectionConfig) (masterKey
		return nil, err
	}
	err = sqlutils.QueryRowsMap(db, `show slave status`, func(rowMap sqlutils.RowMap) error {
+		// We wish to recognize the case where the topology's master actually has replication configuration.
+		// This can happen when a DBA issues a `RESET SLAVE` instead of `RESET SLAVE ALL`.
+
+		// An empty log file indicates this is a master:
+		if rowMap.GetString("Master_Log_File") == "" {
+			return nil
+		}
+
+		slaveIORunning := rowMap.GetString("Slave_IO_Running")
+		slaveSQLRunning := rowMap.GetString("Slave_SQL_Running")
+
+		if slaveIORunning != "Yes" || slaveSQLRunning != "Yes" {
+			return fmt.Errorf("Replication on %+v is broken: Slave_IO_Running: %s, Slave_SQL_Running: %s. Please make sure replication runs before using gh-ost.",
+				connectionConfig.Key,
+				slaveIORunning,
+				slaveSQLRunning,
+			)
+		}
+
		masterKey = &InstanceKey{
			Hostname: rowMap.GetString("Master_Host"),
			Port:     rowMap.GetInt("Master_Port"),
		}
		return nil
	})

	return masterKey, err
}

@ -149,3 +150,28 @@ func GetInstanceKey(db *gosql.DB) (instanceKey *InstanceKey, err error) {
	err = db.QueryRow(`select @@global.hostname, @@global.port`).Scan(&instanceKey.Hostname, &instanceKey.Port)
	return instanceKey, err
}

// GetTableColumns reads column list from given table
func GetTableColumns(db *gosql.DB, databaseName, tableName string) (*sql.ColumnList, error) {
	query := fmt.Sprintf(`
		show columns from %s.%s
		`,
		sql.EscapeName(databaseName),
		sql.EscapeName(tableName),
	)
	columnNames := []string{}
	err := sqlutils.QueryRowsMap(db, query, func(rowMap sqlutils.RowMap) error {
		columnNames = append(columnNames, rowMap.GetString("Field"))
		return nil
	})
	if err != nil {
		return nil, err
	}
	if len(columnNames) == 0 {
		return nil, log.Errorf("Found 0 columns on %s.%s. Bailing out",
			sql.EscapeName(databaseName),
			sql.EscapeName(tableName),
		)
	}
	return sql.NewColumnList(columnNames), nil
}
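Aside: a sketch of how a caller might chain these helpers to walk one step up the topology and then sample lag. The hostnames and credentials are placeholders; the construction of the connection config follows gh-ost's `go/mysql` package as of this commit:

```go
package main

import (
	"fmt"

	"github.com/github/gh-ost/go/mysql"
)

func main() {
	// Placeholder replica coordinates and credentials.
	replicaConfig := mysql.NewConnectionConfig()
	replicaConfig.Key = mysql.InstanceKey{Hostname: "replica.local", Port: 3306}
	replicaConfig.User = "gh-ost"
	replicaConfig.Password = "secret"

	// SHOW SLAVE STATUS on the replica reveals the master's coordinates;
	// masterKey stays nil when Master_Log_File is empty (i.e. a master).
	masterKey, err := mysql.GetMasterKeyFromSlaveStatus(replicaConfig)
	if err != nil {
		panic(err)
	}
	fmt.Println("master:", masterKey)

	// Sample Seconds_Behind_Master-based lag on the replica itself;
	// after this change the helper no longer accepts an explicit lag query.
	lag, err := mysql.GetReplicationLag(replicaConfig)
	fmt.Println(lag, err)
}
```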
@ -170,12 +170,12 @@ func BuildRangeComparison(columns []string, values []string, args []interface{},
	return result, explodedArgs, nil
}

-func BuildRangePreparedComparison(columns []string, args []interface{}, comparisonSign ValueComparisonSign) (result string, explodedArgs []interface{}, err error) {
-	values := buildPreparedValues(len(columns))
-	return BuildRangeComparison(columns, values, args, comparisonSign)
+func BuildRangePreparedComparison(columns *ColumnList, args []interface{}, comparisonSign ValueComparisonSign) (result string, explodedArgs []interface{}, err error) {
+	values := buildColumnsPreparedValues(columns)
+	return BuildRangeComparison(columns.Names(), values, args, comparisonSign)
}

-func BuildRangeInsertQuery(databaseName, originalTableName, ghostTableName string, sharedColumns []string, mappedSharedColumns []string, uniqueKey string, uniqueKeyColumns, rangeStartValues, rangeEndValues []string, rangeStartArgs, rangeEndArgs []interface{}, includeRangeStartValues bool, transactionalTable bool) (result string, explodedArgs []interface{}, err error) {
+func BuildRangeInsertQuery(databaseName, originalTableName, ghostTableName string, sharedColumns []string, mappedSharedColumns []string, uniqueKey string, uniqueKeyColumns *ColumnList, rangeStartValues, rangeEndValues []string, rangeStartArgs, rangeEndArgs []interface{}, includeRangeStartValues bool, transactionalTable bool) (result string, explodedArgs []interface{}, err error) {
	if len(sharedColumns) == 0 {
		return "", explodedArgs, fmt.Errorf("Got 0 shared columns in BuildRangeInsertQuery")
	}
@ -200,12 +200,12 @@ func BuildRangeInsertQuery(databaseName, originalTableName, ghostTableName strin
	if includeRangeStartValues {
		minRangeComparisonSign = GreaterThanOrEqualsComparisonSign
	}
-	rangeStartComparison, rangeExplodedArgs, err := BuildRangeComparison(uniqueKeyColumns, rangeStartValues, rangeStartArgs, minRangeComparisonSign)
+	rangeStartComparison, rangeExplodedArgs, err := BuildRangeComparison(uniqueKeyColumns.Names(), rangeStartValues, rangeStartArgs, minRangeComparisonSign)
	if err != nil {
		return "", explodedArgs, err
	}
	explodedArgs = append(explodedArgs, rangeExplodedArgs...)
-	rangeEndComparison, rangeExplodedArgs, err := BuildRangeComparison(uniqueKeyColumns, rangeEndValues, rangeEndArgs, LessThanOrEqualsComparisonSign)
+	rangeEndComparison, rangeExplodedArgs, err := BuildRangeComparison(uniqueKeyColumns.Names(), rangeEndValues, rangeEndArgs, LessThanOrEqualsComparisonSign)
	if err != nil {
		return "", explodedArgs, err
	}
@ -225,14 +225,14 @@ func BuildRangeInsertQuery(databaseName, originalTableName, ghostTableName strin
	return result, explodedArgs, nil
}

-func BuildRangeInsertPreparedQuery(databaseName, originalTableName, ghostTableName string, sharedColumns []string, mappedSharedColumns []string, uniqueKey string, uniqueKeyColumns []string, rangeStartArgs, rangeEndArgs []interface{}, includeRangeStartValues bool, transactionalTable bool) (result string, explodedArgs []interface{}, err error) {
-	rangeStartValues := buildPreparedValues(len(uniqueKeyColumns))
-	rangeEndValues := buildPreparedValues(len(uniqueKeyColumns))
+func BuildRangeInsertPreparedQuery(databaseName, originalTableName, ghostTableName string, sharedColumns []string, mappedSharedColumns []string, uniqueKey string, uniqueKeyColumns *ColumnList, rangeStartArgs, rangeEndArgs []interface{}, includeRangeStartValues bool, transactionalTable bool) (result string, explodedArgs []interface{}, err error) {
+	rangeStartValues := buildColumnsPreparedValues(uniqueKeyColumns)
+	rangeEndValues := buildColumnsPreparedValues(uniqueKeyColumns)
	return BuildRangeInsertQuery(databaseName, originalTableName, ghostTableName, sharedColumns, mappedSharedColumns, uniqueKey, uniqueKeyColumns, rangeStartValues, rangeEndValues, rangeStartArgs, rangeEndArgs, includeRangeStartValues, transactionalTable)
}

-func BuildUniqueKeyRangeEndPreparedQuery(databaseName, tableName string, uniqueKeyColumns []string, rangeStartArgs, rangeEndArgs []interface{}, chunkSize int64, includeRangeStartValues bool, hint string) (result string, explodedArgs []interface{}, err error) {
-	if len(uniqueKeyColumns) == 0 {
+func BuildUniqueKeyRangeEndPreparedQuery(databaseName, tableName string, uniqueKeyColumns *ColumnList, rangeStartArgs, rangeEndArgs []interface{}, chunkSize int64, includeRangeStartValues bool, hint string) (result string, explodedArgs []interface{}, err error) {
+	if uniqueKeyColumns.Len() == 0 {
		return "", explodedArgs, fmt.Errorf("Got 0 columns in BuildUniqueKeyRangeEndPreparedQuery")
	}
	databaseName = EscapeName(databaseName)
@ -253,13 +253,18 @@ func BuildUniqueKeyRangeEndPreparedQuery(databaseName, tableName string, uniqueK
	}
	explodedArgs = append(explodedArgs, rangeExplodedArgs...)

-	uniqueKeyColumns = duplicateNames(uniqueKeyColumns)
-	uniqueKeyColumnAscending := make([]string, len(uniqueKeyColumns), len(uniqueKeyColumns))
-	uniqueKeyColumnDescending := make([]string, len(uniqueKeyColumns), len(uniqueKeyColumns))
-	for i := range uniqueKeyColumns {
-		uniqueKeyColumns[i] = EscapeName(uniqueKeyColumns[i])
-		uniqueKeyColumnAscending[i] = fmt.Sprintf("%s asc", uniqueKeyColumns[i])
-		uniqueKeyColumnDescending[i] = fmt.Sprintf("%s desc", uniqueKeyColumns[i])
+	uniqueKeyColumnNames := duplicateNames(uniqueKeyColumns.Names())
+	uniqueKeyColumnAscending := make([]string, len(uniqueKeyColumnNames), len(uniqueKeyColumnNames))
+	uniqueKeyColumnDescending := make([]string, len(uniqueKeyColumnNames), len(uniqueKeyColumnNames))
+	for i, column := range uniqueKeyColumns.Columns() {
+		uniqueKeyColumnNames[i] = EscapeName(uniqueKeyColumnNames[i])
+		if column.Type == EnumColumnType {
+			uniqueKeyColumnAscending[i] = fmt.Sprintf("concat(%s) asc", uniqueKeyColumnNames[i])
+			uniqueKeyColumnDescending[i] = fmt.Sprintf("concat(%s) desc", uniqueKeyColumnNames[i])
+		} else {
+			uniqueKeyColumnAscending[i] = fmt.Sprintf("%s asc", uniqueKeyColumnNames[i])
+			uniqueKeyColumnDescending[i] = fmt.Sprintf("%s desc", uniqueKeyColumnNames[i])
+		}
	}
	result = fmt.Sprintf(`
		select /* gh-ost %s.%s %s */ %s
@ -276,8 +281,8 @@ func BuildUniqueKeyRangeEndPreparedQuery(databaseName, tableName string, uniqueK
		order by
			%s
		limit 1
-		`, databaseName, tableName, hint, strings.Join(uniqueKeyColumns, ", "),
-		strings.Join(uniqueKeyColumns, ", "), databaseName, tableName,
+		`, databaseName, tableName, hint, strings.Join(uniqueKeyColumnNames, ", "),
+		strings.Join(uniqueKeyColumnNames, ", "), databaseName, tableName,
		rangeStartComparison, rangeEndComparison,
		strings.Join(uniqueKeyColumnAscending, ", "), chunkSize,
		strings.Join(uniqueKeyColumnDescending, ", "),
@ -285,26 +290,30 @@ func BuildUniqueKeyRangeEndPreparedQuery(databaseName, tableName string, uniqueK
	return result, explodedArgs, nil
}

-func BuildUniqueKeyMinValuesPreparedQuery(databaseName, tableName string, uniqueKeyColumns []string) (string, error) {
+func BuildUniqueKeyMinValuesPreparedQuery(databaseName, tableName string, uniqueKeyColumns *ColumnList) (string, error) {
	return buildUniqueKeyMinMaxValuesPreparedQuery(databaseName, tableName, uniqueKeyColumns, "asc")
}

-func BuildUniqueKeyMaxValuesPreparedQuery(databaseName, tableName string, uniqueKeyColumns []string) (string, error) {
+func BuildUniqueKeyMaxValuesPreparedQuery(databaseName, tableName string, uniqueKeyColumns *ColumnList) (string, error) {
	return buildUniqueKeyMinMaxValuesPreparedQuery(databaseName, tableName, uniqueKeyColumns, "desc")
}

-func buildUniqueKeyMinMaxValuesPreparedQuery(databaseName, tableName string, uniqueKeyColumns []string, order string) (string, error) {
-	if len(uniqueKeyColumns) == 0 {
+func buildUniqueKeyMinMaxValuesPreparedQuery(databaseName, tableName string, uniqueKeyColumns *ColumnList, order string) (string, error) {
+	if uniqueKeyColumns.Len() == 0 {
		return "", fmt.Errorf("Got 0 columns in BuildUniqueKeyMinMaxValuesPreparedQuery")
	}
	databaseName = EscapeName(databaseName)
	tableName = EscapeName(tableName)

-	uniqueKeyColumns = duplicateNames(uniqueKeyColumns)
-	uniqueKeyColumnOrder := make([]string, len(uniqueKeyColumns), len(uniqueKeyColumns))
-	for i := range uniqueKeyColumns {
-		uniqueKeyColumns[i] = EscapeName(uniqueKeyColumns[i])
-		uniqueKeyColumnOrder[i] = fmt.Sprintf("%s %s", uniqueKeyColumns[i], order)
+	uniqueKeyColumnNames := duplicateNames(uniqueKeyColumns.Names())
+	uniqueKeyColumnOrder := make([]string, len(uniqueKeyColumnNames), len(uniqueKeyColumnNames))
+	for i, column := range uniqueKeyColumns.Columns() {
+		uniqueKeyColumnNames[i] = EscapeName(uniqueKeyColumnNames[i])
+		if column.Type == EnumColumnType {
+			uniqueKeyColumnOrder[i] = fmt.Sprintf("concat(%s) %s", uniqueKeyColumnNames[i], order)
+		} else {
+			uniqueKeyColumnOrder[i] = fmt.Sprintf("%s %s", uniqueKeyColumnNames[i], order)
+		}
	}
	query := fmt.Sprintf(`
		select /* gh-ost %s.%s */ %s
@ -313,7 +322,7 @@ func buildUniqueKeyMinMaxValuesPreparedQuery(databaseName, tableName string, uni
		order by
			%s
		limit 1
-		`, databaseName, tableName, strings.Join(uniqueKeyColumns, ", "),
+		`, databaseName, tableName, strings.Join(uniqueKeyColumnNames, ", "),
		databaseName, tableName,
		strings.Join(uniqueKeyColumnOrder, ", "),
	)
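Aside: the new enum branches exist because MySQL sorts an ENUM column by its internal numeric index, not by its string value; wrapping it in `concat()` forces a string sort so chunk iteration order matches the string-based range comparisons. A minimal sketch of the same loop in isolation (the miniature `column` types here are illustrative stand-ins for gh-ost's `sql.Column`, not the real definitions):

```go
package main

import (
	"fmt"
	"strings"
)

// columnType and column loosely mimic sql.ColumnType / sql.Column
// from gh-ost's go/sql package; names are illustrative only.
type columnType int

const (
	unknownColumnType columnType = iota
	enumColumnType
)

type column struct {
	name  string
	ctype columnType
}

// orderByTerms mirrors the loops above: ENUM columns are wrapped in
// concat() so MySQL compares them as strings rather than by their
// internal numeric index.
func orderByTerms(cols []column, direction string) string {
	terms := make([]string, len(cols))
	for i, c := range cols {
		if c.ctype == enumColumnType {
			terms[i] = fmt.Sprintf("concat(`%s`) %s", c.name, direction)
		} else {
			terms[i] = fmt.Sprintf("`%s` %s", c.name, direction)
		}
	}
	return strings.Join(terms, ", ")
}

func main() {
	cols := []column{{"id", unknownColumnType}, {"e", enumColumnType}}
	fmt.Println(orderByTerms(cols, "asc")) // `id` asc, concat(`e`) asc
}
```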
@ -166,7 +166,7 @@ func TestBuildRangeInsertQuery(t *testing.T) {
	sharedColumns := []string{"id", "name", "position"}
	{
		uniqueKey := "PRIMARY"
-		uniqueKeyColumns := []string{"id"}
+		uniqueKeyColumns := NewColumnList([]string{"id"})
		rangeStartValues := []string{"@v1s"}
		rangeEndValues := []string{"@v1e"}
		rangeStartArgs := []interface{}{3}
@ -185,7 +185,7 @@ func TestBuildRangeInsertQuery(t *testing.T) {
	}
	{
		uniqueKey := "name_position_uidx"
-		uniqueKeyColumns := []string{"name", "position"}
+		uniqueKeyColumns := NewColumnList([]string{"name", "position"})
		rangeStartValues := []string{"@v1s", "@v2s"}
		rangeEndValues := []string{"@v1e", "@v2e"}
		rangeStartArgs := []interface{}{3, 17}
@ -212,7 +212,7 @@ func TestBuildRangeInsertQueryRenameMap(t *testing.T) {
	mappedSharedColumns := []string{"id", "name", "location"}
	{
		uniqueKey := "PRIMARY"
-		uniqueKeyColumns := []string{"id"}
+		uniqueKeyColumns := NewColumnList([]string{"id"})
		rangeStartValues := []string{"@v1s"}
		rangeEndValues := []string{"@v1e"}
		rangeStartArgs := []interface{}{3}
@ -231,7 +231,7 @@ func TestBuildRangeInsertQueryRenameMap(t *testing.T) {
	}
	{
		uniqueKey := "name_position_uidx"
-		uniqueKeyColumns := []string{"name", "position"}
+		uniqueKeyColumns := NewColumnList([]string{"name", "position"})
		rangeStartValues := []string{"@v1s", "@v2s"}
		rangeEndValues := []string{"@v1e", "@v2e"}
		rangeStartArgs := []interface{}{3, 17}
@ -257,7 +257,7 @@ func TestBuildRangeInsertPreparedQuery(t *testing.T) {
	sharedColumns := []string{"id", "name", "position"}
	{
		uniqueKey := "name_position_uidx"
-		uniqueKeyColumns := []string{"name", "position"}
+		uniqueKeyColumns := NewColumnList([]string{"name", "position"})
		rangeStartArgs := []interface{}{3, 17}
		rangeEndArgs := []interface{}{103, 117}

@ -279,7 +279,7 @@ func TestBuildUniqueKeyRangeEndPreparedQuery(t *testing.T) {
	originalTableName := "tbl"
	var chunkSize int64 = 500
	{
-		uniqueKeyColumns := []string{"name", "position"}
+		uniqueKeyColumns := NewColumnList([]string{"name", "position"})
		rangeStartArgs := []interface{}{3, 17}
		rangeEndArgs := []interface{}{103, 117}

@ -309,7 +309,7 @@ func TestBuildUniqueKeyRangeEndPreparedQuery(t *testing.T) {
func TestBuildUniqueKeyMinValuesPreparedQuery(t *testing.T) {
	databaseName := "mydb"
	originalTableName := "tbl"
-	uniqueKeyColumns := []string{"name", "position"}
+	uniqueKeyColumns := NewColumnList([]string{"name", "position"})
	{
		query, err := BuildUniqueKeyMinValuesPreparedQuery(databaseName, originalTableName, uniqueKeyColumns)
		test.S(t).ExpectNil(err)
@ -8,10 +8,12 @@ package sql
import (
	"regexp"
	"strconv"
	"strings"
)

var (
-	renameColumnRegexp = regexp.MustCompile(`(?i)change\s+(column\s+|)([\S]+)\s+([\S]+)\s+`)
+	sanitizeQuotesRegexp = regexp.MustCompile("('[^']*')")
+	renameColumnRegexp   = regexp.MustCompile(`(?i)\bchange\s+(column\s+|)([\S]+)\s+([\S]+)\s+`)
)

type Parser struct {
@ -24,17 +26,54 @@ func NewParser() *Parser {
	}
}

-func (this *Parser) ParseAlterStatement(alterStatement string) (err error) {
-	allStringSubmatch := renameColumnRegexp.FindAllStringSubmatch(alterStatement, -1)
-	for _, submatch := range allStringSubmatch {
-		if unquoted, err := strconv.Unquote(submatch[2]); err == nil {
-			submatch[2] = unquoted
-		}
-		if unquoted, err := strconv.Unquote(submatch[3]); err == nil {
-			submatch[3] = unquoted
-		}
-
-		this.columnRenameMap[submatch[2]] = submatch[3]
-	}
-	return nil
-}
+func (this *Parser) tokenizeAlterStatement(alterStatement string) (tokens []string, err error) {
+	terminatingQuote := rune(0)
+	f := func(c rune) bool {
+		switch {
+		case c == terminatingQuote:
+			terminatingQuote = rune(0)
+			return false
+		case terminatingQuote != rune(0):
+			return false
+		case c == '\'':
+			terminatingQuote = c
+			return false
+		case c == '(':
+			terminatingQuote = ')'
+			return false
+		default:
+			return c == ','
+		}
+	}
+
+	tokens = strings.FieldsFunc(alterStatement, f)
+	for i := range tokens {
+		tokens[i] = strings.TrimSpace(tokens[i])
+	}
+	return tokens, nil
+}
+
+func (this *Parser) sanitizeQuotesFromAlterStatement(alterStatement string) (strippedStatement string) {
+	strippedStatement = alterStatement
+	strippedStatement = sanitizeQuotesRegexp.ReplaceAllString(strippedStatement, "''")
+	return strippedStatement
+}
+
+func (this *Parser) ParseAlterStatement(alterStatement string) (err error) {
+	alterTokens, _ := this.tokenizeAlterStatement(alterStatement)
+	for _, alterToken := range alterTokens {
+		alterToken = this.sanitizeQuotesFromAlterStatement(alterToken)
+		allStringSubmatch := renameColumnRegexp.FindAllStringSubmatch(alterToken, -1)
+		for _, submatch := range allStringSubmatch {
+			if unquoted, err := strconv.Unquote(submatch[2]); err == nil {
+				submatch[2] = unquoted
+			}
+			if unquoted, err := strconv.Unquote(submatch[3]); err == nil {
+				submatch[3] = unquoted
+			}
+
+			this.columnRenameMap[submatch[2]] = submatch[3]
+		}
	}
	return nil
}
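Aside: the tokenizer splits the ALTER statement on commas, but a comma inside a quoted string or inside parentheses (an enum definition, a decimal precision) no longer breaks a token. A hedged usage sketch; `GetNonTrivialRenames` is assumed here as the map accessor the parser tests rely on elsewhere in the codebase:

```go
package main

import (
	"fmt"

	"github.com/github/gh-ost/go/sql"
)

func main() {
	parser := sql.NewParser()
	// The comma inside enum('a,b','c') and the one inside the quoted
	// comment no longer split tokens, so the rename of `i` is detected:
	err := parser.ParseAlterStatement(
		"change column i j int, add column e enum('a,b','c') comment 'x, y'")
	if err != nil {
		panic(err)
	}
	// Assumed accessor over the internal columnRenameMap.
	fmt.Println(parser.GetNonTrivialRenames()) // map[i:j]
}
```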
@ -6,6 +6,7 @@
package sql

import (
+	"reflect"
	"testing"

	"github.com/outbrain/golib/log"
@ -66,3 +67,56 @@ func TestParseAlterStatementNonTrivial(t *testing.T) {
		test.S(t).ExpectEquals(renames["f"], "fl")
	}
}

func TestTokenizeAlterStatement(t *testing.T) {
	parser := NewParser()
	{
		alterStatement := "add column t int"
		tokens, _ := parser.tokenizeAlterStatement(alterStatement)
		test.S(t).ExpectTrue(reflect.DeepEqual(tokens, []string{"add column t int"}))
	}
	{
		alterStatement := "add column t int, change column i int"
		tokens, _ := parser.tokenizeAlterStatement(alterStatement)
		test.S(t).ExpectTrue(reflect.DeepEqual(tokens, []string{"add column t int", "change column i int"}))
	}
	{
		alterStatement := "add column t int, change column i int 'some comment'"
		tokens, _ := parser.tokenizeAlterStatement(alterStatement)
		test.S(t).ExpectTrue(reflect.DeepEqual(tokens, []string{"add column t int", "change column i int 'some comment'"}))
	}
	{
		alterStatement := "add column t int, change column i int 'some comment, with comma'"
		tokens, _ := parser.tokenizeAlterStatement(alterStatement)
		test.S(t).ExpectTrue(reflect.DeepEqual(tokens, []string{"add column t int", "change column i int 'some comment, with comma'"}))
	}
	{
		alterStatement := "add column t int, add column d decimal(10,2)"
		tokens, _ := parser.tokenizeAlterStatement(alterStatement)
		test.S(t).ExpectTrue(reflect.DeepEqual(tokens, []string{"add column t int", "add column d decimal(10,2)"}))
	}
	{
		alterStatement := "add column t int, add column e enum('a','b','c')"
		tokens, _ := parser.tokenizeAlterStatement(alterStatement)
		test.S(t).ExpectTrue(reflect.DeepEqual(tokens, []string{"add column t int", "add column e enum('a','b','c')"}))
	}
	{
		alterStatement := "add column t int(11), add column e enum('a','b','c')"
		tokens, _ := parser.tokenizeAlterStatement(alterStatement)
		test.S(t).ExpectTrue(reflect.DeepEqual(tokens, []string{"add column t int(11)", "add column e enum('a','b','c')"}))
	}
}

func TestSanitizeQuotesFromAlterStatement(t *testing.T) {
	parser := NewParser()
	{
		alterStatement := "add column e enum('a','b','c')"
		strippedStatement := parser.sanitizeQuotesFromAlterStatement(alterStatement)
		test.S(t).ExpectEquals(strippedStatement, "add column e enum('','','')")
	}
	{
		alterStatement := "change column i int 'some comment, with comma'"
		strippedStatement := parser.sanitizeQuotesFromAlterStatement(alterStatement)
		test.S(t).ExpectEquals(strippedStatement, "change column i int ''")
	}
}
@ -18,8 +18,12 @@ const (
	UnknownColumnType ColumnType = iota
	TimestampColumnType = iota
	DateTimeColumnType = iota
+	EnumColumnType = iota
+	MediumIntColumnType = iota
)

+const maxMediumintUnsigned int32 = 16777215
+
type TimezoneConvertion struct {
	ToTimezone string
}
@ -49,6 +53,14 @@ func (this *Column) convertArg(arg interface{}) interface{} {
		return uint16(i)
	}
	if i, ok := arg.(int32); ok {
+		if this.Type == MediumIntColumnType {
+			// problem with mediumint is that it's a 3-byte type. There is no compatible golang type to match that.
+			// So to convert from negative to positive we'd need to convert the value manually
+			if i >= 0 {
+				return i
+			}
+			return uint32(maxMediumintUnsigned + i + 1)
+		}
		return uint32(i)
	}
	if i, ok := arg.(int64); ok {
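Aside: the mediumint branch deserves a worked example. Go has no 3-byte integer type, so a negative value decoded from the binlog must be reinterpreted against mediumint's unsigned ceiling (16777215) by hand. A minimal sketch of the same arithmetic:

```go
package main

import "fmt"

const maxMediumintUnsigned int32 = 16777215

// toUnsignedMediumint reinterprets a signed mediumint value as unsigned,
// mirroring the convertArg branch above: -1 maps to 16777215, and
// -8388608 (mediumint's signed minimum) maps to 8388608.
func toUnsignedMediumint(i int32) uint32 {
	if i >= 0 {
		return uint32(i)
	}
	return uint32(maxMediumintUnsigned + i + 1)
}

func main() {
	fmt.Println(toUnsignedMediumint(-1))       // 16777215
	fmt.Println(toUnsignedMediumint(-8388608)) // 8388608
}
```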
28	localtests/datetime-submillis/create.sql	Normal file
@ -0,0 +1,28 @@
drop table if exists gh_ost_test;
create table gh_ost_test (
  id int auto_increment,
  i int not null,
  dt0 datetime(6),
  dt1 datetime(6),
  ts2 timestamp(6),
  updated tinyint unsigned default 0,
  primary key(id),
  key i_idx(i)
) auto_increment=1;

drop event if exists gh_ost_test;
delimiter ;;
create event gh_ost_test
  on schedule every 1 second
  starts current_timestamp
  ends current_timestamp + interval 60 second
  on completion not preserve
  enable
  do
begin
  insert into gh_ost_test values (null, 11, now(), now(), now(), 0);
  update gh_ost_test set dt1='2016-10-31 11:22:33.444', updated = 1 where i = 11 order by id desc limit 1;

  insert into gh_ost_test values (null, 13, now(), now(), now(), 0);
  update gh_ost_test set ts2='2016-11-01 11:22:33.444', updated = 1 where i = 13 order by id desc limit 1;
end ;;
29	localtests/enum-pk/create.sql	Normal file
@ -0,0 +1,29 @@
drop table if exists gh_ost_test;
create table gh_ost_test (
  id int auto_increment,
  i int not null,
  e enum('red', 'green', 'blue', 'orange') null default null collate 'utf8_bin',
  primary key(id, e)
) auto_increment=1;

drop event if exists gh_ost_test;
delimiter ;;
create event gh_ost_test
  on schedule every 1 second
  starts current_timestamp
  ends current_timestamp + interval 60 second
  on completion not preserve
  enable
  do
begin
  insert into gh_ost_test values (null, 11, 'red');
  set @last_insert_id := last_insert_id();
  insert into gh_ost_test values (@last_insert_id, 11, 'green');
  insert into gh_ost_test values (null, 13, 'green');
  insert into gh_ost_test values (null, 17, 'blue');
  set @last_insert_id := last_insert_id();
  update gh_ost_test set e='orange' where id = @last_insert_id;
  insert into gh_ost_test values (null, 23, null);
  set @last_insert_id := last_insert_id();
  update gh_ost_test set i=i+1, e=null where id = @last_insert_id;
end ;;
22	localtests/fail-drop-pk/create.sql	Normal file
@ -0,0 +1,22 @@
drop table if exists gh_ost_test;
create table gh_ost_test (
  id int auto_increment,
  i int not null,
  ts timestamp,
  primary key(id)
) auto_increment=1;

drop event if exists gh_ost_test;
delimiter ;;
create event gh_ost_test
  on schedule every 1 second
  starts current_timestamp
  ends current_timestamp + interval 60 second
  on completion not preserve
  enable
  do
begin
  insert into gh_ost_test values (null, 11, now());
  insert into gh_ost_test values (null, 13, now());
  insert into gh_ost_test values (null, 17, now());
end ;;

1	localtests/fail-drop-pk/expect_failure	Normal file
@ -0,0 +1 @@
No PRIMARY nor UNIQUE key found in table

1	localtests/fail-drop-pk/extra_args	Normal file
@ -0,0 +1 @@
--alter="change id id int, drop primary key"

22	localtests/fail-no-shared-uk/create.sql	Normal file
@ -0,0 +1,22 @@
drop table if exists gh_ost_test;
create table gh_ost_test (
  id int auto_increment,
  i int not null,
  ts timestamp,
  primary key(id)
) auto_increment=1;

drop event if exists gh_ost_test;
delimiter ;;
create event gh_ost_test
  on schedule every 1 second
  starts current_timestamp
  ends current_timestamp + interval 60 second
  on completion not preserve
  enable
  do
begin
  insert into gh_ost_test values (null, 11, now());
  insert into gh_ost_test values (null, 13, now());
  insert into gh_ost_test values (null, 17, now());
end ;;

1	localtests/fail-no-shared-uk/expect_failure	Normal file
@ -0,0 +1 @@
No shared unique key can be found after ALTER

1	localtests/fail-no-shared-uk/extra_args	Normal file
@ -0,0 +1 @@
--alter="drop primary key, add primary key (id, i)"
@ -1,7 +1,7 @@
drop table if exists gh_ost_test;
create table gh_ost_test (
  id int auto_increment,
-  t varchar(128),
+  t varchar(128) charset latin1 collate latin1_swedish_ci,
  primary key(id)
) auto_increment=1 charset latin1 collate latin1_swedish_ci;

@ -17,5 +17,9 @@ create event gh_ost_test
begin
  insert into gh_ost_test values (null, md5(rand()));
  insert into gh_ost_test values (null, 'átesting');
  insert into gh_ost_test values (null, 'ádelete');
  insert into gh_ost_test values (null, 'testátest');
  update gh_ost_test set t='áupdated' order by id desc limit 1;
  update gh_ost_test set t='áupdated1' where t='áupdated' order by id desc limit 1;
  delete from gh_ost_test where t='ádelete';
end ;;
9	localtests/no-unique-key/create.sql	Normal file
@ -0,0 +1,9 @@
drop table if exists gh_ost_test;
create table gh_ost_test (
  i int not null,
  ts timestamp default current_timestamp,
  dt datetime,
  key i_idx(i)
) auto_increment=1;

drop event if exists gh_ost_test;

1	localtests/no-unique-key/expect_failure	Normal file
@ -0,0 +1 @@
No PRIMARY nor UNIQUE key found in table

1	localtests/no-unique-key/extra_args	Normal file
@ -0,0 +1 @@
--alter="add column v varchar(32)"

8	localtests/rename-none-column/create.sql	Normal file
@ -0,0 +1,8 @@
drop table if exists gh_ost_test;
create table gh_ost_test (
  id int auto_increment,
  c1 int not null,
  primary key (id)
) auto_increment=1;

drop event if exists gh_ost_test;

1	localtests/rename-none-column/extra_args	Normal file
@ -0,0 +1 @@
--alter="add column exchange double comment 'exchange rate used for pay in your own currency'"

8	localtests/rename-none-comment/create.sql	Normal file
@ -0,0 +1,8 @@
drop table if exists gh_ost_test;
create table gh_ost_test (
  id int auto_increment,
  c1 int not null,
  primary key (id)
) auto_increment=1;

drop event if exists gh_ost_test;

1	localtests/rename-none-comment/extra_args	Normal file
@ -0,0 +1 @@
--alter="add column exchange_rate double comment 'change rate used for pay in your own currency'"
24	localtests/swap-pk-uk/create.sql	Normal file
@ -0,0 +1,24 @@
drop table if exists gh_ost_test;
create table gh_ost_test (
  id bigint,
  i int not null,
  ts timestamp(6),
  primary key(id),
  unique key its_uidx(i, ts)
) ;

drop event if exists gh_ost_test;
delimiter ;;
create event gh_ost_test
  on schedule every 1 second
  starts current_timestamp
  ends current_timestamp + interval 60 second
  on completion not preserve
  enable
  do
begin
  insert into gh_ost_test values ((unix_timestamp() << 2) + 0, 11, now(6));
  insert into gh_ost_test values ((unix_timestamp() << 2) + 1, 13, now(6));
  insert into gh_ost_test values ((unix_timestamp() << 2) + 2, 17, now(6));
  insert into gh_ost_test values ((unix_timestamp() << 2) + 3, 19, now(6));
end ;;

1	localtests/swap-pk-uk/extra_args	Normal file
@ -0,0 +1 @@
--alter="drop primary key, drop key its_uidx, add primary key (i, ts), add unique key id_uidx(id)"

1	localtests/swap-pk-uk/order_by	Normal file
@ -0,0 +1 @@
id

24	localtests/swap-uk-uk/create.sql	Normal file
@ -0,0 +1,24 @@
drop table if exists gh_ost_test;
create table gh_ost_test (
  id bigint,
  i int not null,
  ts timestamp(6),
  unique key id_uidx(id),
  unique key its_uidx(i, ts)
) ;

drop event if exists gh_ost_test;
delimiter ;;
create event gh_ost_test
  on schedule every 1 second
  starts current_timestamp
  ends current_timestamp + interval 60 second
  on completion not preserve
  enable
  do
begin
  insert into gh_ost_test values ((unix_timestamp() << 2) + 0, 11, now(6));
  insert into gh_ost_test values ((unix_timestamp() << 2) + 1, 13, now(6));
  insert into gh_ost_test values ((unix_timestamp() << 2) + 2, 17, now(6));
  insert into gh_ost_test values ((unix_timestamp() << 2) + 3, 19, now(6));
end ;;

1	localtests/swap-uk-uk/extra_args	Normal file
@ -0,0 +1 @@
--alter="drop key id_uidx, drop key its_uidx, add unique key its2_uidx(i, ts), add unique key id2_uidx(id)"

1	localtests/swap-uk-uk/order_by	Normal file
@ -0,0 +1 @@
id
22	localtests/swap-uk/create.sql	Normal file
@ -0,0 +1,22 @@
drop table if exists gh_ost_test;
create table gh_ost_test (
  id int auto_increment,
  i int not null,
  ts timestamp,
  primary key(id)
) auto_increment=1;

drop event if exists gh_ost_test;
delimiter ;;
create event gh_ost_test
  on schedule every 1 second
  starts current_timestamp
  ends current_timestamp + interval 60 second
  on completion not preserve
  enable
  do
begin
  insert into gh_ost_test values (null, 11, now());
  insert into gh_ost_test values (null, 13, now());
  insert into gh_ost_test values (null, 17, now());
end ;;

1	localtests/swap-uk/extra_args	Normal file
@ -0,0 +1 @@
--alter="drop primary key, add unique key(id)"
@ -49,7 +49,7 @@ test_single() {
  echo -n "Testing: $test_name"

  echo_dot
-  gh-ost-test-mysql-replica -e "start slave"
+  gh-ost-test-mysql-replica -e "stop slave; start slave; do sleep(1)"
  echo_dot
  gh-ost-test-mysql-master --default-character-set=utf8mb4 test < $tests_path/$test_name/create.sql

@ -59,12 +59,16 @@ test_single() {
  fi
  orig_columns="*"
  ghost_columns="*"
+  order_by=""
  if [ -f $tests_path/$test_name/orig_columns ] ; then
    orig_columns=$(cat $tests_path/$test_name/orig_columns)
  fi
  if [ -f $tests_path/$test_name/ghost_columns ] ; then
    ghost_columns=$(cat $tests_path/$test_name/ghost_columns)
  fi
+  if [ -f $tests_path/$test_name/order_by ] ; then
+    order_by="order by $(cat $tests_path/$test_name/order_by)"
+  fi
  # graceful sleep for replica to catch up
  echo_dot
  sleep 1
@ -84,7 +88,7 @@ test_single() {
    --throttle-query='select timestampdiff(second, min(last_update), now()) < 5 from _gh_ost_test_ghc' \
    --serve-socket-file=/tmp/gh-ost.test.sock \
    --initially-drop-socket-file \
-    --postpone-cut-over-flag-file="" \
+    --postpone-cut-over-flag-file=/tmp/gh-ost.test.postpone.flag \
    --test-on-replica \
    --default-retries=1 \
    --verbose \
@ -129,8 +133,8 @@ test_single() {
  fi

  echo_dot
-  orig_checksum=$(gh-ost-test-mysql-replica --default-character-set=utf8mb4 test -e "select ${orig_columns} from gh_ost_test" -ss | md5sum)
-  ghost_checksum=$(gh-ost-test-mysql-replica --default-character-set=utf8mb4 test -e "select ${ghost_columns} from _gh_ost_test_gho" -ss | md5sum)
+  orig_checksum=$(gh-ost-test-mysql-replica --default-character-set=utf8mb4 test -e "select ${orig_columns} from gh_ost_test ${order_by}" -ss | md5sum)
+  ghost_checksum=$(gh-ost-test-mysql-replica --default-character-set=utf8mb4 test -e "select ${ghost_columns} from _gh_ost_test_gho ${order_by}" -ss | md5sum)

  if [ "$orig_checksum" != "$ghost_checksum" ] ; then
    echo "ERROR $test_name: checksum mismatch"
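Aside: the new `order_by` knob exists because the swap-pk-uk/swap-uk-uk tests change the clustering key, so the original and ghost tables can return identical row sets in different physical order and the md5 comparison would fail spuriously. A small sketch of the idea, with a hypothetical in-memory row source standing in for the two tables:

```go
package main

import (
	"crypto/md5"
	"fmt"
	"sort"
	"strings"
)

// checksumRows mimics what test.sh does with `md5sum`: two digests are
// only comparable if both row sets are emitted in the same order, which
// is the role the tests' order_by file plays for the SQL queries.
func checksumRows(rows []string) string {
	sorted := append([]string(nil), rows...)
	sort.Strings(sorted) // impose a deterministic order before hashing
	return fmt.Sprintf("%x", md5.Sum([]byte(strings.Join(sorted, "\n"))))
}

func main() {
	a := []string{"1\tred", "2\tgreen"}
	b := []string{"2\tgreen", "1\tred"} // same rows, different scan order
	fmt.Println(checksumRows(a) == checksumRows(b)) // true
}
```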
13	localtests/trivial/create.sql	Normal file
@ -0,0 +1,13 @@
drop table if exists gh_ost_test;
create table gh_ost_test (
  id int auto_increment,
  i int not null,
  color varchar(32),
  primary key(id)
) auto_increment=1;

drop event if exists gh_ost_test;

insert into gh_ost_test values (null, 11, 'red');
insert into gh_ost_test values (null, 13, 'green');
insert into gh_ost_test values (null, 17, 'blue');

1	localtests/trivial/extra_args	Normal file
@ -0,0 +1 @@
--throttle-query='select false' \
28	localtests/unsigned-modify/create.sql	Normal file
@ -0,0 +1,28 @@
drop table if exists gh_ost_test;
create table gh_ost_test (
  id bigint(20) NOT NULL AUTO_INCREMENT,
  column1 int(11) NOT NULL,
  column2 smallint(5) unsigned NOT NULL,
  column3 mediumint(8) unsigned NOT NULL,
  column4 tinyint(3) unsigned NOT NULL,
  column5 int(11) NOT NULL,
  column6 int(11) NOT NULL,
  PRIMARY KEY (id),
  KEY c12_ix (column1, column2)
) auto_increment=1;

drop event if exists gh_ost_test;
delimiter ;;
create event gh_ost_test
  on schedule every 1 second
  starts current_timestamp
  ends current_timestamp + interval 60 second
  on completion not preserve
  enable
  do
begin
  -- mediumint maxvalue: 16777215 (unsigned), 8388607 (signed)
  insert into gh_ost_test values (NULL, 13382498, 536, 8388607, 3, 1483892217, 1483892218);
  insert into gh_ost_test values (NULL, 13382498, 536, 8388607, 250, 1483892217, 1483892218);
  insert into gh_ost_test values (NULL, 13382498, 536, 10000000, 3, 1483892217, 1483892218);
end ;;
6	resources/hooks-sample/gh-ost-on-start-replication-hook	Normal file
@ -0,0 +1,6 @@
#!/bin/bash

# Sample hook file for gh-ost-on-start-replication
# Useful for RDS/Aurora setups, see https://github.com/github/gh-ost/issues/163

echo "$(date) gh-ost-on-start-replication $GH_OST_DATABASE_NAME.$GH_OST_TABLE_NAME $GH_OST_MIGRATED_HOST" >> /tmp/gh-ost.log

@ -1,5 +1,6 @@
#!/bin/bash

# Sample hook file for gh-ost-on-stop-replication
# Useful for RDS/Aurora setups, see https://github.com/github/gh-ost/issues/163

echo "$(date) gh-ost-on-stop-replication $GH_OST_DATABASE_NAME.$GH_OST_TABLE_NAME $GH_OST_MIGRATED_HOST" >> /tmp/gh-ost.log
16	script/bootstrap	Executable file
@ -0,0 +1,16 @@
#!/bin/bash

set -e

# Make sure we have the version of Go we want to depend on, either from the
# system or one we grab ourselves.
. script/ensure-go-installed

# Since we want to be able to build this outside of GOPATH, we set it
# up so it points back to us and go is none the wiser

set -x
rm -rf .gopath
mkdir -p .gopath/src/github.com/github
ln -s "$PWD" .gopath/src/github.com/github/gh-ost
export GOPATH=$PWD/.gopath:$GOPATH

20	script/build	Executable file
@ -0,0 +1,20 @@
#!/bin/bash

set -e

. script/bootstrap

mkdir -p bin
bindir="$PWD"/bin
scriptdir="$PWD"/script

# We have a few binaries that we want to build, so let's put them into bin/

version=$(git rev-parse HEAD)
describe=$(git describe --tags --always --dirty)

export GOPATH="$PWD/.gopath"
cd .gopath/src/github.com/github/gh-ost

# We put the binaries directly into the bindir, because we have no need for shim wrappers
go build -o "$bindir/gh-ost" -ldflags "-X main.AppVersion=${version} -X main.BuildDescribe=${describe}" ./go/cmd/gh-ost/main.go

17	script/cibuild	Executable file
@ -0,0 +1,17 @@
#!/bin/bash

set -e

. script/bootstrap

echo "Verifying code is formatted via 'gofmt -s -w go/'"
gofmt -s -w go/
git diff --exit-code --quiet

echo "Building"
script/build

cd .gopath/src/github.com/github/gh-ost

echo "Running unit tests"
go test ./go/...

37	script/cibuild-gh-ost-build-deploy-tarball	Executable file
@ -0,0 +1,37 @@
#!/bin/sh

set -e

script/cibuild

# Get a fresh directory and make sure to delete it afterwards
build_dir=tmp/build
rm -rf $build_dir
mkdir -p $build_dir
trap "rm -rf $build_dir" EXIT

commit_sha=$(git rev-parse HEAD)

if [ $(uname -s) = "Darwin" ]; then
  build_arch="$(uname -sr | tr -d ' ' | tr '[:upper:]' '[:lower:]')-$(uname -m)"
else
  build_arch="$(lsb_release -sc | tr -d ' ' | tr '[:upper:]' '[:lower:]')-$(uname -m)"
fi

tarball=$build_dir/${commit_sha}-${build_arch}.tar

# Create the tarball
tar cvf $tarball --mode="ugo=rx" bin/

# Compress it and copy it to the directory for the CI to upload it
gzip $tarball
mkdir -p "$BUILD_ARTIFACT_DIR"/gh-ost
cp ${tarball}.gz "$BUILD_ARTIFACT_DIR"/gh-ost/

### HACK HACK HACK ###
# Blame @carlosmn. In the good way.
# We don't have any jessie machines for building, but a pure-Go binary depends
# on a version of libc and ld which are widely available, so we can copy the
# tarball over with jessie in its name so we can deploy it on jessie machines.
jessie_tarball_name=$(echo $(basename "${tarball}") | sed s/-precise-/-jessie-/)
cp ${tarball}.gz "$BUILD_ARTIFACT_DIR/gh-ost/${jessie_tarball_name}.gz"

51	script/ensure-go-installed	Executable file
@ -0,0 +1,51 @@
#!/bin/bash

GO_VERSION=go1.7

GO_PKG_DARWIN=${GO_VERSION}.darwin-amd64.pkg
GO_PKG_DARWIN_SHA=e7089843bc7148ffcc147759985b213604d22bb9fd19bd930b515aa981bf1b22

GO_PKG_LINUX=${GO_VERSION}.linux-amd64.tar.gz
GO_PKG_LINUX_SHA=702ad90f705365227e902b42d91dd1a40e48ca7f67a2f4b2fd052aaa4295cd95

export ROOTDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )/.." && pwd )"
cd $ROOTDIR

# If Go isn't installed globally, setup environment variables for local install.
if [ -z "$(which go)" ] || [ -z "$(go version | grep $GO_VERSION)" ]; then
  GODIR="$ROOTDIR/.vendor/go17"

  if [ $(uname -s) = "Darwin" ]; then
    export GOROOT="$GODIR/usr/local/go"
  else
    export GOROOT="$GODIR/go"
  fi

  export PATH="$GOROOT/bin:$PATH"
fi

# Check if local install exists, and install otherwise.
if [ -z "$(which go)" ] || [ -z "$(go version | grep $GO_VERSION)" ]; then
  [ -d "$GODIR" ] && rm -rf $GODIR
  mkdir -p "$GODIR"
  cd "$GODIR";

  if [ $(uname -s) = "Darwin" ]; then
    curl -L -O https://storage.googleapis.com/golang/$GO_PKG_DARWIN
    shasum -a256 $GO_PKG_DARWIN | grep $GO_PKG_DARWIN_SHA
    xar -xf $GO_PKG_DARWIN
    cpio -i < com.googlecode.go.pkg/Payload
  else
    curl -L -O https://storage.googleapis.com/golang/$GO_PKG_LINUX
    shasum -a256 $GO_PKG_LINUX | grep $GO_PKG_LINUX_SHA
    tar xf $GO_PKG_LINUX
  fi

  # Prove we did something right
  echo "$GO_VERSION installed in $GODIR: Go Binary: $(which go)"
fi

cd $ROOTDIR

# Configure the new go to be the first go found
export GOPATH=$ROOTDIR/.vendor

11	script/go	Executable file
@ -0,0 +1,11 @@
#!/bin/bash

set -e

. script/bootstrap

mkdir -p bin
bindir="$PWD"/bin

cd .gopath/src/github.com/github/gh-ost
go "$@"
50	vendor/github.com/ngaut/deadline/rw.go	generated vendored Normal file
@ -0,0 +1,50 @@
package deadline

import (
	"io"
	"time"
)

type DeadlineReader interface {
	io.Reader
	SetReadDeadline(t time.Time) error
}

type DeadlineWriter interface {
	io.Writer
	SetWriteDeadline(t time.Time) error
}

type DeadlineReadWriter interface {
	io.ReadWriter
	SetReadDeadline(t time.Time) error
	SetWriteDeadline(t time.Time) error
}

type deadlineReader struct {
	DeadlineReader
	timeout time.Duration
}

func (r *deadlineReader) Read(p []byte) (int, error) {
	r.DeadlineReader.SetReadDeadline(time.Now().Add(r.timeout))
	return r.DeadlineReader.Read(p)
}

func NewDeadlineReader(r DeadlineReader, timeout time.Duration) io.Reader {
	return &deadlineReader{DeadlineReader: r, timeout: timeout}
}

type deadlineWriter struct {
	DeadlineWriter
	timeout time.Duration
}

func (r *deadlineWriter) Write(p []byte) (int, error) {
	r.DeadlineWriter.SetWriteDeadline(time.Now().Add(r.timeout))
	return r.DeadlineWriter.Write(p)
}

func NewDeadlineWriter(r DeadlineWriter, timeout time.Duration) io.Writer {
	return &deadlineWriter{DeadlineWriter: r, timeout: timeout}
}
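Aside: this vendored package wraps any deadline-capable stream so every Read or Write re-arms its own timeout; `net.Conn` satisfies both interfaces. A usage sketch (the address is a placeholder):

```go
package main

import (
	"net"
	"time"

	"github.com/ngaut/deadline"
)

func main() {
	conn, err := net.Dial("tcp", "example.com:3306") // placeholder address
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Every Read now arms a fresh 5s deadline, so a stalled stream
	// surfaces as a timeout error instead of blocking forever.
	r := deadline.NewDeadlineReader(conn, 5*time.Second)
	buf := make([]byte, 4096)
	if _, err := r.Read(buf); err != nil {
		// handle timeout / EOF
	}
}
```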
165
vendor/github.com/ngaut/log/LICENSE
generated
vendored
Normal file
165
vendor/github.com/ngaut/log/LICENSE
generated
vendored
Normal file
@ -0,0 +1,165 @@
                   GNU LESSER GENERAL PUBLIC LICENSE
                       Version 3, 29 June 2007

 Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.


  This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.

  0. Additional Definitions.

  As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.

  "The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.

  An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.

  A "Combined Work" is a work produced by combining or linking an
Application with the Library.  The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".

  The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.

  The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.

  1. Exception to Section 3 of the GNU GPL.

  You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.

  2. Conveying Modified Versions.

  If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:

   a) under this License, provided that you make a good faith effort to
   ensure that, in the event an Application does not supply the
   function or data, the facility still operates, and performs
   whatever part of its purpose remains meaningful, or

   b) under the GNU GPL, with none of the additional permissions of
   this License applicable to that copy.

  3. Object Code Incorporating Material from Library Header Files.

  The object code form of an Application may incorporate material from
a header file that is part of the Library.  You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:

   a) Give prominent notice with each copy of the object code that the
   Library is used in it and that the Library and its use are
   covered by this License.

   b) Accompany the object code with a copy of the GNU GPL and this license
   document.

  4. Combined Works.

  You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:

   a) Give prominent notice with each copy of the Combined Work that
   the Library is used in it and that the Library and its use are
   covered by this License.

   b) Accompany the Combined Work with a copy of the GNU GPL and this license
   document.

   c) For a Combined Work that displays copyright notices during
   execution, include the copyright notice for the Library among
   these notices, as well as a reference directing the user to the
   copies of the GNU GPL and this license document.

   d) Do one of the following:

       0) Convey the Minimal Corresponding Source under the terms of this
       License, and the Corresponding Application Code in a form
       suitable for, and under terms that permit, the user to
       recombine or relink the Application with a modified version of
       the Linked Version to produce a modified Combined Work, in the
       manner specified by section 6 of the GNU GPL for conveying
       Corresponding Source.

       1) Use a suitable shared library mechanism for linking with the
       Library.  A suitable mechanism is one that (a) uses at run time
       a copy of the Library already present on the user's computer
       system, and (b) will operate properly with a modified version
       of the Library that is interface-compatible with the Linked
       Version.

   e) Provide Installation Information, but only if you would otherwise
   be required to provide such information under section 6 of the
   GNU GPL, and only to the extent that such information is
   necessary to install and execute a modified version of the
   Combined Work produced by recombining or relinking the
   Application with a modified version of the Linked Version.  (If
   you use option 4d0, the Installation Information must accompany
   the Minimal Corresponding Source and Corresponding Application
   Code.  If you use option 4d1, you must provide the Installation
   Information in the manner specified by section 6 of the GNU GPL
   for conveying Corresponding Source.)

  5. Combined Libraries.

  You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:

   a) Accompany the combined library with a copy of the same work based
   on the Library, uncombined with any other library facilities,
   conveyed under the terms of this License.

   b) Give prominent notice with the combined library that part of it
   is a work based on the Library, and explaining where to find the
   accompanying uncombined form of the same work.

  6. Revised Versions of the GNU Lesser General Public License.

  The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time.  Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.

  Each version is given a distinguishing version number.  If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation.  If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.

  If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.
2 vendor/github.com/ngaut/log/README.md generated vendored Normal file
@ -0,0 +1,2 @@
logging
=======
18 vendor/github.com/ngaut/log/crash_unix.go generated vendored Normal file
@ -0,0 +1,18 @@
// +build freebsd openbsd netbsd dragonfly darwin linux

package log

import (
	"log"
	"os"
	"syscall"
)

func CrashLog(file string) {
	f, err := os.OpenFile(file, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0666)
	if err != nil {
		log.Println(err.Error())
	} else {
		syscall.Dup2(int(f.Fd()), 2)
	}
}
37 vendor/github.com/ngaut/log/crash_win.go generated vendored Normal file
@ -0,0 +1,37 @@
// +build windows

package log

import (
	"log"
	"os"
	"syscall"
)

var (
	kernel32         = syscall.MustLoadDLL("kernel32.dll")
	procSetStdHandle = kernel32.MustFindProc("SetStdHandle")
)

func setStdHandle(stdhandle int32, handle syscall.Handle) error {
	r0, _, e1 := syscall.Syscall(procSetStdHandle.Addr(), 2, uintptr(stdhandle), uintptr(handle), 0)
	if r0 == 0 {
		if e1 != 0 {
			return error(e1)
		}
		return syscall.EINVAL
	}
	return nil
}

func CrashLog(file string) {
	f, err := os.OpenFile(file, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0666)
	if err != nil {
		log.Println(err.Error())
	} else {
		err = setStdHandle(syscall.STD_ERROR_HANDLE, syscall.Handle(f.Fd()))
		if err != nil {
			log.Println(err.Error())
		}
	}
}
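Both platform variants do the same job: repoint file descriptor 2 (stderr) at a file so that the stack trace of a later panic is captured on disk. A minimal sketch of the intended call pattern, with a hypothetical file path:

```go
package main

import "github.com/ngaut/log"

func main() {
	// Call once at startup; a subsequent panic's stack trace is then
	// appended to this file instead of being lost with the terminal.
	log.CrashLog("/var/tmp/myapp.crash") // hypothetical path

	// ... rest of the program ...
}
```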
380 vendor/github.com/ngaut/log/log.go generated vendored Normal file
@ -0,0 +1,380 @@
// high-level log wrapper, so it can output different logs based on level
package log

import (
	"fmt"
	"io"
	"log"
	"os"
	"runtime"
	"sync"
	"time"
)

const (
	Ldate         = log.Ldate
	Llongfile     = log.Llongfile
	Lmicroseconds = log.Lmicroseconds
	Lshortfile    = log.Lshortfile
	LstdFlags     = log.LstdFlags
	Ltime         = log.Ltime
)

type (
	LogLevel int
	LogType  int
)

const (
	LOG_FATAL   = LogType(0x1)
	LOG_ERROR   = LogType(0x2)
	LOG_WARNING = LogType(0x4)
	LOG_INFO    = LogType(0x8)
	LOG_DEBUG   = LogType(0x10)
)

const (
	LOG_LEVEL_NONE  = LogLevel(0x0)
	LOG_LEVEL_FATAL = LOG_LEVEL_NONE | LogLevel(LOG_FATAL)
	LOG_LEVEL_ERROR = LOG_LEVEL_FATAL | LogLevel(LOG_ERROR)
	LOG_LEVEL_WARN  = LOG_LEVEL_ERROR | LogLevel(LOG_WARNING)
	LOG_LEVEL_INFO  = LOG_LEVEL_WARN | LogLevel(LOG_INFO)
	LOG_LEVEL_DEBUG = LOG_LEVEL_INFO | LogLevel(LOG_DEBUG)
	LOG_LEVEL_ALL   = LOG_LEVEL_DEBUG
)

const FORMAT_TIME_DAY string = "20060102"
const FORMAT_TIME_HOUR string = "2006010215"

var _log *logger = New()

func init() {
	SetFlags(Ldate | Ltime | Lshortfile)
	SetHighlighting(runtime.GOOS != "windows")
}

func Logger() *log.Logger {
	return _log._log
}

func SetLevel(level LogLevel) {
	_log.SetLevel(level)
}

func GetLogLevel() LogLevel {
	return _log.level
}

func SetOutput(out io.Writer) {
	_log.SetOutput(out)
}

func SetOutputByName(path string) error {
	return _log.SetOutputByName(path)
}

func SetFlags(flags int) {
	_log._log.SetFlags(flags)
}

func Info(v ...interface{}) {
	_log.Info(v...)
}

func Infof(format string, v ...interface{}) {
	_log.Infof(format, v...)
}

func Debug(v ...interface{}) {
	_log.Debug(v...)
}

func Debugf(format string, v ...interface{}) {
	_log.Debugf(format, v...)
}

func Warn(v ...interface{}) {
	_log.Warning(v...)
}

func Warnf(format string, v ...interface{}) {
	_log.Warningf(format, v...)
}

func Warning(v ...interface{}) {
	_log.Warning(v...)
}

func Warningf(format string, v ...interface{}) {
	_log.Warningf(format, v...)
}

func Error(v ...interface{}) {
	_log.Error(v...)
}

func Errorf(format string, v ...interface{}) {
	_log.Errorf(format, v...)
}

func Fatal(v ...interface{}) {
	_log.Fatal(v...)
}

func Fatalf(format string, v ...interface{}) {
	_log.Fatalf(format, v...)
}

func SetLevelByString(level string) {
	_log.SetLevelByString(level)
}

func SetHighlighting(highlighting bool) {
	_log.SetHighlighting(highlighting)
}

func SetRotateByDay() {
	_log.SetRotateByDay()
}

func SetRotateByHour() {
	_log.SetRotateByHour()
}

type logger struct {
	_log         *log.Logger
	level        LogLevel
	highlighting bool

	dailyRolling bool
	hourRolling  bool

	fileName  string
	logSuffix string
	fd        *os.File

	lock sync.Mutex
}

func (l *logger) SetHighlighting(highlighting bool) {
	l.highlighting = highlighting
}

func (l *logger) SetLevel(level LogLevel) {
	l.level = level
}

func (l *logger) SetLevelByString(level string) {
	l.level = StringToLogLevel(level)
}

func (l *logger) SetRotateByDay() {
	l.dailyRolling = true
	l.logSuffix = genDayTime(time.Now())
}

func (l *logger) SetRotateByHour() {
	l.hourRolling = true
	l.logSuffix = genHourTime(time.Now())
}

func (l *logger) rotate() error {
	l.lock.Lock()
	defer l.lock.Unlock()

	var suffix string
	if l.dailyRolling {
		suffix = genDayTime(time.Now())
	} else if l.hourRolling {
		suffix = genHourTime(time.Now())
	} else {
		return nil
	}

	// Note: rotate only when the current suffix differs from l.logSuffix
	if suffix != l.logSuffix {
		err := l.doRotate(suffix)
		if err != nil {
			return err
		}
	}

	return nil
}

func (l *logger) doRotate(suffix string) error {
	// Note: the error from Close is deliberately not checked here
	l.fd.Close()

	lastFileName := l.fileName + "." + l.logSuffix
	err := os.Rename(l.fileName, lastFileName)
	if err != nil {
		return err
	}

	err = l.SetOutputByName(l.fileName)
	if err != nil {
		return err
	}

	l.logSuffix = suffix

	return nil
}

func (l *logger) SetOutput(out io.Writer) {
	l._log = log.New(out, l._log.Prefix(), l._log.Flags())
}

func (l *logger) SetOutputByName(path string) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_APPEND|os.O_RDWR, 0666)
	if err != nil {
		log.Fatal(err)
	}

	l.SetOutput(f)

	l.fileName = path
	l.fd = f

	return err
}

func (l *logger) log(t LogType, v ...interface{}) {
	if l.level|LogLevel(t) != l.level {
		return
	}

	err := l.rotate()
	if err != nil {
		fmt.Fprintf(os.Stderr, "%s\n", err.Error())
		return
	}

	v1 := make([]interface{}, len(v)+2)
	logStr, logColor := LogTypeToString(t)
	if l.highlighting {
		v1[0] = "\033" + logColor + "m[" + logStr + "]"
		copy(v1[1:], v)
		v1[len(v)+1] = "\033[0m"
	} else {
		v1[0] = "[" + logStr + "]"
		copy(v1[1:], v)
		v1[len(v)+1] = ""
	}

	s := fmt.Sprintln(v1...)
	l._log.Output(4, s)
}

func (l *logger) logf(t LogType, format string, v ...interface{}) {
	if l.level|LogLevel(t) != l.level {
		return
	}

	err := l.rotate()
	if err != nil {
		fmt.Fprintf(os.Stderr, "%s\n", err.Error())
		return
	}

	logStr, logColor := LogTypeToString(t)
	var s string
	if l.highlighting {
		s = "\033" + logColor + "m[" + logStr + "] " + fmt.Sprintf(format, v...) + "\033[0m"
	} else {
		s = "[" + logStr + "] " + fmt.Sprintf(format, v...)
	}
	l._log.Output(4, s)
}

func (l *logger) Fatal(v ...interface{}) {
	l.log(LOG_FATAL, v...)
	os.Exit(-1)
}

func (l *logger) Fatalf(format string, v ...interface{}) {
	l.logf(LOG_FATAL, format, v...)
	os.Exit(-1)
}

func (l *logger) Error(v ...interface{}) {
	l.log(LOG_ERROR, v...)
}

func (l *logger) Errorf(format string, v ...interface{}) {
	l.logf(LOG_ERROR, format, v...)
}

func (l *logger) Warning(v ...interface{}) {
	l.log(LOG_WARNING, v...)
}

func (l *logger) Warningf(format string, v ...interface{}) {
	l.logf(LOG_WARNING, format, v...)
}

func (l *logger) Debug(v ...interface{}) {
	l.log(LOG_DEBUG, v...)
}

func (l *logger) Debugf(format string, v ...interface{}) {
	l.logf(LOG_DEBUG, format, v...)
}

func (l *logger) Info(v ...interface{}) {
	l.log(LOG_INFO, v...)
}

func (l *logger) Infof(format string, v ...interface{}) {
	l.logf(LOG_INFO, format, v...)
}

func StringToLogLevel(level string) LogLevel {
	switch level {
	case "fatal":
		return LOG_LEVEL_FATAL
	case "error":
		return LOG_LEVEL_ERROR
	case "warn":
		return LOG_LEVEL_WARN
	case "warning":
		return LOG_LEVEL_WARN
	case "debug":
		return LOG_LEVEL_DEBUG
	case "info":
		return LOG_LEVEL_INFO
	}
	return LOG_LEVEL_ALL
}

func LogTypeToString(t LogType) (string, string) {
	switch t {
	case LOG_FATAL:
		return "fatal", "[0;31"
	case LOG_ERROR:
		return "error", "[0;31"
	case LOG_WARNING:
		return "warning", "[0;33"
	case LOG_DEBUG:
		return "debug", "[0;36"
	case LOG_INFO:
		return "info", "[0;37"
	}
	return "unknown", "[0;37"
}

func genDayTime(t time.Time) string {
	return t.Format(FORMAT_TIME_DAY)
}

func genHourTime(t time.Time) string {
	return t.Format(FORMAT_TIME_HOUR)
}

func New() *logger {
	return Newlogger(os.Stderr, "")
}

func Newlogger(w io.Writer, prefix string) *logger {
	return &logger{_log: log.New(w, prefix, LstdFlags), level: LOG_LEVEL_ALL, highlighting: true}
}
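The package is driven entirely through the package-level wrappers around the `_log` singleton above. A minimal usage sketch (not from this commit; the file name is hypothetical):

```go
package main

import "github.com/ngaut/log"

func main() {
	// Accepted names are the cases of StringToLogLevel:
	// fatal, error, warn/warning, info, debug.
	log.SetLevelByString("warn")

	log.Debug("suppressed at this level")
	log.Warnf("replication lag is %.1fs", 2.5)

	// Rotate daily: on the first write of a new day the current file is
	// renamed to app.log.YYYYMMDD and a fresh app.log is opened.
	log.SetRotateByDay()
	if err := log.SetOutputByName("app.log"); err != nil {
		log.Fatal(err)
	}
	log.Info("now writing to app.log")
}
```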
197 vendor/github.com/ngaut/log/log_test.go generated vendored Normal file
@ -0,0 +1,197 @@
package log

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"strings"
	"testing"
	"time"
)

func isFileExists(name string) bool {
	f, err := os.Stat(name)
	if err != nil {
		if os.IsNotExist(err) {
			return false
		}
	}

	if f.IsDir() {
		return false
	}

	return true
}

func parseDate(value string, format string) (time.Time, error) {
	tt, err := time.ParseInLocation(format, value, time.Local)
	if err != nil {
		fmt.Println("[Error]" + err.Error())
		return tt, err
	}

	return tt, nil
}

func checkLogData(fileName string, containData string, num int64) error {
	input, err := os.OpenFile(fileName, os.O_RDONLY, 0)
	if err != nil {
		return err
	}
	defer input.Close()

	var lineNum int64
	br := bufio.NewReader(input)
	for {
		line, err := br.ReadString('\n')
		if err == io.EOF {
			break
		}

		realLine := strings.TrimRight(line, "\n")
		if strings.Contains(realLine, containData) {
			lineNum += 1
		}
	}

	// check whether num is equal to lineNum
	if lineNum != num {
		return fmt.Errorf("checkLogData fail - %d vs %d", lineNum, num)
	}

	return nil
}

func TestDayRotateCase(t *testing.T) {
	_log = New()

	logName := "example_day_test.log"
	if isFileExists(logName) {
		err := os.Remove(logName)
		if err != nil {
			t.Errorf("Remove old log file fail - %s, %s\n", err.Error(), logName)
		}
	}

	SetRotateByDay()
	err := SetOutputByName(logName)
	if err != nil {
		t.Errorf("SetOutputByName fail - %s, %s\n", err.Error(), logName)
	}

	if _log.logSuffix == "" {
		t.Errorf("bad log suffix fail - %s\n", _log.logSuffix)
	}

	day, err := parseDate(_log.logSuffix, FORMAT_TIME_DAY)
	if err != nil {
		t.Errorf("parseDate fail - %s, %s\n", err.Error(), _log.logSuffix)
	}

	_log.Info("Test data")
	_log.Infof("Test data - %s", day.String())

	// mock log suffix to check rotate
	lastDay := day.AddDate(0, 0, -1)
	_log.logSuffix = genDayTime(lastDay)
	oldLogSuffix := _log.logSuffix

	_log.Info("Test new data")
	_log.Infof("Test new data - %s", day.String())

	err = _log.fd.Close()
	if err != nil {
		t.Errorf("close log fd fail - %s, %s\n", err.Error(), _log.fileName)
	}

	// check both old and new log file data
	oldLogName := logName + "." + oldLogSuffix
	err = checkLogData(oldLogName, "Test data", 2)
	if err != nil {
		t.Errorf("old log file checkLogData fail - %s, %s\n", err.Error(), oldLogName)
	}

	err = checkLogData(logName, "Test new data", 2)
	if err != nil {
		t.Errorf("new log file checkLogData fail - %s, %s\n", err.Error(), logName)
	}

	// remove test log files
	err = os.Remove(oldLogName)
	if err != nil {
		t.Errorf("Remove final old log file fail - %s, %s\n", err.Error(), oldLogName)
	}

	err = os.Remove(logName)
	if err != nil {
		t.Errorf("Remove final new log file fail - %s, %s\n", err.Error(), logName)
	}
}

func TestHourRotateCase(t *testing.T) {
	_log = New()

	logName := "example_hour_test.log"
	if isFileExists(logName) {
		err := os.Remove(logName)
		if err != nil {
			t.Errorf("Remove old log file fail - %s, %s\n", err.Error(), logName)
		}
	}

	SetRotateByHour()
	err := SetOutputByName(logName)
	if err != nil {
		t.Errorf("SetOutputByName fail - %s, %s\n", err.Error(), logName)
	}

	if _log.logSuffix == "" {
		t.Errorf("bad log suffix fail - %s\n", _log.logSuffix)
	}

	hour, err := parseDate(_log.logSuffix, FORMAT_TIME_HOUR)
	if err != nil {
		t.Errorf("parseDate fail - %s, %s\n", err.Error(), _log.logSuffix)
	}

	_log.Info("Test data")
	_log.Infof("Test data - %s", hour.String())

	// mock log suffix to check rotate
	lastHour := hour.Add(time.Duration(-1 * time.Hour))
	_log.logSuffix = genHourTime(lastHour)
	oldLogSuffix := _log.logSuffix

	_log.Info("Test new data")
	_log.Infof("Test new data - %s", hour.String())

	err = _log.fd.Close()
	if err != nil {
		t.Errorf("close log fd fail - %s, %s\n", err.Error(), _log.fileName)
	}

	// check both old and new log file data
	oldLogName := logName + "." + oldLogSuffix
	err = checkLogData(oldLogName, "Test data", 2)
	if err != nil {
		t.Errorf("old log file checkLogData fail - %s, %s\n", err.Error(), oldLogName)
	}

	err = checkLogData(logName, "Test new data", 2)
	if err != nil {
		t.Errorf("new log file checkLogData fail - %s, %s\n", err.Error(), logName)
	}

	// remove test log files
	err = os.Remove(oldLogName)
	if err != nil {
		t.Errorf("Remove final old log file fail - %s, %s\n", err.Error(), oldLogName)
	}

	err = os.Remove(logName)
	if err != nil {
		t.Errorf("Remove final new log file fail - %s, %s\n", err.Error(), logName)
	}
}
72 vendor/github.com/ngaut/pools/id_pool.go generated vendored Normal file
@ -0,0 +1,72 @@
// Copyright 2014, Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package pools

import (
	"fmt"
	"sync"
)

// IDPool is used to ensure that the set of IDs in use concurrently never
// contains any duplicates. The IDs start at 1 and increase without bound, but
// will never be larger than the peak number of concurrent uses.
//
// IDPool's Get() and Put() methods can be used concurrently.
type IDPool struct {
	sync.Mutex

	// used holds the set of values that have been returned to us with Put().
	used map[uint32]bool
	// maxUsed remembers the largest value we've given out.
	maxUsed uint32
}

// NewIDPool creates and initializes an IDPool.
func NewIDPool() *IDPool {
	return &IDPool{
		used: make(map[uint32]bool),
	}
}

// Get returns an ID that is unique among currently active users of this pool.
func (pool *IDPool) Get() (id uint32) {
	pool.Lock()
	defer pool.Unlock()

	// Pick a value that's been returned, if any.
	for key := range pool.used {
		delete(pool.used, key)
		return key
	}

	// No recycled IDs are available, so increase the pool size.
	pool.maxUsed += 1
	return pool.maxUsed
}

// Put recycles an ID back into the pool for others to use. Putting back a value
// of 0, or a value that is not currently "checked out", will result in a panic
// because that should never happen except in the case of a programming error.
func (pool *IDPool) Put(id uint32) {
	pool.Lock()
	defer pool.Unlock()

	if id < 1 || id > pool.maxUsed {
		panic(fmt.Errorf("IDPool.Put(%v): invalid value, must be in the range [1,%v]", id, pool.maxUsed))
	}

	if pool.used[id] {
		panic(fmt.Errorf("IDPool.Put(%v): can't put value that was already recycled", id))
	}

	// If we're recycling maxUsed, just shrink the pool.
	if id == pool.maxUsed {
		pool.maxUsed = id - 1
		return
	}

	// Add it to the set of recycled IDs.
	pool.used[id] = true
}
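A short sketch of the contract, mirroring the tests below: `Get` prefers recycled IDs and only grows `maxUsed` when none are available.

```go
package main

import (
	"fmt"

	"github.com/ngaut/pools"
)

func main() {
	pool := pools.NewIDPool()

	a := pool.Get() // 1: nothing to recycle, so the pool grows
	b := pool.Get() // 2
	pool.Put(a)     // 1 enters the recycled set

	c := pool.Get()      // reuses 1 rather than growing to 3
	fmt.Println(a, b, c) // 1 2 1
}
```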
118 vendor/github.com/ngaut/pools/id_pool_test.go generated vendored Normal file
@ -0,0 +1,118 @@
// Copyright 2014, Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package pools

import (
	"reflect"
	"strings"
	"testing"
)

func (pool *IDPool) want(want *IDPool, t *testing.T) {
	if pool.maxUsed != want.maxUsed {
		t.Errorf("pool.maxUsed = %#v, want %#v", pool.maxUsed, want.maxUsed)
	}

	if !reflect.DeepEqual(pool.used, want.used) {
		t.Errorf("pool.used = %#v, want %#v", pool.used, want.used)
	}
}

func TestIDPoolFirstGet(t *testing.T) {
	pool := NewIDPool()

	if got := pool.Get(); got != 1 {
		t.Errorf("pool.Get() = %v, want 1", got)
	}

	pool.want(&IDPool{used: map[uint32]bool{}, maxUsed: 1}, t)
}

func TestIDPoolSecondGet(t *testing.T) {
	pool := NewIDPool()
	pool.Get()

	if got := pool.Get(); got != 2 {
		t.Errorf("pool.Get() = %v, want 2", got)
	}

	pool.want(&IDPool{used: map[uint32]bool{}, maxUsed: 2}, t)
}

func TestIDPoolPutToUsedSet(t *testing.T) {
	pool := NewIDPool()
	id1 := pool.Get()
	pool.Get()
	pool.Put(id1)

	pool.want(&IDPool{used: map[uint32]bool{1: true}, maxUsed: 2}, t)
}

func TestIDPoolPutMaxUsed1(t *testing.T) {
	pool := NewIDPool()
	id1 := pool.Get()
	pool.Put(id1)

	pool.want(&IDPool{used: map[uint32]bool{}, maxUsed: 0}, t)
}

func TestIDPoolPutMaxUsed2(t *testing.T) {
	pool := NewIDPool()
	pool.Get()
	id2 := pool.Get()
	pool.Put(id2)

	pool.want(&IDPool{used: map[uint32]bool{}, maxUsed: 1}, t)
}

func TestIDPoolGetFromUsedSet(t *testing.T) {
	pool := NewIDPool()
	id1 := pool.Get()
	pool.Get()
	pool.Put(id1)

	if got := pool.Get(); got != 1 {
		t.Errorf("pool.Get() = %v, want 1", got)
	}

	pool.want(&IDPool{used: map[uint32]bool{}, maxUsed: 2}, t)
}

func wantError(want string, t *testing.T) {
	rec := recover()
	if rec == nil {
		t.Errorf("expected panic, but there wasn't one")
	}
	err, ok := rec.(error)
	if !ok || !strings.Contains(err.Error(), want) {
		t.Errorf("wrong error, got '%v', want '%v'", err, want)
	}
}

func TestIDPoolPut0(t *testing.T) {
	pool := NewIDPool()
	pool.Get()

	defer wantError("invalid value", t)
	pool.Put(0)
}

func TestIDPoolPutInvalid(t *testing.T) {
	pool := NewIDPool()
	pool.Get()

	defer wantError("invalid value", t)
	pool.Put(5)
}

func TestIDPoolPutDuplicate(t *testing.T) {
	pool := NewIDPool()
	pool.Get()
	pool.Get()
	pool.Put(1)

	defer wantError("already recycled", t)
	pool.Put(1)
}
149 vendor/github.com/ngaut/pools/numbered.go generated vendored Normal file
@ -0,0 +1,149 @@
// Copyright 2012, Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package pools

import (
	"fmt"
	"sync"
	"time"
)

// Numbered allows you to manage resources by tracking them with numbers.
// There are no interface restrictions on what you can track.
type Numbered struct {
	mu        sync.Mutex
	empty     *sync.Cond // Broadcast when pool becomes empty
	resources map[int64]*numberedWrapper
}

type numberedWrapper struct {
	val         interface{}
	inUse       bool
	purpose     string
	timeCreated time.Time
	timeUsed    time.Time
}

func NewNumbered() *Numbered {
	n := &Numbered{resources: make(map[int64]*numberedWrapper)}
	n.empty = sync.NewCond(&n.mu)
	return n
}

// Register starts tracking a resource by the supplied id.
// It does not lock the object.
// It returns an error if the id already exists.
func (nu *Numbered) Register(id int64, val interface{}) error {
	nu.mu.Lock()
	defer nu.mu.Unlock()
	if _, ok := nu.resources[id]; ok {
		return fmt.Errorf("already present")
	}
	now := time.Now()
	nu.resources[id] = &numberedWrapper{
		val:         val,
		timeCreated: now,
		timeUsed:    now,
	}
	return nil
}

// Unregister forgets the specified resource.
// If the resource is not present, it's ignored.
func (nu *Numbered) Unregister(id int64) {
	nu.mu.Lock()
	defer nu.mu.Unlock()
	delete(nu.resources, id)
	if len(nu.resources) == 0 {
		nu.empty.Broadcast()
	}
}

// Get locks the resource for use. It accepts a purpose as a string.
// If it cannot be found, it returns a "not found" error. If in use,
// it returns an "in use: purpose" error.
func (nu *Numbered) Get(id int64, purpose string) (val interface{}, err error) {
	nu.mu.Lock()
	defer nu.mu.Unlock()
	nw, ok := nu.resources[id]
	if !ok {
		return nil, fmt.Errorf("not found")
	}
	if nw.inUse {
		return nil, fmt.Errorf("in use: %s", nw.purpose)
	}
	nw.inUse = true
	nw.purpose = purpose
	return nw.val, nil
}

// Put unlocks a resource for someone else to use.
func (nu *Numbered) Put(id int64) {
	nu.mu.Lock()
	defer nu.mu.Unlock()
	if nw, ok := nu.resources[id]; ok {
		nw.inUse = false
		nw.purpose = ""
		nw.timeUsed = time.Now()
	}
}

// GetOutdated returns a list of resources that are older than age, and locks them.
// It does not return any resources that are already locked.
func (nu *Numbered) GetOutdated(age time.Duration, purpose string) (vals []interface{}) {
	nu.mu.Lock()
	defer nu.mu.Unlock()
	now := time.Now()
	for _, nw := range nu.resources {
		if nw.inUse {
			continue
		}
		if nw.timeCreated.Add(age).Sub(now) <= 0 {
			nw.inUse = true
			nw.purpose = purpose
			vals = append(vals, nw.val)
		}
	}
	return vals
}

// GetIdle returns a list of resources that have been idle for longer
// than timeout, and locks them. It does not return any resources that
// are already locked.
func (nu *Numbered) GetIdle(timeout time.Duration, purpose string) (vals []interface{}) {
	nu.mu.Lock()
	defer nu.mu.Unlock()
	now := time.Now()
	for _, nw := range nu.resources {
		if nw.inUse {
			continue
		}
		if nw.timeUsed.Add(timeout).Sub(now) <= 0 {
			nw.inUse = true
			nw.purpose = purpose
			vals = append(vals, nw.val)
		}
	}
	return vals
}

// WaitForEmpty returns as soon as the pool becomes empty
func (nu *Numbered) WaitForEmpty() {
	nu.mu.Lock()
	defer nu.mu.Unlock()
	for len(nu.resources) != 0 {
		nu.empty.Wait()
	}
}

func (nu *Numbered) StatsJSON() string {
	return fmt.Sprintf("{\"Size\": %v}", nu.Size())
}

func (nu *Numbered) Size() (size int64) {
	nu.mu.Lock()
	defer nu.mu.Unlock()
	return int64(len(nu.resources))
}
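A sketch of the intended lifecycle, with illustrative ids and durations: register by id, check out with a purpose, return, and let a background reaper claim idle entries.

```go
package main

import (
	"fmt"
	"time"

	"github.com/ngaut/pools"
)

func main() {
	n := pools.NewNumbered()
	if err := n.Register(42, "session state"); err != nil {
		panic(err)
	}

	v, err := n.Get(42, "request handling")
	if err != nil {
		panic(err) // "not found" or "in use: <purpose>"
	}
	fmt.Println(v)
	n.Put(42) // releases the lock and refreshes timeUsed

	// A reaper can lock everything idle for over a minute; the returned
	// values are then safe to close before unregistering their ids.
	for _, idle := range n.GetIdle(time.Minute, "reaping") {
		fmt.Println("would close:", idle)
	}
}
```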
84 vendor/github.com/ngaut/pools/numbered_test.go generated vendored Normal file
@ -0,0 +1,84 @@
// Copyright 2012, Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package pools

import (
	"testing"
	"time"
)

func TestNumbered(t *testing.T) {
	id := int64(0)
	p := NewNumbered()

	var err error
	if err = p.Register(id, id); err != nil {
		t.Errorf("Error %v", err)
	}
	if err = p.Register(id, id); err.Error() != "already present" {
		t.Errorf("want 'already present', got '%v'", err)
	}
	var v interface{}
	if v, err = p.Get(id, "test"); err != nil {
		t.Errorf("Error %v", err)
	}
	if v.(int64) != id {
		t.Errorf("want %v, got %v", id, v.(int64))
	}
	if v, err = p.Get(id, "test1"); err.Error() != "in use: test" {
		t.Errorf("want 'in use: test', got '%v'", err)
	}
	p.Put(id)
	if v, err = p.Get(1, "test2"); err.Error() != "not found" {
		t.Errorf("want 'not found', got '%v'", err)
	}
	p.Unregister(1) // Should not fail
	p.Unregister(0)
	// p is now empty

	p.Register(id, id)
	id++
	p.Register(id, id)
	time.Sleep(300 * time.Millisecond)
	id++
	p.Register(id, id)
	time.Sleep(100 * time.Millisecond)

	// p has 0, 1, 2 (0 & 1 are aged)
	vals := p.GetOutdated(200*time.Millisecond, "by outdated")
	if len(vals) != 2 {
		t.Errorf("want 2, got %v", len(vals))
	}
	if v, err = p.Get(vals[0].(int64), "test1"); err.Error() != "in use: by outdated" {
		t.Errorf("want 'in use: by outdated', got '%v'", err)
	}
	for _, v := range vals {
		p.Put(v.(int64))
	}
	time.Sleep(100 * time.Millisecond)

	// p has 0, 1, 2 (2 is idle)
	vals = p.GetIdle(200*time.Millisecond, "by idle")
	if len(vals) != 1 {
		t.Errorf("want 1, got %v", len(vals))
	}
	if v, err = p.Get(vals[0].(int64), "test1"); err.Error() != "in use: by idle" {
		t.Errorf("want 'in use: by idle', got '%v'", err)
	}
	if vals[0].(int64) != 2 {
		t.Errorf("want 2, got %v", vals[0])
	}
	p.Unregister(vals[0].(int64))

	// p has 0 & 1
	if p.Size() != 2 {
		t.Errorf("want 2, got %v", p.Size())
	}
	go func() {
		p.Unregister(0)
		p.Unregister(1)
	}()
	p.WaitForEmpty()
}
228 vendor/github.com/ngaut/pools/resource_pool.go generated vendored Normal file
@ -0,0 +1,228 @@
// Copyright 2012, Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Package pools provides functionality to manage and reuse resources
// like connections.
package pools

import (
	"fmt"
	"time"

	"github.com/ngaut/sync2"
)

var (
	CLOSED_ERR = fmt.Errorf("ResourcePool is closed")
)

// Factory is a function that can be used to create a resource.
type Factory func() (Resource, error)

// Every resource needs to support the Resource interface.
// Thread synchronization between Close() and IsClosed()
// is the responsibility of the caller.
type Resource interface {
	Close()
}

// ResourcePool allows you to use a pool of resources.
type ResourcePool struct {
	resources   chan resourceWrapper
	factory     Factory
	capacity    sync2.AtomicInt64
	idleTimeout sync2.AtomicDuration

	// stats
	waitCount sync2.AtomicInt64
	waitTime  sync2.AtomicDuration
}

type resourceWrapper struct {
	resource Resource
	timeUsed time.Time
}

// NewResourcePool creates a new ResourcePool pool.
// capacity is the initial capacity of the pool.
// maxCap is the maximum capacity.
// If a resource is unused beyond idleTimeout, it's discarded.
// An idleTimeout of 0 means that there is no timeout.
func NewResourcePool(factory Factory, capacity, maxCap int, idleTimeout time.Duration) *ResourcePool {
	if capacity <= 0 || maxCap <= 0 || capacity > maxCap {
		panic(fmt.Errorf("Invalid/out of range capacity"))
	}
	rp := &ResourcePool{
		resources:   make(chan resourceWrapper, maxCap),
		factory:     factory,
		capacity:    sync2.AtomicInt64(capacity),
		idleTimeout: sync2.AtomicDuration(idleTimeout),
	}
	for i := 0; i < capacity; i++ {
		rp.resources <- resourceWrapper{}
	}
	return rp
}

// Close empties the pool calling Close on all its resources.
// You can call Close while there are outstanding resources.
// It waits for all resources to be returned (Put).
// After a Close, Get and TryGet are not allowed.
func (rp *ResourcePool) Close() {
	rp.SetCapacity(0)
}

func (rp *ResourcePool) IsClosed() (closed bool) {
	return rp.capacity.Get() == 0
}

// Get will return the next available resource. If capacity
// has not been reached, it will create a new one using the factory. Otherwise,
// it will indefinitely wait till the next resource becomes available.
func (rp *ResourcePool) Get() (resource Resource, err error) {
	return rp.get(true)
}

// TryGet will return the next available resource. If none is available, and capacity
// has not been reached, it will create a new one using the factory. Otherwise,
// it will return nil with no error.
func (rp *ResourcePool) TryGet() (resource Resource, err error) {
	return rp.get(false)
}

func (rp *ResourcePool) get(wait bool) (resource Resource, err error) {
	// Fetch
	var wrapper resourceWrapper
	var ok bool
	select {
	case wrapper, ok = <-rp.resources:
	default:
		if !wait {
			return nil, nil
		}
		startTime := time.Now()
		wrapper, ok = <-rp.resources
		rp.recordWait(startTime)
	}
	if !ok {
		return nil, CLOSED_ERR
	}

	// Unwrap
	timeout := rp.idleTimeout.Get()
	if wrapper.resource != nil && timeout > 0 && wrapper.timeUsed.Add(timeout).Sub(time.Now()) < 0 {
		wrapper.resource.Close()
		wrapper.resource = nil
	}
	if wrapper.resource == nil {
		wrapper.resource, err = rp.factory()
		if err != nil {
			rp.resources <- resourceWrapper{}
		}
	}
	return wrapper.resource, err
}

// Put will return a resource to the pool. For every successful Get,
// a corresponding Put is required. If you no longer need a resource,
// you will need to call Put(nil) instead of returning the closed resource.
// This will eventually cause a new resource to be created in its place.
func (rp *ResourcePool) Put(resource Resource) {
	var wrapper resourceWrapper
	if resource != nil {
		wrapper = resourceWrapper{resource, time.Now()}
	}
	select {
	case rp.resources <- wrapper:
	default:
		panic(fmt.Errorf("Attempt to Put into a full ResourcePool"))
	}
}

// SetCapacity changes the capacity of the pool.
// You can use it to shrink or expand, but not beyond
// the max capacity. If the change requires the pool
// to be shrunk, SetCapacity waits till the necessary
// number of resources are returned to the pool.
// A SetCapacity of 0 is equivalent to closing the ResourcePool.
func (rp *ResourcePool) SetCapacity(capacity int) error {
	if capacity < 0 || capacity > cap(rp.resources) {
		return fmt.Errorf("capacity %d is out of range", capacity)
	}

	// Atomically swap new capacity with old, but only
	// if old capacity is non-zero.
	var oldcap int
	for {
		oldcap = int(rp.capacity.Get())
		if oldcap == 0 {
			return CLOSED_ERR
		}
		if oldcap == capacity {
			return nil
		}
		if rp.capacity.CompareAndSwap(int64(oldcap), int64(capacity)) {
			break
		}
	}

	if capacity < oldcap {
		for i := 0; i < oldcap-capacity; i++ {
			wrapper := <-rp.resources
			if wrapper.resource != nil {
				wrapper.resource.Close()
			}
		}
	} else {
		for i := 0; i < capacity-oldcap; i++ {
			rp.resources <- resourceWrapper{}
		}
	}
	if capacity == 0 {
		close(rp.resources)
	}
	return nil
}

func (rp *ResourcePool) recordWait(start time.Time) {
	rp.waitCount.Add(1)
	rp.waitTime.Add(time.Now().Sub(start))
}

func (rp *ResourcePool) SetIdleTimeout(idleTimeout time.Duration) {
	rp.idleTimeout.Set(idleTimeout)
}

func (rp *ResourcePool) StatsJSON() string {
	c, a, mx, wc, wt, it := rp.Stats()
	return fmt.Sprintf(`{"Capacity": %v, "Available": %v, "MaxCapacity": %v, "WaitCount": %v, "WaitTime": %v, "IdleTimeout": %v}`, c, a, mx, wc, int64(wt), int64(it))
}

func (rp *ResourcePool) Stats() (capacity, available, maxCap, waitCount int64, waitTime, idleTimeout time.Duration) {
	return rp.Capacity(), rp.Available(), rp.MaxCap(), rp.WaitCount(), rp.WaitTime(), rp.IdleTimeout()
}

func (rp *ResourcePool) Capacity() int64 {
	return rp.capacity.Get()
}

func (rp *ResourcePool) Available() int64 {
	return int64(len(rp.resources))
}

func (rp *ResourcePool) MaxCap() int64 {
	return int64(cap(rp.resources))
}

func (rp *ResourcePool) WaitCount() int64 {
	return rp.waitCount.Get()
}

func (rp *ResourcePool) WaitTime() time.Duration {
	return rp.waitTime.Get()
}

func (rp *ResourcePool) IdleTimeout() time.Duration {
	return rp.idleTimeout.Get()
}
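A sketch of how a caller would pool connections with this type. The dialed address is illustrative, and the adapter type exists because `Resource.Close` returns nothing while `net.Conn.Close` returns an error:

```go
package main

import (
	"net"
	"time"

	"github.com/ngaut/pools"
)

// connResource adapts net.Conn to the pool's Resource interface.
type connResource struct{ net.Conn }

func (c connResource) Close() { c.Conn.Close() }

func main() {
	factory := func() (pools.Resource, error) {
		c, err := net.Dial("tcp", "127.0.0.1:3306") // illustrative address
		if err != nil {
			return nil, err
		}
		return connResource{c}, nil
	}

	// Start with 5 slots, allow growth to 10, and discard connections
	// that sit unused for a minute.
	p := pools.NewResourcePool(factory, 5, 10, time.Minute)
	defer p.Close()

	r, err := p.Get() // blocks if all resources are checked out
	if err != nil {
		panic(err)
	}
	conn := r.(connResource)
	_ = conn // ... use the connection ...
	p.Put(r) // every successful Get needs a matching Put
}
```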
487 vendor/github.com/ngaut/pools/resource_pool_test.go generated vendored Normal file
@ -0,0 +1,487 @@
// Copyright 2012, Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package pools

import (
	"errors"
	"testing"
	"time"

	"github.com/ngaut/sync2"
)

var lastId, count sync2.AtomicInt64

type TestResource struct {
	num    int64
	closed bool
}

func (tr *TestResource) Close() {
	if !tr.closed {
		count.Add(-1)
		tr.closed = true
	}
}

func (tr *TestResource) IsClosed() bool {
	return tr.closed
}

func PoolFactory() (Resource, error) {
	count.Add(1)
	return &TestResource{lastId.Add(1), false}, nil
}

func FailFactory() (Resource, error) {
	return nil, errors.New("Failed")
}

func SlowFailFactory() (Resource, error) {
	time.Sleep(10 * time.Nanosecond)
	return nil, errors.New("Failed")
}

func TestOpen(t *testing.T) {
	lastId.Set(0)
	count.Set(0)
	p := NewResourcePool(PoolFactory, 6, 6, time.Second)
	p.SetCapacity(5)
	var resources [10]Resource

	// Test Get
	for i := 0; i < 5; i++ {
		r, err := p.Get()
		resources[i] = r
		if err != nil {
			t.Errorf("Unexpected error %v", err)
		}
		_, available, _, waitCount, waitTime, _ := p.Stats()
		if available != int64(5-i-1) {
			t.Errorf("expecting %d, received %d", 5-i-1, available)
		}
		if waitCount != 0 {
			t.Errorf("expecting 0, received %d", waitCount)
		}
		if waitTime != 0 {
			t.Errorf("expecting 0, received %d", waitTime)
		}
		if lastId.Get() != int64(i+1) {
			t.Errorf("Expecting %d, received %d", i+1, lastId.Get())
		}
		if count.Get() != int64(i+1) {
			t.Errorf("Expecting %d, received %d", i+1, count.Get())
		}
	}

	// Test TryGet
	r, err := p.TryGet()
	if err != nil {
		t.Errorf("Unexpected error %v", err)
	}
	if r != nil {
		t.Errorf("Expecting nil")
	}
	for i := 0; i < 5; i++ {
		p.Put(resources[i])
		_, available, _, _, _, _ := p.Stats()
		if available != int64(i+1) {
			t.Errorf("expecting %d, received %d", 5-i-1, available)
		}
	}
	for i := 0; i < 5; i++ {
		r, err := p.TryGet()
		resources[i] = r
		if err != nil {
			t.Errorf("Unexpected error %v", err)
		}
		if r == nil {
			t.Errorf("Expecting non-nil")
		}
		if lastId.Get() != 5 {
			t.Errorf("Expecting 5, received %d", lastId.Get())
		}
		if count.Get() != 5 {
			t.Errorf("Expecting 5, received %d", count.Get())
		}
	}

	// Test that Get waits
	ch := make(chan bool)
	go func() {
		for i := 0; i < 5; i++ {
			r, err := p.Get()
			if err != nil {
				t.Errorf("Get failed: %v", err)
			}
			resources[i] = r
		}
		for i := 0; i < 5; i++ {
			p.Put(resources[i])
		}
		ch <- true
	}()
	for i := 0; i < 5; i++ {
		// Sleep to ensure the goroutine waits
		time.Sleep(10 * time.Nanosecond)
		p.Put(resources[i])
	}
	<-ch
	_, _, _, waitCount, waitTime, _ := p.Stats()
	if waitCount != 5 {
		t.Errorf("Expecting 5, received %d", waitCount)
	}
	if waitTime == 0 {
		t.Errorf("Expecting non-zero")
	}
	if lastId.Get() != 5 {
		t.Errorf("Expecting 5, received %d", lastId.Get())
	}

	// Test Close resource
	r, err = p.Get()
	if err != nil {
		t.Errorf("Unexpected error %v", err)
	}
	r.Close()
	p.Put(nil)
	if count.Get() != 4 {
		t.Errorf("Expecting 4, received %d", count.Get())
	}
	for i := 0; i < 5; i++ {
		r, err := p.Get()
		if err != nil {
			t.Errorf("Get failed: %v", err)
		}
		resources[i] = r
	}
	for i := 0; i < 5; i++ {
		p.Put(resources[i])
	}
	if count.Get() != 5 {
		t.Errorf("Expecting 5, received %d", count.Get())
	}
	if lastId.Get() != 6 {
		t.Errorf("Expecting 6, received %d", lastId.Get())
	}

	// SetCapacity
	p.SetCapacity(3)
	if count.Get() != 3 {
		t.Errorf("Expecting 3, received %d", count.Get())
	}
	if lastId.Get() != 6 {
		t.Errorf("Expecting 6, received %d", lastId.Get())
	}
	capacity, available, _, _, _, _ := p.Stats()
	if capacity != 3 {
		t.Errorf("Expecting 3, received %d", capacity)
	}
	if available != 3 {
		t.Errorf("Expecting 3, received %d", available)
	}
	p.SetCapacity(6)
	capacity, available, _, _, _, _ = p.Stats()
	if capacity != 6 {
		t.Errorf("Expecting 6, received %d", capacity)
	}
	if available != 6 {
		t.Errorf("Expecting 6, received %d", available)
	}
	for i := 0; i < 6; i++ {
		r, err := p.Get()
		if err != nil {
			t.Errorf("Get failed: %v", err)
		}
		resources[i] = r
	}
	for i := 0; i < 6; i++ {
		p.Put(resources[i])
	}
	if count.Get() != 6 {
		t.Errorf("Expecting 5, received %d", count.Get())
	}
	if lastId.Get() != 9 {
		t.Errorf("Expecting 9, received %d", lastId.Get())
	}

	// Close
	p.Close()
	capacity, available, _, _, _, _ = p.Stats()
	if capacity != 0 {
		t.Errorf("Expecting 0, received %d", capacity)
	}
	if available != 0 {
		t.Errorf("Expecting 0, received %d", available)
	}
	if count.Get() != 0 {
		t.Errorf("Expecting 0, received %d", count.Get())
	}
}

func TestShrinking(t *testing.T) {
	lastId.Set(0)
	count.Set(0)
	p := NewResourcePool(PoolFactory, 5, 5, time.Second)
	var resources [10]Resource
	// Leave one empty slot in the pool
	for i := 0; i < 4; i++ {
		r, err := p.Get()
		if err != nil {
			t.Errorf("Get failed: %v", err)
		}
		resources[i] = r
	}
	go p.SetCapacity(3)
	time.Sleep(10 * time.Nanosecond)
	stats := p.StatsJSON()
	expected := `{"Capacity": 3, "Available": 0, "MaxCapacity": 5, "WaitCount": 0, "WaitTime": 0, "IdleTimeout": 1000000000}`
	if stats != expected {
		t.Errorf(`expecting '%s', received '%s'`, expected, stats)
	}

	// TryGet is allowed when shrinking
	r, err := p.TryGet()
	if err != nil {
		t.Errorf("Unexpected error %v", err)
	}
	if r != nil {
		t.Errorf("Expecting nil")
	}

	// Get is allowed when shrinking, but it will wait
	getdone := make(chan bool)
	go func() {
		r, err := p.Get()
		if err != nil {
			t.Errorf("Unexpected error %v", err)
		}
		p.Put(r)
		getdone <- true
	}()

	// Put is allowed when shrinking. It's necessary.
	for i := 0; i < 4; i++ {
		p.Put(resources[i])
	}
	// Wait for Get test to complete
	<-getdone
	stats = p.StatsJSON()
	expected = `{"Capacity": 3, "Available": 3, "MaxCapacity": 5, "WaitCount": 0, "WaitTime": 0, "IdleTimeout": 1000000000}`
	if stats != expected {
		t.Errorf(`expecting '%s', received '%s'`, expected, stats)
	}
	if count.Get() != 3 {
		t.Errorf("Expecting 3, received %d", count.Get())
	}

	// Ensure no deadlock if SetCapacity is called after we start
	// waiting for a resource
	for i := 0; i < 3; i++ {
		resources[i], err = p.Get()
		if err != nil {
			t.Errorf("Unexpected error %v", err)
		}
	}
	// This will wait because pool is empty
	go func() {
		r, err := p.Get()
		if err != nil {
			t.Errorf("Unexpected error %v", err)
		}
		p.Put(r)
		getdone <- true
	}()
	time.Sleep(10 * time.Nanosecond)

	// This will wait till we Put
	go p.SetCapacity(2)
	time.Sleep(10 * time.Nanosecond)

	// This should not hang
	for i := 0; i < 3; i++ {
		p.Put(resources[i])
	}
	<-getdone
	capacity, available, _, _, _, _ := p.Stats()
	if capacity != 2 {
		t.Errorf("Expecting 2, received %d", capacity)
	}
	if available != 2 {
		t.Errorf("Expecting 2, received %d", available)
	}
	if count.Get() != 2 {
		t.Errorf("Expecting 2, received %d", count.Get())
	}

	// Test race condition of SetCapacity with itself
	p.SetCapacity(3)
	for i := 0; i < 3; i++ {
		resources[i], err = p.Get()
		if err != nil {
			t.Errorf("Unexpected error %v", err)
		}
	}
	// This will wait because pool is empty
	go func() {
		r, err := p.Get()
		if err != nil {
			t.Errorf("Unexpected error %v", err)
		}
		p.Put(r)
		getdone <- true
	}()
	time.Sleep(10 * time.Nanosecond)

	// This will wait till we Put
	go p.SetCapacity(2)
	time.Sleep(10 * time.Nanosecond)
	go p.SetCapacity(4)
	time.Sleep(10 * time.Nanosecond)

	// This should not hang
	for i := 0; i < 3; i++ {
		p.Put(resources[i])
	}
	<-getdone

	err = p.SetCapacity(-1)
	if err == nil {
		t.Errorf("Expecting error")
	}
	err = p.SetCapacity(255555)
	if err == nil {
		t.Errorf("Expecting error")
	}

	capacity, available, _, _, _, _ = p.Stats()
	if capacity != 4 {
		t.Errorf("Expecting 4, received %d", capacity)
	}
	if available != 4 {
		t.Errorf("Expecting 4, received %d", available)
	}
}

func TestClosing(t *testing.T) {
	lastId.Set(0)
	count.Set(0)
	p := NewResourcePool(PoolFactory, 5, 5, time.Second)
	var resources [10]Resource
	for i := 0; i < 5; i++ {
		r, err := p.Get()
		if err != nil {
			t.Errorf("Get failed: %v", err)
		}
		resources[i] = r
	}
	ch := make(chan bool)
	go func() {
		p.Close()
		ch <- true
	}()

	// Wait for goroutine to call Close
	time.Sleep(10 * time.Nanosecond)
	stats := p.StatsJSON()
	expected := `{"Capacity": 0, "Available": 0, "MaxCapacity": 5, "WaitCount": 0, "WaitTime": 0, "IdleTimeout": 1000000000}`
	if stats != expected {
		t.Errorf(`expecting '%s', received '%s'`, expected, stats)
	}

	// Put is allowed when closing
	for i := 0; i < 5; i++ {
		p.Put(resources[i])
	}

	// Wait for Close to return
	<-ch

	// SetCapacity must be ignored after Close
	err := p.SetCapacity(1)
	if err == nil {
		t.Errorf("expecting error")
	}

	stats = p.StatsJSON()
	expected = `{"Capacity": 0, "Available": 0, "MaxCapacity": 5, "WaitCount": 0, "WaitTime": 0, "IdleTimeout": 1000000000}`
	if stats != expected {
		t.Errorf(`expecting '%s', received '%s'`, expected, stats)
	}
	if lastId.Get() != 5 {
		t.Errorf("Expecting 5, received %d", count.Get())
	}
	if count.Get() != 0 {
		t.Errorf("Expecting 0, received %d", count.Get())
	}
}

func TestIdleTimeout(t *testing.T) {
	lastId.Set(0)
	count.Set(0)
	p := NewResourcePool(PoolFactory, 1, 1, 10*time.Nanosecond)
	defer p.Close()

	r, err := p.Get()
	if err != nil {
		t.Errorf("Unexpected error %v", err)
	}
	p.Put(r)
	if lastId.Get() != 1 {
		t.Errorf("Expecting 1, received %d", count.Get())
	}
	if count.Get() != 1 {
		t.Errorf("Expecting 1, received %d", count.Get())
	}
	time.Sleep(20 * time.Nanosecond)
	r, err = p.Get()
	if err != nil {
		t.Errorf("Unexpected error %v", err)
	}
	if lastId.Get() != 2 {
		t.Errorf("Expecting 2, received %d", count.Get())
	}
	if count.Get() != 1 {
		t.Errorf("Expecting 1, received %d", count.Get())
	}
	p.Put(r)
}

func TestCreateFail(t *testing.T) {
	lastId.Set(0)
	count.Set(0)
	p := NewResourcePool(FailFactory, 5, 5, time.Second)
	defer p.Close()
	if _, err := p.Get(); err.Error() != "Failed" {
		t.Errorf("Expecting Failed, received %v", err)
	}
	stats := p.StatsJSON()
	expected := `{"Capacity": 5, "Available": 5, "MaxCapacity": 5, "WaitCount": 0, "WaitTime": 0, "IdleTimeout": 1000000000}`
	if stats != expected {
		t.Errorf(`expecting '%s', received '%s'`, expected, stats)
	}
}

func TestSlowCreateFail(t *testing.T) {
	lastId.Set(0)
	count.Set(0)
	p := NewResourcePool(SlowFailFactory, 2, 2, time.Second)
	defer p.Close()
	ch := make(chan bool)
	// The third Get should not wait indefinitely
	for i := 0; i < 3; i++ {
		go func() {
			p.Get()
			ch <- true
		}()
	}
	for i := 0; i < 3; i++ {
		<-ch
	}
	_, available, _, _, _, _ := p.Stats()
	if available != 2 {
		t.Errorf("Expecting 2, received %d", available)
	}
}
214
vendor/github.com/ngaut/pools/roundrobin.go
generated
vendored
Normal file
@ -0,0 +1,214 @@
// Copyright 2012, Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package pools

import (
	"fmt"
	"sync"
	"time"
)

// RoundRobin is deprecated. Use ResourcePool instead.
// RoundRobin allows you to use a pool of resources in a round robin fashion.
type RoundRobin struct {
	mu          sync.Mutex
	available   *sync.Cond
	resources   chan fifoWrapper
	size        int64
	factory     Factory
	idleTimeout time.Duration

	// stats
	waitCount int64
	waitTime  time.Duration
}

type fifoWrapper struct {
	resource Resource
	timeUsed time.Time
}

// NewRoundRobin creates a new RoundRobin pool.
// capacity is the maximum number of resources RoundRobin will create.
// The factory used to create resources is supplied separately, via Open.
// If a resource is unused beyond idleTimeout, it's discarded.
func NewRoundRobin(capacity int, idleTimeout time.Duration) *RoundRobin {
	r := &RoundRobin{
		resources:   make(chan fifoWrapper, capacity),
		size:        0,
		idleTimeout: idleTimeout,
	}
	r.available = sync.NewCond(&r.mu)
	return r
}

// Open starts allowing the creation of resources
func (rr *RoundRobin) Open(factory Factory) {
	rr.mu.Lock()
	defer rr.mu.Unlock()
	rr.factory = factory
}

// Close empties the pool calling Close on all its resources.
// It waits for all resources to be returned (Put).
func (rr *RoundRobin) Close() {
	rr.mu.Lock()
	defer rr.mu.Unlock()
	for rr.size > 0 {
		select {
		case fw := <-rr.resources:
			go fw.resource.Close()
			rr.size--
		default:
			rr.available.Wait()
		}
	}
	rr.factory = nil
}

func (rr *RoundRobin) IsClosed() bool {
	return rr.factory == nil
}

// Get will return the next available resource. If none is available, and capacity
// has not been reached, it will create a new one using the factory. Otherwise,
// it will wait indefinitely until the next resource becomes available.
func (rr *RoundRobin) Get() (resource Resource, err error) {
	return rr.get(true)
}

// TryGet will return the next available resource. If none is available, and capacity
// has not been reached, it will create a new one using the factory. Otherwise,
// it will return nil with no error.
func (rr *RoundRobin) TryGet() (resource Resource, err error) {
	return rr.get(false)
}

func (rr *RoundRobin) get(wait bool) (resource Resource, err error) {
	rr.mu.Lock()
	defer rr.mu.Unlock()
	// Any waits in this loop will release the lock, and it will be
	// reacquired before the waits return.
	for {
		select {
		case fw := <-rr.resources:
			// Found a free resource in the channel
			if rr.idleTimeout > 0 && fw.timeUsed.Add(rr.idleTimeout).Sub(time.Now()) < 0 {
				// resource has been idle for too long. Discard & go for next.
				go fw.resource.Close()
				rr.size--
				// Nobody else should be waiting, but signal anyway.
				rr.available.Signal()
				continue
			}
			return fw.resource, nil
		default:
			// resource channel is empty
			if rr.size >= int64(cap(rr.resources)) {
				// The pool is full
				if wait {
					start := time.Now()
					rr.available.Wait()
					rr.recordWait(start)
					continue
				}
				return nil, nil
			}
			// Pool is not full. Create a resource.
			if resource, err = rr.waitForCreate(); err != nil {
				// size was decremented, and somebody could be waiting.
				rr.available.Signal()
				return nil, err
			}
			// Creation successful. Account for this by incrementing size.
			rr.size++
			return resource, err
		}
	}
}

func (rr *RoundRobin) recordWait(start time.Time) {
	rr.waitCount++
	rr.waitTime += time.Now().Sub(start)
}

func (rr *RoundRobin) waitForCreate() (resource Resource, err error) {
	// Prevent thundering herd: increment size before creating resource, and decrement after.
	rr.size++
	rr.mu.Unlock()
	defer func() {
		rr.mu.Lock()
		rr.size--
	}()
	return rr.factory()
}

// Put will return a resource to the pool. You MUST return every resource to the pool,
// even if it's closed. If a resource is closed, you should call Put(nil).
func (rr *RoundRobin) Put(resource Resource) {
	rr.mu.Lock()
	defer rr.available.Signal()
	defer rr.mu.Unlock()

	if rr.size > int64(cap(rr.resources)) {
		if resource != nil {
			go resource.Close()
		}
		rr.size--
	} else if resource == nil {
		rr.size--
	} else {
		if len(rr.resources) == cap(rr.resources) {
			panic("unexpected")
		}
		rr.resources <- fifoWrapper{resource, time.Now()}
	}
}

// SetCapacity changes the capacity of the pool.
// You can use it to expand or shrink.
func (rr *RoundRobin) SetCapacity(capacity int) error {
	rr.mu.Lock()
	defer rr.available.Broadcast()
	defer rr.mu.Unlock()

	nr := make(chan fifoWrapper, capacity)
	// This loop transfers resources from the old channel
	// to the new one, until it fills up or runs out.
	// It discards extras, if any.
	for {
		select {
		case fw := <-rr.resources:
			if len(nr) < cap(nr) {
				nr <- fw
			} else {
				go fw.resource.Close()
				rr.size--
			}
			continue
		default:
		}
		break
	}
	rr.resources = nr
	return nil
}

func (rr *RoundRobin) SetIdleTimeout(idleTimeout time.Duration) {
	rr.mu.Lock()
	defer rr.mu.Unlock()
	rr.idleTimeout = idleTimeout
}

func (rr *RoundRobin) StatsJSON() string {
	s, c, a, wc, wt, it := rr.Stats()
	return fmt.Sprintf("{\"Size\": %v, \"Capacity\": %v, \"Available\": %v, \"WaitCount\": %v, \"WaitTime\": %v, \"IdleTimeout\": %v}", s, c, a, wc, int64(wt), int64(it))
}

func (rr *RoundRobin) Stats() (size, capacity, available, waitCount int64, waitTime, idleTimeout time.Duration) {
	rr.mu.Lock()
	defer rr.mu.Unlock()
	return rr.size, int64(cap(rr.resources)), int64(len(rr.resources)), rr.waitCount, rr.waitTime, rr.idleTimeout
}
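
The `Get`/`Put` contract above is easy to get wrong, so here is a minimal usage sketch (editorial, not part of the commit). It assumes, as the surrounding code suggests, that the package's `Factory` is `func() (Resource, error)` and that `Resource` only requires a `Close()` method; the TCP address and `connResource` wrapper are placeholders for illustration.

```go
package main

import (
	"log"
	"net"
	"time"

	"github.com/ngaut/pools"
)

// connResource adapts a net.Conn to the pool's Resource interface.
// Illustrative only; not part of the vendored package.
type connResource struct {
	conn net.Conn
}

func (c *connResource) Close() { c.conn.Close() }

func main() {
	// Up to 3 resources; idle ones are discarded after a minute.
	p := pools.NewRoundRobin(3, time.Minute)
	p.Open(func() (pools.Resource, error) {
		conn, err := net.Dial("tcp", "127.0.0.1:3306") // placeholder address
		if err != nil {
			return nil, err
		}
		return &connResource{conn: conn}, nil
	})
	defer p.Close()

	r, err := p.Get()
	if err != nil {
		log.Fatal(err)
	}
	// Every Get must be balanced by a Put; a resource that was
	// closed while in use is returned as Put(nil) instead.
	p.Put(r)
}
```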
126
vendor/github.com/ngaut/pools/roundrobin_test.go
generated
vendored
Normal file
@ -0,0 +1,126 @@
// Copyright 2012, Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package pools

import (
	"testing"
	"time"
)

func TestPool(t *testing.T) {
	lastId.Set(0)
	p := NewRoundRobin(5, time.Duration(10e9))
	p.Open(PoolFactory)
	defer p.Close()

	for i := 0; i < 2; i++ {
		r, err := p.TryGet()
		if err != nil {
			t.Errorf("TryGet failed: %v", err)
		}
		if r.(*TestResource).num != 1 {
			t.Errorf("Expecting 1, received %d", r.(*TestResource).num)
		}
		p.Put(r)
	}
	// p = [1]

	all := make([]Resource, 5)
	for i := 0; i < 5; i++ {
		if all[i], _ = p.TryGet(); all[i] == nil {
			t.Errorf("TryGet failed with nil")
		}
	}
	// all = [1-5], p is empty
	if none, _ := p.TryGet(); none != nil {
		t.Errorf("TryGet failed with non-nil")
	}

	ch := make(chan bool)
	go ResourceWait(p, t, ch)
	time.Sleep(1e8)
	for i := 0; i < 5; i++ {
		p.Put(all[i])
	}
	// p = [1-5]
	<-ch
	// p = [1-5]
	if p.waitCount != 1 {
		t.Errorf("Expecting 1, received %d", p.waitCount)
	}

	for i := 0; i < 5; i++ {
		all[i], _ = p.Get()
	}
	// all = [1-5], p is empty
	all[0].(*TestResource).Close()
	p.Put(nil)
	for i := 1; i < 5; i++ {
		p.Put(all[i])
	}
	// p = [2-5]

	for i := 0; i < 4; i++ {
		r, _ := p.Get()
		if r.(*TestResource).num != int64(i+2) {
			t.Errorf("Expecting %d, received %d", i+2, r.(*TestResource).num)
		}
		p.Put(r)
	}

	p.SetCapacity(3)
	// p = [2-4]
	if p.size != 3 {
		t.Errorf("Expecting 3, received %d", p.size)
	}

	p.SetIdleTimeout(time.Duration(1e8))
	time.Sleep(2e8)
	r, _ := p.Get()
	if r.(*TestResource).num != 6 {
		t.Errorf("Expecting 6, received %d", r.(*TestResource).num)
	}
	p.Put(r)
	// p = [6]
}

func TestPoolFail(t *testing.T) {
	p := NewRoundRobin(5, time.Duration(10e9))
	p.Open(FailFactory)
	defer p.Close()
	if _, err := p.Get(); err.Error() != "Failed" {
		t.Errorf("Expecting Failed, received %v", err)
	}
}

func TestPoolFullFail(t *testing.T) {
	p := NewRoundRobin(2, time.Duration(10e9))
	p.Open(SlowFailFactory)
	defer p.Close()
	ch := make(chan bool)
	// The third get should not wait indefinitely
	for i := 0; i < 3; i++ {
		go func() {
			p.Get()
			ch <- true
		}()
	}
	for i := 0; i < 3; i++ {
		<-ch
	}
}

func ResourceWait(p *RoundRobin, t *testing.T, ch chan bool) {
	for i := 0; i < 5; i++ {
		if r, err := p.Get(); err != nil {
			t.Errorf("Get failed: %v", err)
		} else if r.(*TestResource).num != int64(i+1) {
			t.Errorf("Expecting %d, received %d", i+1, r.(*TestResource).num)
		} else {
			p.Put(r)
		}
	}
	ch <- true
}
28
vendor/github.com/ngaut/pools/vitess_license
generated
vendored
Normal file
@ -0,0 +1,28 @@
Copyright 2012, Google Inc.
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

    * Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
    * Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
114
vendor/github.com/ngaut/sync2/atomic.go
generated
vendored
Normal file
@ -0,0 +1,114 @@
// Copyright 2013, Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package sync2

import (
	"sync"
	"sync/atomic"
	"time"
)

type AtomicInt32 int32

func (i *AtomicInt32) Add(n int32) int32 {
	return atomic.AddInt32((*int32)(i), n)
}

func (i *AtomicInt32) Set(n int32) {
	atomic.StoreInt32((*int32)(i), n)
}

func (i *AtomicInt32) Get() int32 {
	return atomic.LoadInt32((*int32)(i))
}

func (i *AtomicInt32) CompareAndSwap(oldval, newval int32) (swapped bool) {
	return atomic.CompareAndSwapInt32((*int32)(i), oldval, newval)
}

type AtomicUint32 uint32

func (i *AtomicUint32) Add(n uint32) uint32 {
	return atomic.AddUint32((*uint32)(i), n)
}

func (i *AtomicUint32) Set(n uint32) {
	atomic.StoreUint32((*uint32)(i), n)
}

func (i *AtomicUint32) Get() uint32 {
	return atomic.LoadUint32((*uint32)(i))
}

func (i *AtomicUint32) CompareAndSwap(oldval, newval uint32) (swapped bool) {
	return atomic.CompareAndSwapUint32((*uint32)(i), oldval, newval)
}

type AtomicInt64 int64

func (i *AtomicInt64) Add(n int64) int64 {
	return atomic.AddInt64((*int64)(i), n)
}

func (i *AtomicInt64) Set(n int64) {
	atomic.StoreInt64((*int64)(i), n)
}

func (i *AtomicInt64) Get() int64 {
	return atomic.LoadInt64((*int64)(i))
}

func (i *AtomicInt64) CompareAndSwap(oldval, newval int64) (swapped bool) {
	return atomic.CompareAndSwapInt64((*int64)(i), oldval, newval)
}

type AtomicDuration int64

func (d *AtomicDuration) Add(duration time.Duration) time.Duration {
	return time.Duration(atomic.AddInt64((*int64)(d), int64(duration)))
}

func (d *AtomicDuration) Set(duration time.Duration) {
	atomic.StoreInt64((*int64)(d), int64(duration))
}

func (d *AtomicDuration) Get() time.Duration {
	return time.Duration(atomic.LoadInt64((*int64)(d)))
}

func (d *AtomicDuration) CompareAndSwap(oldval, newval time.Duration) (swapped bool) {
	return atomic.CompareAndSwapInt64((*int64)(d), int64(oldval), int64(newval))
}

// AtomicString gives you atomic-style APIs for string, but
// it's only a convenience wrapper that uses a mutex. So, it's
// not as efficient as the rest of the atomic types.
type AtomicString struct {
	mu  sync.Mutex
	str string
}

func (s *AtomicString) Set(str string) {
	s.mu.Lock()
	s.str = str
	s.mu.Unlock()
}

func (s *AtomicString) Get() string {
	s.mu.Lock()
	str := s.str
	s.mu.Unlock()
	return str
}

func (s *AtomicString) CompareAndSwap(oldval, newval string) (swapped bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.str == oldval {
		s.str = newval
		return true
	}
	return false
}
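
A quick sketch (editorial, not part of the commit) of how these wrappers are typically used; the counter value and goroutine count are arbitrary:

```go
package main

import (
	"fmt"
	"sync"

	"github.com/ngaut/sync2"
)

func main() {
	var counter sync2.AtomicInt64
	var wg sync.WaitGroup

	// 100 goroutines increment the counter with no explicit lock.
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			counter.Add(1)
		}()
	}
	wg.Wait()
	fmt.Println(counter.Get()) // 100

	// CompareAndSwap only succeeds when the current value matches.
	if counter.CompareAndSwap(100, 0) {
		fmt.Println("reset to", counter.Get()) // reset to 0
	}
}
```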
32
vendor/github.com/ngaut/sync2/atomic_test.go
generated
vendored
Normal file
@ -0,0 +1,32 @@
// Copyright 2013, Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package sync2

import (
	"testing"
)

func TestAtomicString(t *testing.T) {
	var s AtomicString
	if s.Get() != "" {
		t.Errorf("want empty, got %s", s.Get())
	}
	s.Set("a")
	if s.Get() != "a" {
		t.Errorf("want a, got %s", s.Get())
	}
	if s.CompareAndSwap("b", "c") {
		t.Errorf("want false, got true")
	}
	if s.Get() != "a" {
		t.Errorf("want a, got %s", s.Get())
	}
	if !s.CompareAndSwap("a", "c") {
		t.Errorf("want true, got false")
	}
	if s.Get() != "c" {
		t.Errorf("want c, got %s", s.Get())
	}
}
56
vendor/github.com/ngaut/sync2/cond.go
generated
vendored
Normal file
@ -0,0 +1,56 @@
// Copyright 2013, Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package sync2

import (
	"sync"
)

// Cond is an alternate implementation of sync.Cond
type Cond struct {
	L       sync.Locker
	sema    chan struct{}
	waiters AtomicInt64
}

func NewCond(l sync.Locker) *Cond {
	return &Cond{L: l, sema: make(chan struct{})}
}

func (c *Cond) Wait() {
	c.waiters.Add(1)
	c.L.Unlock()
	<-c.sema
	c.L.Lock()
}

func (c *Cond) Signal() {
	for {
		w := c.waiters.Get()
		if w == 0 {
			return
		}
		if c.waiters.CompareAndSwap(w, w-1) {
			break
		}
	}
	c.sema <- struct{}{}
}

func (c *Cond) Broadcast() {
	var w int64
	for {
		w = c.waiters.Get()
		if w == 0 {
			return
		}
		if c.waiters.CompareAndSwap(w, 0) {
			break
		}
	}
	for i := int64(0); i < w; i++ {
		c.sema <- struct{}{}
	}
}
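
`Cond` deliberately mirrors the `sync.Cond` API, so the usual wait-in-a-loop pattern carries over unchanged. A minimal sketch (editorial, not part of the commit):

```go
package main

import (
	"fmt"
	"sync"

	"github.com/ngaut/sync2"
)

func main() {
	var mu sync.Mutex
	c := sync2.NewCond(&mu)
	ready := false

	go func() {
		mu.Lock()
		ready = true
		c.Signal() // wake at most one waiter
		mu.Unlock()
	}()

	mu.Lock()
	// Always re-check the condition in a loop: Wait releases mu while
	// blocked and reacquires it before returning.
	for !ready {
		c.Wait()
	}
	mu.Unlock()
	fmt.Println("ready")
}
```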
276
vendor/github.com/ngaut/sync2/cond_test.go
generated
vendored
Normal file
@ -0,0 +1,276 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package sync2

import (
	"fmt"
	"runtime"
	"sync"
	"testing"
)

func TestCondSignal(t *testing.T) {
	var m sync.Mutex
	c := NewCond(&m)
	n := 2
	running := make(chan bool, n)
	awake := make(chan bool, n)
	for i := 0; i < n; i++ {
		go func() {
			m.Lock()
			running <- true
			c.Wait()
			awake <- true
			m.Unlock()
		}()
	}
	for i := 0; i < n; i++ {
		<-running // Wait for everyone to run.
	}
	for n > 0 {
		select {
		case <-awake:
			t.Fatal("goroutine not asleep")
		default:
		}
		m.Lock()
		c.Signal()
		m.Unlock()
		<-awake // Will deadlock if no goroutine wakes up
		select {
		case <-awake:
			t.Fatal("too many goroutines awake")
		default:
		}
		n--
	}
	c.Signal()
}

func TestCondSignalGenerations(t *testing.T) {
	var m sync.Mutex
	c := NewCond(&m)
	n := 100
	running := make(chan bool, n)
	awake := make(chan int, n)
	for i := 0; i < n; i++ {
		go func(i int) {
			m.Lock()
			running <- true
			c.Wait()
			awake <- i
			m.Unlock()
		}(i)
		if i > 0 {
			a := <-awake
			if a != i-1 {
				t.Fatalf("wrong goroutine woke up: want %d, got %d", i-1, a)
			}
		}
		<-running
		m.Lock()
		c.Signal()
		m.Unlock()
	}
}

func TestCondBroadcast(t *testing.T) {
	var m sync.Mutex
	c := NewCond(&m)
	n := 200
	running := make(chan int, n)
	awake := make(chan int, n)
	exit := false
	for i := 0; i < n; i++ {
		go func(g int) {
			m.Lock()
			for !exit {
				running <- g
				c.Wait()
				awake <- g
			}
			m.Unlock()
		}(i)
	}
	for i := 0; i < n; i++ {
		for i := 0; i < n; i++ {
			<-running // Will deadlock unless n are running.
		}
		if i == n-1 {
			m.Lock()
			exit = true
			m.Unlock()
		}
		select {
		case <-awake:
			t.Fatal("goroutine not asleep")
		default:
		}
		m.Lock()
		c.Broadcast()
		m.Unlock()
		seen := make([]bool, n)
		for i := 0; i < n; i++ {
			g := <-awake
			if seen[g] {
				t.Fatal("goroutine woke up twice")
			}
			seen[g] = true
		}
	}
	select {
	case <-running:
		t.Fatal("goroutine did not exit")
	default:
	}
	c.Broadcast()
}

func TestRace(t *testing.T) {
	x := 0
	c := NewCond(&sync.Mutex{})
	done := make(chan bool)
	go func() {
		c.L.Lock()
		x = 1
		c.Wait()
		if x != 2 {
			t.Fatal("want 2")
		}
		x = 3
		c.Signal()
		c.L.Unlock()
		done <- true
	}()
	go func() {
		c.L.Lock()
		for {
			if x == 1 {
				x = 2
				c.Signal()
				break
			}
			c.L.Unlock()
			runtime.Gosched()
			c.L.Lock()
		}
		c.L.Unlock()
		done <- true
	}()
	go func() {
		c.L.Lock()
		for {
			if x == 2 {
				c.Wait()
				if x != 3 {
					t.Fatal("want 3")
				}
				break
			}
			if x == 3 {
				break
			}
			c.L.Unlock()
			runtime.Gosched()
			c.L.Lock()
		}
		c.L.Unlock()
		done <- true
	}()
	<-done
	<-done
	<-done
}

// Bench: Rename this function to TestBench for running benchmarks
func Bench(t *testing.T) {
	waitvals := []int{1, 2, 4, 8}
	maxprocs := []int{1, 2, 4}
	fmt.Printf("procs\twaiters\told\tnew\tdelta\n")
	for _, procs := range maxprocs {
		runtime.GOMAXPROCS(procs)
		for _, waiters := range waitvals {
			oldbench := func(b *testing.B) {
				benchmarkCond(b, waiters)
			}
			oldbr := testing.Benchmark(oldbench)
			newbench := func(b *testing.B) {
				benchmarkCond2(b, waiters)
			}
			newbr := testing.Benchmark(newbench)
			oldns := oldbr.NsPerOp()
			newns := newbr.NsPerOp()
			percent := float64(newns-oldns) * 100.0 / float64(oldns)
			fmt.Printf("%d\t%d\t%d\t%d\t%6.2f%%\n", procs, waiters, oldns, newns, percent)
		}
	}
}

func benchmarkCond2(b *testing.B, waiters int) {
	c := NewCond(&sync.Mutex{})
	done := make(chan bool)
	id := 0

	for routine := 0; routine < waiters+1; routine++ {
		go func() {
			for i := 0; i < b.N; i++ {
				c.L.Lock()
				if id == -1 {
					c.L.Unlock()
					break
				}
				id++
				if id == waiters+1 {
					id = 0
					c.Broadcast()
				} else {
					c.Wait()
				}
				c.L.Unlock()
			}
			c.L.Lock()
			id = -1
			c.Broadcast()
			c.L.Unlock()
			done <- true
		}()
	}
	for routine := 0; routine < waiters+1; routine++ {
		<-done
	}
}

func benchmarkCond(b *testing.B, waiters int) {
	c := sync.NewCond(&sync.Mutex{})
	done := make(chan bool)
	id := 0

	for routine := 0; routine < waiters+1; routine++ {
		go func() {
			for i := 0; i < b.N; i++ {
				c.L.Lock()
				if id == -1 {
					c.L.Unlock()
					break
				}
				id++
				if id == waiters+1 {
					id = 0
					c.Broadcast()
				} else {
					c.Wait()
				}
				c.L.Unlock()
			}
			c.L.Lock()
			id = -1
			c.Broadcast()
			c.L.Unlock()
			done <- true
		}()
	}
	for routine := 0; routine < waiters+1; routine++ {
		<-done
	}
}
55
vendor/github.com/ngaut/sync2/semaphore.go
generated
vendored
Normal file
@ -0,0 +1,55 @@
// Copyright 2012, Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package sync2

// What's in a name? Channels have all you need to emulate a counting
// semaphore with a boatload of extra functionality. However, in some
// cases, you just want a familiar API.

import (
	"time"
)

// Semaphore is a counting semaphore with the option to
// specify a timeout.
type Semaphore struct {
	slots   chan struct{}
	timeout time.Duration
}

// NewSemaphore creates a Semaphore. The count parameter must be a positive
// number. A timeout of zero means that there is no timeout.
func NewSemaphore(count int, timeout time.Duration) *Semaphore {
	sem := &Semaphore{
		slots:   make(chan struct{}, count),
		timeout: timeout,
	}
	for i := 0; i < count; i++ {
		sem.slots <- struct{}{}
	}
	return sem
}

// Acquire returns true on successful acquisition, and
// false on a timeout.
func (sem *Semaphore) Acquire() bool {
	if sem.timeout == 0 {
		<-sem.slots
		return true
	}
	select {
	case <-sem.slots:
		return true
	case <-time.After(sem.timeout):
		return false
	}
}

// Release releases the acquired semaphore. You must
// not release more than the number of semaphores you've
// acquired.
func (sem *Semaphore) Release() {
	sem.slots <- struct{}{}
}
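
A short sketch (editorial, not part of the commit) showing the timeout behavior that the tests below exercise; the slot count and timeout are arbitrary:

```go
package main

import (
	"fmt"
	"time"

	"github.com/ngaut/sync2"
)

func main() {
	// At most 2 concurrent slots; Acquire gives up after 100ms.
	sem := sync2.NewSemaphore(2, 100*time.Millisecond)

	sem.Acquire()
	sem.Acquire()

	// Both slots are taken, so this Acquire times out and returns false.
	if !sem.Acquire() {
		fmt.Println("timed out waiting for a slot")
	}

	sem.Release()
	if sem.Acquire() { // a slot is free again
		fmt.Println("acquired")
	}
}
```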
41
vendor/github.com/ngaut/sync2/semaphore_test.go
generated
vendored
Normal file
@ -0,0 +1,41 @@
// Copyright 2012, Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package sync2

import (
	"testing"
	"time"
)

func TestSemaNoTimeout(t *testing.T) {
	s := NewSemaphore(1, 0)
	s.Acquire()
	released := false
	go func() {
		time.Sleep(10 * time.Millisecond)
		released = true
		s.Release()
	}()
	s.Acquire()
	if !released {
		t.Errorf("want true, got false")
	}
}

func TestSemaTimeout(t *testing.T) {
	s := NewSemaphore(1, 5*time.Millisecond)
	s.Acquire()
	go func() {
		time.Sleep(10 * time.Millisecond)
		s.Release()
	}()
	if ok := s.Acquire(); ok {
		t.Errorf("want false, got true")
	}
	time.Sleep(10 * time.Millisecond)
	if ok := s.Acquire(); !ok {
		t.Errorf("want true, got false")
	}
}
121
vendor/github.com/ngaut/sync2/service_manager.go
generated
vendored
Normal file
@ -0,0 +1,121 @@
// Copyright 2013, Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package sync2

import (
	"sync"
)

// These are the three predefined states of a service.
const (
	SERVICE_STOPPED = iota
	SERVICE_RUNNING
	SERVICE_SHUTTING_DOWN
)

var stateNames = []string{
	"Stopped",
	"Running",
	"ShuttingDown",
}

// ServiceManager manages the state of a service through its lifecycle.
type ServiceManager struct {
	mu    sync.Mutex
	wg    sync.WaitGroup
	err   error // err is the error returned from the service function.
	state AtomicInt64
	// shutdown is created when the service starts and is closed when the service
	// enters the SERVICE_SHUTTING_DOWN state.
	shutdown chan struct{}
}

// Go tries to change the state from SERVICE_STOPPED to SERVICE_RUNNING.
//
// If the current state is not SERVICE_STOPPED (already running), it returns
// false immediately.
//
// On successful transition, it launches the service as a goroutine and returns
// true. The service function is responsible for returning on its own when
// requested, either by regularly checking svc.IsRunning(), or by waiting for
// the svc.ShuttingDown channel to be closed.
//
// When the service func returns, the state is reverted to SERVICE_STOPPED.
func (svm *ServiceManager) Go(service func(svc *ServiceContext) error) bool {
	svm.mu.Lock()
	defer svm.mu.Unlock()
	if !svm.state.CompareAndSwap(SERVICE_STOPPED, SERVICE_RUNNING) {
		return false
	}
	svm.wg.Add(1)
	svm.err = nil
	svm.shutdown = make(chan struct{})
	go func() {
		svm.err = service(&ServiceContext{ShuttingDown: svm.shutdown})
		svm.state.Set(SERVICE_STOPPED)
		svm.wg.Done()
	}()
	return true
}

// Stop tries to change the state from SERVICE_RUNNING to SERVICE_SHUTTING_DOWN.
// If the current state is not SERVICE_RUNNING, it returns false immediately.
// On successful transition, it waits for the service to finish, and returns true.
// You are allowed to Go() again after a Stop().
func (svm *ServiceManager) Stop() bool {
	svm.mu.Lock()
	defer svm.mu.Unlock()
	if !svm.state.CompareAndSwap(SERVICE_RUNNING, SERVICE_SHUTTING_DOWN) {
		return false
	}
	// Signal the service that we've transitioned to SERVICE_SHUTTING_DOWN.
	close(svm.shutdown)
	svm.shutdown = nil
	svm.wg.Wait()
	return true
}

// Wait waits for the service to terminate if it's currently running.
func (svm *ServiceManager) Wait() {
	svm.wg.Wait()
}

// Join waits for the service to terminate and returns the value returned by the
// service function.
func (svm *ServiceManager) Join() error {
	svm.wg.Wait()
	return svm.err
}

// State returns the current state of the service.
// This should only be used to report the current state.
func (svm *ServiceManager) State() int64 {
	return svm.state.Get()
}

// StateName returns the name of the current state.
func (svm *ServiceManager) StateName() string {
	return stateNames[svm.State()]
}

// ServiceContext is passed into the service function to give it access to
// information about the running service.
type ServiceContext struct {
	// ShuttingDown is a channel that the service can select on to be notified
	// when it should shut down. The channel is closed when the state transitions
	// from SERVICE_RUNNING to SERVICE_SHUTTING_DOWN.
	ShuttingDown chan struct{}
}

// IsRunning returns true if the ServiceContext.ShuttingDown channel has not
// been closed yet.
func (svc *ServiceContext) IsRunning() bool {
	select {
	case <-svc.ShuttingDown:
		return false
	default:
		return true
	}
}
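
A minimal lifecycle sketch (editorial, not part of the commit); the polling interval is arbitrary:

```go
package main

import (
	"fmt"
	"time"

	"github.com/ngaut/sync2"
)

func main() {
	var sm sync2.ServiceManager

	sm.Go(func(svc *sync2.ServiceContext) error {
		// Either poll IsRunning() in a work loop like this,
		// or select on svc.ShuttingDown in an event loop.
		for svc.IsRunning() {
			time.Sleep(10 * time.Millisecond)
		}
		return nil
	})

	fmt.Println(sm.StateName()) // Running
	sm.Stop()                   // closes ShuttingDown and waits for the func to return
	fmt.Println(sm.StateName()) // Stopped
}
```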
176
vendor/github.com/ngaut/sync2/service_manager_test.go
generated
vendored
Normal file
@ -0,0 +1,176 @@
// Copyright 2013, Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package sync2

import (
	"fmt"
	"testing"
	"time"
)

type testService struct {
	activated AtomicInt64
	t         *testing.T
}

func (ts *testService) service(svc *ServiceContext) error {
	if !ts.activated.CompareAndSwap(0, 1) {
		ts.t.Fatalf("service called more than once")
	}
	for svc.IsRunning() {
		time.Sleep(10 * time.Millisecond)
	}
	if !ts.activated.CompareAndSwap(1, 0) {
		ts.t.Fatalf("service ended more than once")
	}
	return nil
}

func (ts *testService) selectService(svc *ServiceContext) error {
	if !ts.activated.CompareAndSwap(0, 1) {
		ts.t.Fatalf("service called more than once")
	}
serviceLoop:
	for svc.IsRunning() {
		select {
		case <-time.After(1 * time.Second):
			ts.t.Errorf("service didn't stop when shutdown channel was closed")
		case <-svc.ShuttingDown:
			break serviceLoop
		}
	}
	if !ts.activated.CompareAndSwap(1, 0) {
		ts.t.Fatalf("service ended more than once")
	}
	return nil
}

func TestServiceManager(t *testing.T) {
	ts := &testService{t: t}
	var sm ServiceManager
	if sm.StateName() != "Stopped" {
		t.Errorf("want Stopped, got %s", sm.StateName())
	}
	result := sm.Go(ts.service)
	if !result {
		t.Errorf("want true, got false")
	}
	if sm.StateName() != "Running" {
		t.Errorf("want Running, got %s", sm.StateName())
	}
	time.Sleep(5 * time.Millisecond)
	if val := ts.activated.Get(); val != 1 {
		t.Errorf("want 1, got %d", val)
	}
	result = sm.Go(ts.service)
	if result {
		t.Errorf("want false, got true")
	}
	result = sm.Stop()
	if !result {
		t.Errorf("want true, got false")
	}
	if val := ts.activated.Get(); val != 0 {
		t.Errorf("want 0, got %d", val)
	}
	result = sm.Stop()
	if result {
		t.Errorf("want false, got true")
	}
	sm.state.Set(SERVICE_SHUTTING_DOWN)
	if sm.StateName() != "ShuttingDown" {
		t.Errorf("want ShuttingDown, got %s", sm.StateName())
	}
}

func TestServiceManagerSelect(t *testing.T) {
	ts := &testService{t: t}
	var sm ServiceManager
	if sm.StateName() != "Stopped" {
		t.Errorf("want Stopped, got %s", sm.StateName())
	}
	result := sm.Go(ts.selectService)
	if !result {
		t.Errorf("want true, got false")
	}
	if sm.StateName() != "Running" {
		t.Errorf("want Running, got %s", sm.StateName())
	}
	time.Sleep(5 * time.Millisecond)
	if val := ts.activated.Get(); val != 1 {
		t.Errorf("want 1, got %d", val)
	}
	result = sm.Go(ts.service)
	if result {
		t.Errorf("want false, got true")
	}
	result = sm.Stop()
	if !result {
		t.Errorf("want true, got false")
	}
	if val := ts.activated.Get(); val != 0 {
		t.Errorf("want 0, got %d", val)
	}
	result = sm.Stop()
	if result {
		t.Errorf("want false, got true")
	}
	sm.state.Set(SERVICE_SHUTTING_DOWN)
	if sm.StateName() != "ShuttingDown" {
		t.Errorf("want ShuttingDown, got %s", sm.StateName())
	}
}

func TestServiceManagerWaitNotRunning(t *testing.T) {
	done := make(chan struct{})
	var sm ServiceManager
	go func() {
		sm.Wait()
		close(done)
	}()
	select {
	case <-done:
	case <-time.After(1 * time.Second):
		t.Errorf("Wait() blocked even though service wasn't running.")
	}
}

func TestServiceManagerWait(t *testing.T) {
	done := make(chan struct{})
	stop := make(chan struct{})
	var sm ServiceManager
	sm.Go(func(*ServiceContext) error {
		<-stop
		return nil
	})
	go func() {
		sm.Wait()
		close(done)
	}()
	time.Sleep(100 * time.Millisecond)
	select {
	case <-done:
		t.Errorf("Wait() didn't block while service was still running.")
	default:
	}
	close(stop)
	select {
	case <-done:
	case <-time.After(100 * time.Millisecond):
		t.Errorf("Wait() didn't unblock when service stopped.")
	}
}

func TestServiceManagerJoin(t *testing.T) {
	want := "error 123"
	var sm ServiceManager
	sm.Go(func(*ServiceContext) error {
		return fmt.Errorf("error 123")
	})
	if got := sm.Join().Error(); got != want {
		t.Errorf("Join().Error() = %#v, want %#v", got, want)
	}
}
28
vendor/github.com/ngaut/sync2/vitess_license
generated
vendored
Normal file
@ -0,0 +1,28 @@
Copyright 2012, Google Inc.
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

    * Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
    * Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
201
vendor/github.com/ngaut/systimemon/LICENSE
generated
vendored
Normal file
@ -0,0 +1,201 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "{}"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright {yyyy} {name of copyright owner}

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
2
vendor/github.com/ngaut/systimemon/README.md
generated
vendored
Normal file
@ -0,0 +1,2 @@
# systimemon
System time monitor
24
vendor/github.com/ngaut/systimemon/systime_mon.go
generated
vendored
Normal file
@ -0,0 +1,24 @@
package systimemon

import (
	"time"

	"github.com/ngaut/log"
)

// StartMonitor calls systimeErrHandler whenever the system time jumps backward.
func StartMonitor(now func() time.Time, systimeErrHandler func()) {
	log.Info("start system time monitor")
	tick := time.NewTicker(100 * time.Millisecond)
	defer tick.Stop()
	for {
		last := now()
		select {
		case <-tick.C:
			if now().Sub(last) < 0 {
				log.Errorf("system time jump backward, last:%v", last)
				systimeErrHandler()
			}
		}
	}
}
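
A usage sketch (editorial, not part of the commit). `log.Errorf` comes from the same `github.com/ngaut/log` package the monitor itself uses; the sleep stands in for the application's real work:

```go
package main

import (
	"time"

	"github.com/ngaut/log"
	"github.com/ngaut/systimemon"
)

func main() {
	// Run the monitor in the background; the callback fires whenever
	// the clock is observed to have moved backwards between two ticks.
	go systimemon.StartMonitor(time.Now, func() {
		log.Errorf("system clock moved backwards; timers may misbehave")
	})

	time.Sleep(time.Second) // stand-in for the application's real work
}
```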
28
vendor/github.com/ngaut/systimemon/systime_mon_test.go
generated
vendored
Normal file
@ -0,0 +1,28 @@
package systimemon

import (
	"testing"
	"time"
)

func TestSystimeMonitor(t *testing.T) {
	jumpBackward := false
	triggered := false
	go StartMonitor(
		func() time.Time {
			if !triggered {
				triggered = true
				return time.Now()
			}
			return time.Now().Add(-2 * time.Second)
		}, func() {
			jumpBackward = true
		})

	time.Sleep(1 * time.Second)

	if !jumpBackward {
		t.Error("should detect time error")
	}
}
76
vendor/github.com/outbrain/golib/tests/spec.go
generated
vendored
Normal file
@ -0,0 +1,76 @@
package tests

import (
	"testing"
)

// Spec is an access point to test Expectations
type Spec struct {
	t *testing.T
}

// S generates a spec. You will want to use it once in a test file, once in a test or once per each check
func S(t *testing.T) *Spec {
	return &Spec{t: t}
}

// ExpectNil expects given value to be nil, or errors
func (spec *Spec) ExpectNil(actual interface{}) {
	if actual == nil {
		return
	}
	spec.t.Errorf("Expected %+v to be nil", actual)
}

// ExpectNotNil expects given value to be not nil, or errors
func (spec *Spec) ExpectNotNil(actual interface{}) {
	if actual != nil {
		return
	}
	spec.t.Errorf("Expected %+v to be not nil", actual)
}

// ExpectEquals expects given values to be equal (comparison via `==`), or errors
func (spec *Spec) ExpectEquals(actual, value interface{}) {
	if actual == value {
		return
	}
	spec.t.Errorf("Expected %+v, got %+v", value, actual)
}

// ExpectNotEquals expects given values to be nonequal (comparison via `==`), or errors
func (spec *Spec) ExpectNotEquals(actual, value interface{}) {
	if !(actual == value) {
		return
	}
	spec.t.Errorf("Expected not %+v", value)
}

// ExpectEqualsAny expects given actual to equal (comparison via `==`) at least one of given values, or errors
func (spec *Spec) ExpectEqualsAny(actual interface{}, values ...interface{}) {
	for _, value := range values {
		if actual == value {
			return
		}
	}
	spec.t.Errorf("Expected %+v to equal any of given values", actual)
}

// ExpectNotEqualsAny expects given actual to be nonequal (comparison via `==`) to any of given values, or errors
func (spec *Spec) ExpectNotEqualsAny(actual interface{}, values ...interface{}) {
	for _, value := range values {
		if actual == value {
			spec.t.Errorf("Expected not %+v", value)
		}
	}
}

// ExpectFalse expects given values to be false, or errors
func (spec *Spec) ExpectFalse(actual interface{}) {
	spec.ExpectEquals(actual, false)
}

// ExpectTrue expects given values to be true, or errors
func (spec *Spec) ExpectTrue(actual interface{}) {
	spec.ExpectEquals(actual, true)
}
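
A sketch of the helper inside a test (editorial, not part of the commit; the values under test are placeholders):

```go
package tests

import "testing"

func TestSpecExample(t *testing.T) {
	s := S(t)

	sum := 2 + 2
	s.ExpectEquals(sum, 4)
	s.ExpectNotEquals(sum, 5)
	s.ExpectEqualsAny(sum, 3, 4, 5)
	s.ExpectTrue(sum == 4)

	var err error
	s.ExpectNil(err) // a nil error stays nil as interface{}
}
```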
3
vendor/github.com/siddontang/go-mysql/.gitignore
generated
vendored
Normal file
@ -0,0 +1,3 @@
var
bin
.idea
32
vendor/github.com/siddontang/go-mysql/.travis.yml
generated
vendored
Normal file
@ -0,0 +1,32 @@
language: go

go:
  - 1.6
  - 1.7

dist: trusty
sudo: required
addons:
  apt:
    packages:
      - mysql-server-5.6
      - mysql-client-core-5.6
      - mysql-client-5.6

before_script:
  # stop mysql and use row-based format binlog
  - "sudo /etc/init.d/mysql stop || true"
  - "echo '[mysqld]' | sudo tee /etc/mysql/conf.d/replication.cnf"
  - "echo 'server-id=1' | sudo tee -a /etc/mysql/conf.d/replication.cnf"
  - "echo 'log-bin=mysql' | sudo tee -a /etc/mysql/conf.d/replication.cnf"
  - "echo 'binlog-format = row' | sudo tee -a /etc/mysql/conf.d/replication.cnf"

  # Start mysql (avoid errors to have logs)
  - "sudo /etc/init.d/mysql start || true"
  - "sudo tail -1000 /var/log/syslog"

  - mysql -e "CREATE DATABASE IF NOT EXISTS test;" -uroot

script:
  - make test
20
vendor/github.com/siddontang/go-mysql/LICENSE
generated
vendored
Normal file
@ -0,0 +1,20 @@
The MIT License (MIT)

Copyright (c) 2014 siddontang

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.