Merge branch 'master' into minor_changes
Commit: d32304520c
@@ -1,7 +1,9 @@
 # http://docs.travis-ci.com/user/languages/go/
 language: go

-go: 1.9
+go:
+- "1.9"
+- "1.10"

 os:
 - linux
@@ -1 +1 @@
-1.0.46
+1.0.47
@@ -69,6 +69,10 @@ This is somewhat similar to a Nagios `n`-times test, where `n` in our case is al

 Optional. Default is `safe`. See more discussion in [`cut-over`](cut-over.md)

+### cut-over-lock-timeout-seconds
+
+Default `3`. Max number of seconds to hold locks on tables while attempting to cut-over (retry attempted when lock exceeds timeout).
+
 ### discard-foreign-keys

 **Danger**: this flag will _silently_ discard any foreign keys existing on your table.
@@ -107,6 +111,10 @@ While the ongoing estimated number of rows is still heuristic, it's almost exact

 Without this parameter, migration is a _noop_: testing table creation and validity of migration, but not touching data.

+### force-table-names
+
+Table name prefix to be used on the temporary tables.
+
 ### gcp

 Add this flag when executing on a 1st generation Google Cloud Platform (GCP).
@@ -125,6 +133,10 @@ We think `gh-ost` should not take chances or make assumptions about the user's t

 See [`initially-drop-ghost-table`](#initially-drop-ghost-table)

+### initially-drop-socket-file
+
+Default False. Should `gh-ost` forcibly delete an existing socket file. Be careful: this might drop the socket file of a running migration!
+
 ### max-lag-millis

 On a replication topology, this is perhaps the most important migration throttling factor: the maximum lag allowed for migration to work. If lag exceeds this value, migration throttles.
@@ -169,6 +181,10 @@ See [`approve-renamed-columns`](#approve-renamed-columns)

 Issue the migration on a replica; do not modify data on master. Useful for validating, testing and benchmarking. See [`testing-on-replica`](testing-on-replica.md)

+### test-on-replica-skip-replica-stop
+
+Default `False`. When `--test-on-replica` is enabled, do not issue commands to stop replication (requires `--test-on-replica`).
+
 ### throttle-control-replicas

 Provide a comma-delimited list of replicas; `gh-ost` will throttle when any of the given replicas lag beyond [`--max-lag-millis`](#max-lag-millis). The list can be queried and updated dynamically via [interactive commands](interactive-commands.md)
@@ -69,6 +69,7 @@ The following variables are available on all hooks:
 - `GH_OST_INSPECTED_HOST`
 - `GH_OST_EXECUTING_HOST`
 - `GH_OST_HOOKS_HINT` - copy of `--hooks-hint` value
+- `GH_OST_DRY_RUN` - whether or not the `gh-ost` run is a dry run

 The following variables are available on particular hooks:
@@ -1,12 +1,12 @@
 # Shared key

-A requirement for a migration to run is that the two _before_ and _after_ tables have a shared unique key. This is to elaborate and illustrate on the matter.
+gh-ost requires for every migration that both the _before_ and _after_ versions of the table share the same unique not-null key columns. This page illustrates this rule.

 ### Introduction

-Consider a classic, simple migration. The table is any normal:
+Consider a simple migration, with a normal table,

-```
+```sql
 CREATE TABLE tbl (
   id bigint unsigned not null auto_increment,
   data varchar(255),
@@ -15,54 +15,72 @@ CREATE TABLE tbl (
 )
 ```

-And the migration is a simple `add column ts timestamp`.
-
-In such migration there is no change in indexes, and in particular no change to any unique key, and specifically no change to the `PRIMARY KEY`. To run this migration, `gh-ost` would iterate the `tbl` table using the primary key, copy rows from `tbl` to the _ghost_ table `_tbl_gho` by order of `id`, and then apply binlog events onto `_tbl_gho`.
-
-Applying the binlog events assumes the existence of a shared unique key. For example, an `UPDATE` statement in the binary log translates to a `REPLACE` statement which `gh-ost` applies to the _ghost_ table. Such a statement expects to add or replace an existing row based on given row data. In particular, it would _replace_ an existing row if a unique key violation is met.
-
-So `gh-ost` correlates `tbl` and `_tbl_gho` rows using a unique key. In the above example that would be the `PRIMARY KEY`.
-
-### Rules
-
-There must be a shared set of not-null columns for which there is a unique constraint in both the original table and the migration (_ghost_) table.
-
-### Interpreting the rules
-
-The same columns must be covered by a unique key in both tables. This doesn't have to be the `PRIMARY KEY`. This doesn't have to be a key of the same name.
-
-Upon migration, `gh-ost` inspects both the original and _ghost_ table and attempts to find at least one such unique key (or rather, a set of columns) that is shared between the two. Typically this would just be the `PRIMARY KEY`, but sometimes you may change the `PRIMARY KEY` itself, in which case `gh-ost` will look for other options.
-
-`gh-ost` expects unique keys where no `NULL` values are found, i.e. all columns covered by the unique key are defined as `NOT NULL`. This is implicitly true for `PRIMARY KEY`s. If no such key can be found, `gh-ost` bails out. In the event there is no such key, but you happen to _know_ your columns have no `NULL` values even though they're `NULL`-able, you may take responsibility and pass `--allow-nullable-unique-key`. The migration will run well as long as no `NULL` values are found in the unique key's columns. Any actual `NULL`s may corrupt the migration.
-
-### Examples: allowed and not allowed
+and the migration `add column ts timestamp`. The _after_ table version would be:
+
+```sql
+CREATE TABLE tbl (
+  id bigint unsigned not null auto_increment,
+  data varchar(255),
+  more_data int,
+  ts timestamp,
+  PRIMARY KEY(id)
+)
+```
+
+(This is also the definition of the _ghost_ table, except that that table would be called `_tbl_gho`).
+
+In this migration, the _before_ and _after_ versions contain the same unique not-null key (the PRIMARY KEY). To run this migration, `gh-ost` would iterate through the `tbl` table using the primary key, copy rows from `tbl` to the _ghost_ table `_tbl_gho` in primary key order, while also applying the binlog event writes from `tbl` onto `_tbl_gho`.
+
+The applying of the binlog events is what requires the shared unique key. For example, an `UPDATE` statement to `tbl` translates to a `REPLACE` statement which `gh-ost` applies to `_tbl_gho`. A `REPLACE` statement expects to insert or replace an existing row based on its row's values and the table's unique key constraints. In particular, if inserting that row would result in a unique key violation (e.g., a row with that primary key already exists), it would _replace_ that existing row with the new values.
+
+So `gh-ost` correlates `tbl` and `_tbl_gho` rows one to one using a unique key. In the above example that would be the `PRIMARY KEY`.
+
+### Interpreting the rule
+
+The _before_ and _after_ versions of the table share the same unique not-null key, but:
+- the key doesn't have to be the PRIMARY KEY
+- the key can have a different name between the _before_ and _after_ versions (e.g., renamed via DROP INDEX and ADD INDEX) so long as it contains the exact same column(s)
+
+At the start of the migration, `gh-ost` inspects both the original and _ghost_ table it created, and attempts to find at least one such unique key (or rather, a set of columns) that is shared between the two. Typically this would just be the `PRIMARY KEY`, but some tables don't have primary keys, or sometimes it is the primary key that is being modified by the migration. In these cases `gh-ost` will look for other options.
+
+`gh-ost` expects unique keys where no `NULL` values are found, i.e. all columns contained in the unique key are defined as `NOT NULL`. This is implicitly true for primary keys. If no such key can be found, `gh-ost` bails out.
+
+If the table contains a unique key with nullable columns, but you know your columns contain no `NULL` values, use the `--allow-nullable-unique-key` option. The migration will run well as long as no `NULL` values are found in the unique key's columns. **Any actual `NULL`s may corrupt the migration.**
+
+### Examples: Allowed and Not Allowed

 ```sql
 create table some_table (
-  id int auto_increment,
+  id int not null auto_increment,
   ts timestamp,
   name varchar(128) not null,
   owner_id int not null,
-  loc_id int,
+  loc_id int not null,
   primary key(id),
   unique key name_uidx(name)
 )
 ```

-Following are examples of migrations that are _good to run_:
+Note the two unique, not-null indexes: the primary key and `name_uidx`.
+
+Allowed migrations:

 - `add column i int`
-- `add key owner_idx(owner_id)`
-- `add unique key owner_name_idx(owner_id, name)` - though you need to make sure to not write conflicting rows while this migration runs
+- `add key owner_idx (owner_id)`
+- `add unique key owner_name_idx (owner_id, name)` - **be careful not to write conflicting rows while this migration runs**
 - `drop key name_uidx` - `primary key` is shared between the tables
-- `drop primary key, add primary key(owner_id, loc_id)` - `name_uidx` is shared between the tables and is used for migration
-- `change id bigint unsigned` - the `primary key` is used. The change of type still makes the `primary key` workable.
-- `drop primary key, drop key name_uidx, create primary key(name), create unique key id_uidx(id)` - swapping the two keys. `gh-ost` is still happy because `id` is still unique in both tables. So is `name`.
+- `drop primary key, add primary key(owner_id, loc_id)` - `name_uidx` is shared between the tables
+- `change id bigint unsigned not null auto_increment` - the `primary key` changes datatype but not value, and can be used
+- `drop primary key, drop key name_uidx, add primary key(name), add unique key id_uidx(id)` - swapping the two keys. Either `id` or `name` could be used

-Following are examples of migrations that _cannot run_:
+Not allowed:

-- `drop primary key, drop key name_uidx` - no unique key to _ghost_ table, so clearly cannot run
-- `drop primary key, drop key name_uidx, create primary key(name, owner_id)` - no shared columns to both tables. Even though `name` exists in the _ghost_ table's `primary key`, it is only part of the key and in itself does not guarantee uniqueness in the _ghost_ table.
+- `drop primary key, drop key name_uidx` - the _ghost_ table has no unique key
+- `drop primary key, drop key name_uidx, create primary key(name, owner_id)` - no shared columns to the unique keys on both tables. Even though `name` exists in the _ghost_ table's `primary key`, it is only part of the key and in itself does not guarantee uniqueness in the _ghost_ table.

-Also, you cannot run a migration on a table that doesn't have some form of `unique key` in the first place, such as `some_table (id int, ts timestamp)`
+### Workarounds
+
+If you need to change your primary key or only not-null unique index to use different columns, you will want to do it as two separate migrations:
+1. `ADD UNIQUE KEY temp_pk (temp_pk_column,...)`
+1. `DROP PRIMARY KEY, DROP KEY temp_pk, ADD PRIMARY KEY (temp_pk_column,...)`
@@ -46,6 +46,14 @@ Note that you may dynamically change both `--max-lag-millis` and the `throttle-c

 An example query could be: `--throttle-query="select hour(now()) between 8 and 17"`, which implies throttling auto-starts at `8:00am` and migration auto-resumes at `6:00pm`.

+#### HTTP Throttle
+
+The `--throttle-http` flag allows for throttling via HTTP. Every 100ms `gh-ost` issues a `HEAD` request to the provided URL. If the response status code is not `200`, throttling will kick in until a `200` response status code is returned.
+
+If no URL is provided, or the URL provided doesn't contain the scheme, then the HTTP check will be disabled. For example `--throttle-http="http://1.2.3.4:6789/throttle"` will enable the HTTP check/throttling, but `--throttle-http="1.2.3.4:6789/throttle"` will not.
+
+The URL can be queried and updated dynamically via the [interactive interface](interactive-commands.md).
+
 #### Manual control

 In addition to the above, you are able to take control and throttle the operation any time you like.
@@ -40,12 +40,13 @@ func NewGoMySQLReader(migrationContext *base.MigrationContext) (binlogReader *Go
 	serverId := uint32(migrationContext.ReplicaServerId)

 	binlogSyncerConfig := replication.BinlogSyncerConfig{
-		ServerID: serverId,
-		Flavor:   "mysql",
-		Host:     binlogReader.connectionConfig.Key.Hostname,
-		Port:     uint16(binlogReader.connectionConfig.Key.Port),
-		User:     binlogReader.connectionConfig.User,
-		Password: binlogReader.connectionConfig.Password,
+		ServerID:   serverId,
+		Flavor:     "mysql",
+		Host:       binlogReader.connectionConfig.Key.Hostname,
+		Port:       uint16(binlogReader.connectionConfig.Key.Port),
+		User:       binlogReader.connectionConfig.User,
+		Password:   binlogReader.connectionConfig.Password,
+		UseDecimal: true,
 	}
 	binlogReader.binlogSyncer = replication.NewBinlogSyncer(binlogSyncerConfig)
@@ -64,6 +64,7 @@ func (this *HooksExecutor) applyEnvironmentVariables(extraVariables ...string) [
 	env = append(env, fmt.Sprintf("GH_OST_INSPECTED_HOST=%s", this.migrationContext.GetInspectorHostname()))
 	env = append(env, fmt.Sprintf("GH_OST_EXECUTING_HOST=%s", this.migrationContext.Hostname))
 	env = append(env, fmt.Sprintf("GH_OST_HOOKS_HINT=%s", this.migrationContext.HooksHintMessage))
+	env = append(env, fmt.Sprintf("GH_OST_DRY_RUN=%t", this.migrationContext.Noop))

 	for _, variable := range extraVariables {
 		env = append(env, variable)
@@ -173,8 +173,7 @@ func (this *Inspector) inspectOriginalAndGhostTables() (err error) {
 	// This additional step looks at which columns are unsigned. We could have merged this within
 	// the `getTableColumns()` function, but it's a later patch and introduces some complexity; I feel
 	// comfortable in doing this as a separate step.
-	this.applyColumnTypes(this.migrationContext.DatabaseName, this.migrationContext.OriginalTableName, this.migrationContext.OriginalTableColumns, this.migrationContext.SharedColumns)
-	this.applyColumnTypes(this.migrationContext.DatabaseName, this.migrationContext.OriginalTableName, &this.migrationContext.UniqueKey.Columns)
+	this.applyColumnTypes(this.migrationContext.DatabaseName, this.migrationContext.OriginalTableName, this.migrationContext.OriginalTableColumns, this.migrationContext.SharedColumns, &this.migrationContext.UniqueKey.Columns)
 	this.applyColumnTypes(this.migrationContext.DatabaseName, this.migrationContext.GetGhostTableName(), this.migrationContext.GhostTableColumns, this.migrationContext.MappedSharedColumns)

 	for i := range this.migrationContext.SharedColumns.Columns() {
@@ -233,6 +232,9 @@ func (this *Inspector) validateGrants() error {
 		if strings.Contains(grant, fmt.Sprintf("GRANT ALL PRIVILEGES ON `%s`.*", this.migrationContext.DatabaseName)) {
 			foundDBAll = true
 		}
+		if strings.Contains(grant, fmt.Sprintf("GRANT ALL PRIVILEGES ON `%s`.*", strings.Replace(this.migrationContext.DatabaseName, "_", "\\_", -1))) {
+			foundDBAll = true
+		}
 		if base.StringContainsAll(grant, `ALTER`, `CREATE`, `DELETE`, `DROP`, `INDEX`, `INSERT`, `LOCK TABLES`, `SELECT`, `TRIGGER`, `UPDATE`, ` ON *.*`) {
 			foundDBAll = true
 		}
@@ -552,44 +554,35 @@ func (this *Inspector) applyColumnTypes(databaseName, tableName string, columnsL
 	err := sqlutils.QueryRowsMap(this.db, query, func(m sqlutils.RowMap) error {
 		columnName := m.GetString("COLUMN_NAME")
 		columnType := m.GetString("COLUMN_TYPE")
-		if strings.Contains(columnType, "unsigned") {
-			for _, columnsList := range columnsLists {
-				columnsList.SetUnsigned(columnName)
-			}
-		}
-		if strings.Contains(columnType, "mediumint") {
-			for _, columnsList := range columnsLists {
-				columnsList.GetColumn(columnName).Type = sql.MediumIntColumnType
-			}
-		}
-		if strings.Contains(columnType, "timestamp") {
-			for _, columnsList := range columnsLists {
-				columnsList.GetColumn(columnName).Type = sql.TimestampColumnType
-			}
-		}
-		if strings.Contains(columnType, "datetime") {
-			for _, columnsList := range columnsLists {
-				columnsList.GetColumn(columnName).Type = sql.DateTimeColumnType
-			}
-		}
-		if strings.Contains(columnType, "json") {
-			for _, columnsList := range columnsLists {
-				columnsList.GetColumn(columnName).Type = sql.JSONColumnType
-			}
-		}
-		if strings.Contains(columnType, "float") {
-			for _, columnsList := range columnsLists {
-				columnsList.GetColumn(columnName).Type = sql.FloatColumnType
-			}
-		}
-		if strings.HasPrefix(columnType, "enum") {
-			for _, columnsList := range columnsLists {
-				columnsList.GetColumn(columnName).Type = sql.EnumColumnType
-			}
-		}
-		if charset := m.GetString("CHARACTER_SET_NAME"); charset != "" {
-			for _, columnsList := range columnsLists {
-				columnsList.SetCharset(columnName, charset)
-			}
-		}
+		for _, columnsList := range columnsLists {
+			column := columnsList.GetColumn(columnName)
+			if column == nil {
+				continue
+			}
+
+			if strings.Contains(columnType, "unsigned") {
+				column.IsUnsigned = true
+			}
+			if strings.Contains(columnType, "mediumint") {
+				column.Type = sql.MediumIntColumnType
+			}
+			if strings.Contains(columnType, "timestamp") {
+				column.Type = sql.TimestampColumnType
+			}
+			if strings.Contains(columnType, "datetime") {
+				column.Type = sql.DateTimeColumnType
+			}
+			if strings.Contains(columnType, "json") {
+				column.Type = sql.JSONColumnType
+			}
+			if strings.Contains(columnType, "float") {
+				column.Type = sql.FloatColumnType
+			}
+			if strings.HasPrefix(columnType, "enum") {
+				column.Type = sql.EnumColumnType
+			}
+			if charset := m.GetString("CHARACTER_SET_NAME"); charset != "" {
+				column.Charset = charset
+			}
+		}
 		return nil
@@ -1087,24 +1087,33 @@ func (this *Migrator) iterateChunks() error {
 		log.Debugf("No rows found in table. Rowcopy will be implicitly empty")
 		return terminateRowIteration(nil)
 	}

+	var hasNoFurtherRangeFlag int64
 	// Iterate per chunk:
 	for {
-		if atomic.LoadInt64(&this.rowCopyCompleteFlag) == 1 {
+		if atomic.LoadInt64(&this.rowCopyCompleteFlag) == 1 || atomic.LoadInt64(&hasNoFurtherRangeFlag) == 1 {
 			// Done
 			// There's another such check down the line
 			return nil
 		}
 		copyRowsFunc := func() error {
-			if atomic.LoadInt64(&this.rowCopyCompleteFlag) == 1 {
+			if atomic.LoadInt64(&this.rowCopyCompleteFlag) == 1 || atomic.LoadInt64(&hasNoFurtherRangeFlag) == 1 {
 				// Done.
 				// There's another such check down the line
 				return nil
 			}
-			hasFurtherRange, err := this.applier.CalculateNextIterationRangeEndValues()
-			if err != nil {
+
+			// When hasFurtherRange is false, the original table might be write-locked and CalculateNextIterationRangeEndValues would hang forever
+			hasFurtherRange := false
+			if err := this.retryOperation(func() (e error) {
+				hasFurtherRange, e = this.applier.CalculateNextIterationRangeEndValues()
+				return e
+			}); err != nil {
 				return terminateRowIteration(err)
 			}
 			if !hasFurtherRange {
+				atomic.StoreInt64(&hasNoFurtherRangeFlag, 1)
 				return terminateRowIteration(nil)
 			}
 			// Copy task:
@@ -1122,7 +1131,7 @@ func (this *Migrator) iterateChunks() error {
 		}
 		_, rowsAffected, _, err := this.applier.ApplyIterationInsertQuery()
 		if err != nil {
-			return terminateRowIteration(err)
+			return err // wrapping call will retry
 		}
 		atomic.AddInt64(&this.migrationContext.TotalRowsCopied, rowsAffected)
 		atomic.AddInt64(&this.migrationContext.Iteration, 1)
localtests/bit-add/create.sql (new file)
@@ -0,0 +1,20 @@
+drop table if exists gh_ost_test;
+create table gh_ost_test (
+  id int auto_increment,
+  i int not null,
+  primary key(id)
+) auto_increment=1;
+
+drop event if exists gh_ost_test;
+delimiter ;;
+create event gh_ost_test
+  on schedule every 1 second
+  starts current_timestamp
+  ends current_timestamp + interval 60 second
+  on completion not preserve
+  enable
+  do
+begin
+  insert into gh_ost_test values (null, 11);
+  insert into gh_ost_test values (null, 13);
+end ;;

localtests/bit-add/extra_args (new file)
@@ -0,0 +1 @@
+--alter="add column is_good bit null default 0"

localtests/bit-add/ghost_columns (new file)
@@ -0,0 +1 @@
+id, i

localtests/bit-add/orig_columns (new file)
@@ -0,0 +1 @@
+id, i

localtests/bit-dml/create.sql (new file)
@@ -0,0 +1,24 @@
+drop table if exists gh_ost_test;
+create table gh_ost_test (
+  id int auto_increment,
+  i int not null,
+  is_good bit null default 0,
+  primary key(id)
+) auto_increment=1;
+
+drop event if exists gh_ost_test;
+delimiter ;;
+create event gh_ost_test
+  on schedule every 1 second
+  starts current_timestamp
+  ends current_timestamp + interval 60 second
+  on completion not preserve
+  enable
+  do
+begin
+  insert into gh_ost_test values (null, 11, 0);
+  insert into gh_ost_test values (null, 13, 1);
+  insert into gh_ost_test values (null, 17, 1);
+
+  update gh_ost_test set is_good=0 where i=13 order by id desc limit 1;
+end ;;

localtests/bit-dml/extra_args (new file)
@@ -0,0 +1 @@
+--alter="modify column is_good bit not null default 0" --approve-renamed-columns

localtests/convert-utf8mb4/create.sql (new file)
@@ -0,0 +1,31 @@
+drop table if exists gh_ost_test;
+create table gh_ost_test (
+  id int auto_increment,
+  t varchar(128) charset utf8 collate utf8_general_ci,
+  tl varchar(128) charset latin1 not null,
+  ta varchar(128) charset ascii not null,
+  primary key(id)
+) auto_increment=1;
+
+insert into gh_ost_test values (null, 'átesting');
+
+
+insert into gh_ost_test values (null, 'Hello world, Καλημέρα κόσμε, コンニチハ', 'átesting0', 'initial');
+
+drop event if exists gh_ost_test;
+delimiter ;;
+create event gh_ost_test
+  on schedule every 1 second
+  starts current_timestamp
+  ends current_timestamp + interval 60 second
+  on completion not preserve
+  enable
+  do
+begin
+  insert into gh_ost_test values (null, md5(rand()), 'átesting-a', 'a');
+  insert into gh_ost_test values (null, 'novo proprietário', 'átesting-b', 'b');
+  insert into gh_ost_test values (null, '2H₂ + O₂ ⇌ 2H₂O, R = 4.7 kΩ, ⌀ 200 mm', 'átesting-c', 'c');
+  insert into gh_ost_test values (null, 'usuário', 'átesting-x', 'x');
+
+  delete from gh_ost_test where ta='x' order by id desc limit 1;
+end ;;

localtests/convert-utf8mb4/extra_args (new file)
@@ -0,0 +1 @@
+--alter='convert to character set utf8mb4'
@@ -1,3 +1,5 @@
+set session time_zone='+00:00';
+
 drop table if exists gh_ost_test;
 create table gh_ost_test (
   id int auto_increment,
@@ -7,6 +9,7 @@ create table gh_ost_test (
   primary key(id)
 ) auto_increment=1;

+set session time_zone='+00:00';
 insert into gh_ost_test values (1, '0000-00-00 00:00:00', now(), 0);

 drop event if exists gh_ost_test;
@@ -19,5 +22,6 @@ create event gh_ost_test
   enable
   do
 begin
+  set session time_zone='+00:00';
   update gh_ost_test set counter = counter + 1 where id = 1;
 end ;;
localtests/decimal/create.sql (new file)
@@ -0,0 +1,23 @@
+drop table if exists gh_ost_test;
+create table gh_ost_test (
+  id int auto_increment,
+  dec0 decimal(65,30) unsigned NOT NULL DEFAULT '0.000000000000000000000000000000',
+  dec1 decimal(65,30) unsigned NOT NULL DEFAULT '1.000000000000000000000000000000',
+  primary key(id)
+) auto_increment=1;
+
+drop event if exists gh_ost_test;
+delimiter ;;
+create event gh_ost_test
+  on schedule every 1 second
+  starts current_timestamp
+  ends current_timestamp + interval 60 second
+  on completion not preserve
+  enable
+  do
+begin
+  insert into gh_ost_test values (null, 0.0, 0.0);
+  insert into gh_ost_test values (null, 2.0, 4.0);
+  insert into gh_ost_test values (null, 99999999999999999999999999999999999.000, 6.0);
+  update gh_ost_test set dec1=4.5 where dec2=4.0 order by id desc limit 1;
+end ;;
@@ -14,11 +14,13 @@ ghost_binary=""
 exec_command_file=/tmp/gh-ost-test.bash
 orig_content_output_file=/tmp/gh-ost-test.orig.content.csv
 ghost_content_output_file=/tmp/gh-ost-test.ghost.content.csv
+throttle_flag_file=/tmp/gh-ost-test.ghost.throttle.flag

 master_host=
 master_port=
 replica_host=
 replica_port=
+original_sql_mode=

 OPTIND=1
 while getopts "b:" OPTION
@@ -45,6 +47,8 @@ verify_master_and_replica() {
 		echo "Cannot enable event_scheduler on master"
 		exit 1
 	fi
+	original_sql_mode="$(gh-ost-test-mysql-master -e "select @@global.sql_mode" -s -s)"
+	echo "sql_mode on master is ${original_sql_mode}"

 	if [ "$(gh-ost-test-mysql-replica -e "select 1" -ss)" != "1" ] ; then
 		echo "Cannot verify gh-ost-test-mysql-replica"
@@ -87,7 +91,6 @@ start_replication() {
 test_single() {
 	local test_name
 	test_name="$1"
-	original_sql_mode="$(gh-ost-test-mysql-master -e "select @@global.sql_mode" -s -s)"

 	if [ -f $tests_path/$test_name/ignore_versions ] ; then
 		ignore_versions=$(cat $tests_path/$test_name/ignore_versions)
@@ -108,7 +111,7 @@ test_single() {
 		gh-ost-test-mysql-master --default-character-set=utf8mb4 test -e "set @@global.sql_mode='$(cat $tests_path/$test_name/sql_mode)'"
 		gh-ost-test-mysql-replica --default-character-set=utf8mb4 test -e "set @@global.sql_mode='$(cat $tests_path/$test_name/sql_mode)'"
 	fi

 	gh-ost-test-mysql-master --default-character-set=utf8mb4 test < $tests_path/$test_name/create.sql

 	extra_args=""
@@ -145,6 +148,7 @@ test_single() {
 		--initially-drop-old-table \
 		--initially-drop-ghost-table \
 		--throttle-query='select timestampdiff(second, min(last_update), now()) < 5 from _gh_ost_test_ghc' \
+		--throttle-flag-file=$throttle_flag_file \
 		--serve-socket-file=/tmp/gh-ost.test.sock \
 		--initially-drop-socket-file \
 		--test-on-replica \
@@ -4,7 +4,7 @@ PREFERRED_GO_VERSION=go1.9.2
 SUPPORTED_GO_VERSIONS='go1.[89]'

 GO_PKG_DARWIN=${PREFERRED_GO_VERSION}.darwin-amd64.pkg
-GO_PKG_DARWIN_SHA=73fd5840d55f5566d8db6c0ffdd187577e8ebe650c783f68bd27cbf95bde6743
+GO_PKG_DARWIN_SHA=8b4f6ae6deae1150d2e341d02c247fd18a99af387516540eeb84702ffd76d3a1

 GO_PKG_LINUX=${PREFERRED_GO_VERSION}.linux-amd64.tar.gz
 GO_PKG_LINUX_SHA=de874549d9a8d8d8062be05808509c09a88a248e77ec14eb77453530829ac02b
vendor/github.com/siddontang/go-mysql/.travis.yml (generated, vendored)
@@ -1,32 +1,34 @@
 language: go

 go:
-- 1.6
-- 1.7
+- "1.9"
+- "1.10"

 dist: trusty
 sudo: required
 addons:
   apt:
     sources:
     - mysql-5.7-trusty
     packages:
-    - mysql-server-5.6
-    - mysql-client-core-5.6
-    - mysql-client-5.6
+    - mysql-server
+    - mysql-client

+before_install:
+- sudo mysql -e "use mysql; update user set authentication_string=PASSWORD('') where User='root'; update user set plugin='mysql_native_password';FLUSH PRIVILEGES;"
+- sudo mysql_upgrade
+
 before_script:
 # stop mysql and use row-based format binlog
-- "sudo /etc/init.d/mysql stop || true"
+- "sudo service mysql stop || true"
 - "echo '[mysqld]' | sudo tee /etc/mysql/conf.d/replication.cnf"
 - "echo 'server-id=1' | sudo tee -a /etc/mysql/conf.d/replication.cnf"
 - "echo 'log-bin=mysql' | sudo tee -a /etc/mysql/conf.d/replication.cnf"
 - "echo 'binlog-format = row' | sudo tee -a /etc/mysql/conf.d/replication.cnf"

 # Start mysql (avoid errors to have logs)
-- "sudo /etc/init.d/mysql start || true"
+- "sudo service mysql start || true"
+- "sudo tail -1000 /var/log/syslog"

 - mysql -e "CREATE DATABASE IF NOT EXISTS test;" -uroot

 script:
 - make test
vendor/github.com/siddontang/go-mysql/Gopkg.lock (generated, vendored, new file)
@@ -0,0 +1,78 @@
+# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.
+
+
+[[projects]]
+  name = "github.com/BurntSushi/toml"
+  packages = ["."]
+  revision = "b26d9c308763d68093482582cea63d69be07a0f0"
+  version = "v0.3.0"
+
+[[projects]]
+  branch = "master"
+  name = "github.com/go-sql-driver/mysql"
+  packages = ["."]
+  revision = "99ff426eb706cffe92ff3d058e168b278cabf7c7"
+
+[[projects]]
+  branch = "master"
+  name = "github.com/jmoiron/sqlx"
+  packages = [
+    ".",
+    "reflectx"
+  ]
+  revision = "2aeb6a910c2b94f2d5eb53d9895d80e27264ec41"
+
+[[projects]]
+  branch = "master"
+  name = "github.com/juju/errors"
+  packages = ["."]
+  revision = "c7d06af17c68cd34c835053720b21f6549d9b0ee"
+
+[[projects]]
+  branch = "master"
+  name = "github.com/pingcap/check"
+  packages = ["."]
+  revision = "1c287c953996ab3a0bf535dba9d53d809d3dc0b6"
+
+[[projects]]
+  name = "github.com/satori/go.uuid"
+  packages = ["."]
+  revision = "f58768cc1a7a7e77a3bd49e98cdd21419399b6a3"
+  version = "v1.2.0"
+
+[[projects]]
+  name = "github.com/shopspring/decimal"
+  packages = ["."]
+  revision = "cd690d0c9e2447b1ef2a129a6b7b49077da89b8e"
+  version = "1.1.0"
+
+[[projects]]
+  branch = "master"
+  name = "github.com/siddontang/go"
+  packages = [
+    "hack",
+    "sync2"
+  ]
+  revision = "2b7082d296ba89ae7ead0f977816bddefb65df9d"
+
+[[projects]]
+  branch = "master"
+  name = "github.com/siddontang/go-log"
+  packages = [
+    "log",
+    "loggers"
+  ]
+  revision = "a4d157e46fa3e08b7e7ff329af341fa3ff86c02c"
+
+[[projects]]
+  name = "google.golang.org/appengine"
+  packages = ["cloudsql"]
+  revision = "b1f26356af11148e710935ed1ac8a7f5702c7612"
+  version = "v1.1.0"
+
+[solve-meta]
+  analyzer-name = "dep"
+  analyzer-version = 1
+  inputs-digest = "a1f9939938a58551bbb3f19411c9d1386995d36296de6f6fb5d858f5923db85e"
+  solver-name = "gps-cdcl"
+  solver-version = 1
56 vendor/github.com/siddontang/go-mysql/Gopkg.toml generated vendored Normal file
@ -0,0 +1,56 @@
# Gopkg.toml example
#
# Refer to https://golang.github.io/dep/docs/Gopkg.toml.html
# for detailed Gopkg.toml documentation.
#
# required = ["github.com/user/thing/cmd/thing"]
# ignored = ["github.com/user/project/pkgX", "bitbucket.org/user/project/pkgA/pkgY"]
#
# [[constraint]]
#   name = "github.com/user/project"
#   version = "1.0.0"
#
# [[constraint]]
#   name = "github.com/user/project2"
#   branch = "dev"
#   source = "github.com/myfork/project2"
#
# [[override]]
#   name = "github.com/x/y"
#   version = "2.4.0"
#
# [prune]
#   non-go = false
#   go-tests = true
#   unused-packages = true


[[constraint]]
  name = "github.com/BurntSushi/toml"
  version = "v0.3.0"

[[constraint]]
  name = "github.com/go-sql-driver/mysql"
  branch = "master"

[[constraint]]
  branch = "master"
  name = "github.com/juju/errors"

[[constraint]]
  name = "github.com/satori/go.uuid"
  version = "v1.2.0"

[[constraint]]
  name = "github.com/shopspring/decimal"
  version = "v1.1.0"

[[constraint]]
  branch = "master"
  name = "github.com/siddontang/go"

[prune]
  go-tests = true
  unused-packages = true
  non-go = true
23 vendor/github.com/siddontang/go-mysql/Makefile generated vendored
@ -1,33 +1,14 @@
all: build

build:
	rm -rf vendor && ln -s _vendor/vendor vendor
	go build -o bin/go-mysqlbinlog cmd/go-mysqlbinlog/main.go
	go build -o bin/go-mysqldump cmd/go-mysqldump/main.go
	go build -o bin/go-canal cmd/go-canal/main.go
	go build -o bin/go-binlogparser cmd/go-binlogparser/main.go
	rm -rf vendor

test:
	rm -rf vendor && ln -s _vendor/vendor vendor
	go test --race -timeout 2m ./...
	rm -rf vendor

clean:
	go clean -i ./...
	@rm -rf ./bin

update_vendor:
	which glide >/dev/null || curl https://glide.sh/get | sh
	which glide-vc || go get -v -u github.com/sgotti/glide-vc
	rm -r vendor && mv _vendor/vendor vendor || true
	rm -rf _vendor
ifdef PKG
	glide get --strip-vendor --skip-test ${PKG}
else
	glide update --strip-vendor --skip-test
endif
	@echo "removing test files"
	glide vc --only-code --no-tests
	mkdir -p _vendor
	mv vendor _vendor/vendor
	@rm -rf ./bin
90 vendor/github.com/siddontang/go-mysql/README.md generated vendored
@ -15,7 +15,7 @@ import (
	"github.com/siddontang/go-mysql/replication"
	"os"
)
// Create a binlog syncer with a unique server id, the server id must be different from other MySQL's.
// flavor is mysql or mariadb
cfg := replication.BinlogSyncerConfig {
	ServerID: 100,
@ -27,7 +27,7 @@ cfg := replication.BinlogSyncerConfig {
}
syncer := replication.NewBinlogSyncer(cfg)

// Start sync with sepcified binlog file and position
// Start sync with specified binlog file and position
streamer, _ := syncer.StartSync(mysql.Position{binlogFile, binlogPos})

// or you can start a gtid replication like
@ -44,7 +44,7 @@ for {
// or we can use a timeout context
for {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	e, _ := s.GetEvent(ctx)
	ev, err := s.GetEvent(ctx)
	cancel()

	if err == context.DeadlineExceeded {
@ -85,13 +85,13 @@ Schema: test
Query: DROP TABLE IF EXISTS `test_replication` /* generated by server */
```

## Canal

Canal is a package that can sync your MySQL into everywhere, like Redis, Elasticsearch.

First, canal will dump your MySQL data then sync changed data using binlog incrementally.

You must use ROW format for binlog, full binlog row image is preferred, because we may meet some errors when primary key changed in update for minimal or noblob row image.

A simple example:

@ -105,30 +105,31 @@ cfg.Dump.Tables = []string{"canal_test"}

c, err := NewCanal(cfg)

type myRowsEventHandler struct {
type MyEventHandler struct {
	DummyEventHandler
}

func (h *myRowsEventHandler) Do(e *RowsEvent) error {
func (h *MyEventHandler) OnRow(e *RowsEvent) error {
	log.Infof("%s %v\n", e.Action, e.Rows)
	return nil
}

func (h *myRowsEventHandler) String() string {
	return "myRowsEventHandler"
func (h *MyEventHandler) String() string {
	return "MyEventHandler"
}

// Register a handler to handle RowsEvent
c.RegRowsEventHandler(&MyRowsEventHandler{})
c.SetEventHandler(&MyEventHandler{})

// Start canal
c.Start()
```

You can see [go-mysql-elasticsearch](https://github.com/siddontang/go-mysql-elasticsearch) for how to sync MySQL data into Elasticsearch.

## Client

Client package supports a simple MySQL connection driver which you can use it to communicate with MySQL server.

### Example

@ -137,9 +138,16 @@ import (
	"github.com/siddontang/go-mysql/client"
)

// Connect MySQL at 127.0.0.1:3306, with user root, an empty passowrd and database test
// Connect MySQL at 127.0.0.1:3306, with user root, an empty password and database test
conn, _ := client.Connect("127.0.0.1:3306", "root", "", "test")

// Or to use SSL/TLS connection if MySQL server supports TLS
//conn, _ := client.Connect("127.0.0.1:3306", "root", "", "test", func(c *Conn) {c.UseSSL(true)})

// or to set your own client-side certificates for identity verification for security
//tlsConfig := NewClientTLSConfig(caPem, certPem, keyPem, false, "your-server-name")
//conn, _ := client.Connect("127.0.0.1:3306", "root", "", "test", func(c *Conn) {c.SetTLSConfig(tlsConfig)})

conn.Ping()

// Insert
@ -153,13 +161,20 @@ r, _ := conn.Execute(`select id, name from table where id = 1`)

// Handle resultset
v, _ := r.GetInt(0, 0)
v, _ = r.GetIntByName(0, "id")
```

Tested MySQL versions for the client include:
- 5.5.x
- 5.6.x
- 5.7.x
- 8.0.x

## Server

Server package supplies a framework to implement a simple MySQL server which can handle the packets from the MySQL client.
You can use it to build your own MySQL proxy.
Server package supplies a framework to implement a simple MySQL server which can handle the packets from the MySQL client.
You can use it to build your own MySQL proxy. The server connection is compatible with MySQL 5.5, 5.6, 5.7, and 8.0 versions,
so that most MySQL clients should be able to connect to the Server without modifications.

### Example

@ -173,42 +188,53 @@ l, _ := net.Listen("tcp", "127.0.0.1:4000")

c, _ := l.Accept()

// Create a connection with user root and an empty passowrd
// We only an empty handler to handle command too
// Create a connection with user root and an empty password.
// You can use your own handler to handle command here.
conn, _ := server.NewConn(c, "root", "", server.EmptyHandler{})

for {
	conn.HandleCommand()
}
```

Another shell

```
mysql -h127.0.0.1 -P4000 -uroot -p
//Becuase empty handler does nothing, so here the MySQL client can only connect the proxy server. :-)
```

> ```NewConn()``` will use default server configurations:
> 1. automatically generate default server certificates and enable TLS/SSL support.
> 2. support three mainstream authentication methods **'mysql_native_password'**, **'caching_sha2_password'**, and **'sha256_password'**
> and use **'mysql_native_password'** as default.
> 3. use an in-memory user credential provider to store user and password.
>
> To customize server configurations, use ```NewServer()``` and create connection via ```NewCustomizedConn()```.

## Failover

Failover supports to promote a new master and let other slaves replicate from it automatically when the old master was down.

Failover supports MySQL >= 5.6.9 with GTID mode, if you use lower version, e.g, MySQL 5.0 - 5.5, please use [MHA](http://code.google.com/p/mysql-master-ha/) or [orchestrator](https://github.com/outbrain/orchestrator).

At the same time, Failover supports MariaDB >= 10.0.9 with GTID mode too.

Why only GTID? Supporting failover with no GTID mode is very hard, because slave can not find the proper binlog filename and position with the new master.
Although there are many companies use MySQL 5.0 - 5.5, I think upgrade MySQL to 5.6 or higher is easy.

## Driver

Driver is the package that you can use go-mysql with go database/sql like other drivers. A simple example:

```
package main

import (
	"database/sql"

-	"github.com/siddontang/go-mysql/driver"
	_ "github.com/siddontang/go-mysql/driver"
)

func main() {
@ -221,9 +247,17 @@ func main() {

We pass all tests in https://github.com/bradfitz/go-sql-test using go-mysql driver. :-)

## Donate

If you like the project and want to buy me a cola, you can through:

|PayPal|微信|
|------|---|
|[![](https://www.paypalobjects.com/webstatic/paypalme/images/pp_logo_small.png)](https://paypal.me/siddontang)|[![](https://github.com/siddontang/blog/blob/master/donate/weixin.png)|

## Feedback

go-mysql is still in development, your feedback is very welcome.

Gmail: siddontang@gmail.com
112 vendor/github.com/siddontang/go-mysql/_vendor/vendor/github.com/go-sql-driver/mysql/rows.go generated vendored
@ -1,112 +0,0 @@
// Go MySQL Driver - A MySQL-Driver for Go's database/sql package
//
// Copyright 2012 The Go-MySQL-Driver Authors. All rights reserved.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.

package mysql

import (
	"database/sql/driver"
	"io"
)

type mysqlField struct {
	tableName string
	name      string
	flags     fieldFlag
	fieldType byte
	decimals  byte
}

type mysqlRows struct {
	mc      *mysqlConn
	columns []mysqlField
}

type binaryRows struct {
	mysqlRows
}

type textRows struct {
	mysqlRows
}

type emptyRows struct{}

func (rows *mysqlRows) Columns() []string {
	columns := make([]string, len(rows.columns))
	if rows.mc != nil && rows.mc.cfg.ColumnsWithAlias {
		for i := range columns {
			if tableName := rows.columns[i].tableName; len(tableName) > 0 {
				columns[i] = tableName + "." + rows.columns[i].name
			} else {
				columns[i] = rows.columns[i].name
			}
		}
	} else {
		for i := range columns {
			columns[i] = rows.columns[i].name
		}
	}
	return columns
}

func (rows *mysqlRows) Close() error {
	mc := rows.mc
	if mc == nil {
		return nil
	}
	if mc.netConn == nil {
		return ErrInvalidConn
	}

	// Remove unread packets from stream
	err := mc.readUntilEOF()
	if err == nil {
		if err = mc.discardResults(); err != nil {
			return err
		}
	}

	rows.mc = nil
	return err
}

func (rows *binaryRows) Next(dest []driver.Value) error {
	if mc := rows.mc; mc != nil {
		if mc.netConn == nil {
			return ErrInvalidConn
		}

		// Fetch next row from stream
		return rows.readRow(dest)
	}
	return io.EOF
}

func (rows *textRows) Next(dest []driver.Value) error {
	if mc := rows.mc; mc != nil {
		if mc.netConn == nil {
			return ErrInvalidConn
		}

		// Fetch next row from stream
		return rows.readRow(dest)
	}
	return io.EOF
}

func (rows emptyRows) Columns() []string {
	return nil
}

func (rows emptyRows) Close() error {
	return nil
}

func (rows emptyRows) Next(dest []driver.Value) error {
	return io.EOF
}
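The `Columns()` method in the removed `rows.go` above prefixes column names with their table name when the DSN option `ColumnsWithAlias` is set. A minimal standalone sketch of that rule (the `field` type and `columnNames` helper are ours, not part of the driver):

```go
package main

import "fmt"

// field is a stand-in for the driver's mysqlField (names are ours).
type field struct {
	tableName string
	name      string
}

// columnNames mirrors mysqlRows.Columns: when withAlias is set, each
// column that carries a table name is reported as "table.column".
func columnNames(cols []field, withAlias bool) []string {
	out := make([]string, len(cols))
	for i, c := range cols {
		if withAlias && c.tableName != "" {
			out[i] = c.tableName + "." + c.name
		} else {
			out[i] = c.name
		}
	}
	return out
}

func main() {
	cols := []field{{"users", "id"}, {"", "count"}}
	fmt.Println(columnNames(cols, true))  // [users.id count]
	fmt.Println(columnNames(cols, false)) // [id count]
}
```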
@ -1,150 +0,0 @@
// Go MySQL Driver - A MySQL-Driver for Go's database/sql package
//
// Copyright 2012 The Go-MySQL-Driver Authors. All rights reserved.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.

package mysql

import (
	"database/sql/driver"
	"fmt"
	"reflect"
	"strconv"
)

type mysqlStmt struct {
	mc         *mysqlConn
	id         uint32
	paramCount int
	columns    []mysqlField // cached from the first query
}

func (stmt *mysqlStmt) Close() error {
	if stmt.mc == nil || stmt.mc.netConn == nil {
		errLog.Print(ErrInvalidConn)
		return driver.ErrBadConn
	}

	err := stmt.mc.writeCommandPacketUint32(comStmtClose, stmt.id)
	stmt.mc = nil
	return err
}

func (stmt *mysqlStmt) NumInput() int {
	return stmt.paramCount
}

func (stmt *mysqlStmt) ColumnConverter(idx int) driver.ValueConverter {
	return converter{}
}

func (stmt *mysqlStmt) Exec(args []driver.Value) (driver.Result, error) {
	if stmt.mc.netConn == nil {
		errLog.Print(ErrInvalidConn)
		return nil, driver.ErrBadConn
	}
	// Send command
	err := stmt.writeExecutePacket(args)
	if err != nil {
		return nil, err
	}

	mc := stmt.mc

	mc.affectedRows = 0
	mc.insertId = 0

	// Read Result
	resLen, err := mc.readResultSetHeaderPacket()
	if err == nil {
		if resLen > 0 {
			// Columns
			err = mc.readUntilEOF()
			if err != nil {
				return nil, err
			}

			// Rows
			err = mc.readUntilEOF()
		}
		if err == nil {
			return &mysqlResult{
				affectedRows: int64(mc.affectedRows),
				insertId:     int64(mc.insertId),
			}, nil
		}
	}

	return nil, err
}

func (stmt *mysqlStmt) Query(args []driver.Value) (driver.Rows, error) {
	if stmt.mc.netConn == nil {
		errLog.Print(ErrInvalidConn)
		return nil, driver.ErrBadConn
	}
	// Send command
	err := stmt.writeExecutePacket(args)
	if err != nil {
		return nil, err
	}

	mc := stmt.mc

	// Read Result
	resLen, err := mc.readResultSetHeaderPacket()
	if err != nil {
		return nil, err
	}

	rows := new(binaryRows)

	if resLen > 0 {
		rows.mc = mc
		// Columns
		// If not cached, read them and cache them
		if stmt.columns == nil {
			rows.columns, err = mc.readColumns(resLen)
			stmt.columns = rows.columns
		} else {
			rows.columns = stmt.columns
			err = mc.readUntilEOF()
		}
	}

	return rows, err
}

type converter struct{}

func (c converter) ConvertValue(v interface{}) (driver.Value, error) {
	if driver.IsValue(v) {
		return v, nil
	}

	rv := reflect.ValueOf(v)
	switch rv.Kind() {
	case reflect.Ptr:
		// indirect pointers
		if rv.IsNil() {
			return nil, nil
		}
		return c.ConvertValue(rv.Elem().Interface())
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		return rv.Int(), nil
	case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32:
		return int64(rv.Uint()), nil
	case reflect.Uint64:
		u64 := rv.Uint()
		if u64 >= 1<<63 {
			return strconv.FormatUint(u64, 10), nil
		}
		return int64(u64), nil
	case reflect.Float32, reflect.Float64:
		return rv.Float(), nil
	}
	return nil, fmt.Errorf("unsupported type %T, a %s", v, rv.Kind())
}
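The `converter.ConvertValue` method in the removed `statement.go` above handles one subtle case: `database/sql`'s `driver.Value` has no unsigned 64-bit type, so a `uint64` that does not fit in an `int64` is passed through as a decimal string. A minimal standalone sketch of just that rule (the `uint64ToSQLValue` helper name is ours; the vendored code inlines this in `ConvertValue`):

```go
package main

import (
	"fmt"
	"strconv"
)

// uint64ToSQLValue folds an unsigned value into a driver-friendly type:
// values that overflow int64 are rendered as decimal strings instead.
func uint64ToSQLValue(u uint64) interface{} {
	if u >= 1<<63 {
		return strconv.FormatUint(u, 10) // too big for int64: send as string
	}
	return int64(u)
}

func main() {
	fmt.Println(uint64ToSQLValue(42))      // 42
	fmt.Println(uint64ToSQLValue(1 << 63)) // 9223372036854775808
}
```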
165 vendor/github.com/siddontang/go-mysql/_vendor/vendor/github.com/ngaut/log/LICENSE generated vendored
@ -1,165 +0,0 @@
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007

Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.


This version of the GNU Lesser General Public License incorporates
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.

0. Additional Definitions.

As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
General Public License.

"The Library" refers to a covered work governed by this License,
other than an Application or a Combined Work as defined below.

An "Application" is any work that makes use of an interface provided
by the Library, but which is not otherwise based on the Library.
Defining a subclass of a class defined by the Library is deemed a mode
of using an interface provided by the Library.

A "Combined Work" is a work produced by combining or linking an
Application with the Library.  The particular version of the Library
with which the Combined Work was made is also called the "Linked
Version".

The "Minimal Corresponding Source" for a Combined Work means the
Corresponding Source for the Combined Work, excluding any source code
for portions of the Combined Work that, considered in isolation, are
based on the Application, and not on the Linked Version.

The "Corresponding Application Code" for a Combined Work means the
object code and/or source code for the Application, including any data
and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.

1. Exception to Section 3 of the GNU GPL.

You may convey a covered work under sections 3 and 4 of this License
without being bound by section 3 of the GNU GPL.

2. Conveying Modified Versions.

If you modify a copy of the Library, and, in your modifications, a
facility refers to a function or data to be supplied by an Application
that uses the facility (other than as an argument passed when the
facility is invoked), then you may convey a copy of the modified
version:

  a) under this License, provided that you make a good faith effort to
  ensure that, in the event an Application does not supply the
  function or data, the facility still operates, and performs
  whatever part of its purpose remains meaningful, or

  b) under the GNU GPL, with none of the additional permissions of
  this License applicable to that copy.

3. Object Code Incorporating Material from Library Header Files.

The object code form of an Application may incorporate material from
a header file that is part of the Library.  You may convey such object
code under terms of your choice, provided that, if the incorporated
material is not limited to numerical parameters, data structure
layouts and accessors, or small macros, inline functions and templates
(ten or fewer lines in length), you do both of the following:

  a) Give prominent notice with each copy of the object code that the
  Library is used in it and that the Library and its use are
  covered by this License.

  b) Accompany the object code with a copy of the GNU GPL and this license
  document.

4. Combined Works.

You may convey a Combined Work under terms of your choice that,
taken together, effectively do not restrict modification of the
portions of the Library contained in the Combined Work and reverse
engineering for debugging such modifications, if you also do each of
the following:

  a) Give prominent notice with each copy of the Combined Work that
  the Library is used in it and that the Library and its use are
  covered by this License.

  b) Accompany the Combined Work with a copy of the GNU GPL and this license
  document.

  c) For a Combined Work that displays copyright notices during
  execution, include the copyright notice for the Library among
  these notices, as well as a reference directing the user to the
  copies of the GNU GPL and this license document.

  d) Do one of the following:

    0) Convey the Minimal Corresponding Source under the terms of this
    License, and the Corresponding Application Code in a form
    suitable for, and under terms that permit, the user to
    recombine or relink the Application with a modified version of
    the Linked Version to produce a modified Combined Work, in the
    manner specified by section 6 of the GNU GPL for conveying
    Corresponding Source.

    1) Use a suitable shared library mechanism for linking with the
    Library.  A suitable mechanism is one that (a) uses at run time
    a copy of the Library already present on the user's computer
    system, and (b) will operate properly with a modified version
    of the Library that is interface-compatible with the Linked
    Version.

  e) Provide Installation Information, but only if you would otherwise
  be required to provide such information under section 6 of the
  GNU GPL, and only to the extent that such information is
  necessary to install and execute a modified version of the
  Combined Work produced by recombining or relinking the
  Application with a modified version of the Linked Version. (If
  you use option 4d0, the Installation Information must accompany
  the Minimal Corresponding Source and Corresponding Application
  Code. If you use option 4d1, you must provide the Installation
  Information in the manner specified by section 6 of the GNU GPL
  for conveying Corresponding Source.)

5. Combined Libraries.

You may place library facilities that are a work based on the
Library side by side in a single library together with other library
facilities that are not Applications and are not covered by this
License, and convey such a combined library under terms of your
choice, if you do both of the following:

  a) Accompany the combined library with a copy of the same work based
  on the Library, uncombined with any other library facilities,
  conveyed under the terms of this License.

  b) Give prominent notice with the combined library that part of it
  is a work based on the Library, and explaining where to find the
  accompanying uncombined form of the same work.

6. Revised Versions of the GNU Lesser General Public License.

The Free Software Foundation may publish revised and/or new versions
of the GNU Lesser General Public License from time to time. Such new
versions will be similar in spirit to the present version, but may
differ in detail to address new problems or concerns.

Each version is given a distinguishing version number. If the
Library as you received it specifies that a certain numbered version
of the GNU Lesser General Public License "or any later version"
applies to it, you have the option of following the terms and
conditions either of that published version or of any later version
published by the Free Software Foundation. If the Library as you
received it does not specify a version number of the GNU Lesser
General Public License, you may choose any version of the GNU Lesser
General Public License ever published by the Free Software Foundation.

If the Library as you received it specifies that a proxy can decide
whether future versions of the GNU Lesser General Public License shall
apply, that proxy's public statement of acceptance of any version is
permanent authorization for you to choose that version for the
Library.
18 vendor/github.com/siddontang/go-mysql/_vendor/vendor/github.com/ngaut/log/crash_unix.go generated vendored
@ -1,18 +0,0 @@
// +build freebsd openbsd netbsd dragonfly darwin linux

package log

import (
	"log"
	"os"
	"syscall"
)

func CrashLog(file string) {
	f, err := os.OpenFile(file, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0666)
	if err != nil {
		log.Println(err.Error())
	} else {
		syscall.Dup2(int(f.Fd()), 2)
	}
}
37 vendor/github.com/siddontang/go-mysql/_vendor/vendor/github.com/ngaut/log/crash_win.go generated vendored
@ -1,37 +0,0 @@
// +build windows

package log

import (
	"log"
	"os"
	"syscall"
)

var (
	kernel32         = syscall.MustLoadDLL("kernel32.dll")
	procSetStdHandle = kernel32.MustFindProc("SetStdHandle")
)

func setStdHandle(stdhandle int32, handle syscall.Handle) error {
	r0, _, e1 := syscall.Syscall(procSetStdHandle.Addr(), 2, uintptr(stdhandle), uintptr(handle), 0)
	if r0 == 0 {
		if e1 != 0 {
			return error(e1)
		}
		return syscall.EINVAL
	}
	return nil
}

func CrashLog(file string) {
	f, err := os.OpenFile(file, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0666)
	if err != nil {
		log.Println(err.Error())
	} else {
		err = setStdHandle(syscall.STD_ERROR_HANDLE, syscall.Handle(f.Fd()))
		if err != nil {
			log.Println(err.Error())
		}
	}
}
380 vendor/github.com/siddontang/go-mysql/_vendor/vendor/github.com/ngaut/log/log.go generated vendored
@ -1,380 +0,0 @@
//high level log wrapper, so it can output different log based on level
package log

import (
	"fmt"
	"io"
	"log"
	"os"
	"runtime"
	"sync"
	"time"
)

const (
	Ldate         = log.Ldate
	Llongfile     = log.Llongfile
	Lmicroseconds = log.Lmicroseconds
	Lshortfile    = log.Lshortfile
	LstdFlags     = log.LstdFlags
	Ltime         = log.Ltime
)

type (
	LogLevel int
	LogType  int
)

const (
	LOG_FATAL   = LogType(0x1)
	LOG_ERROR   = LogType(0x2)
	LOG_WARNING = LogType(0x4)
	LOG_INFO    = LogType(0x8)
	LOG_DEBUG   = LogType(0x10)
)

const (
	LOG_LEVEL_NONE  = LogLevel(0x0)
	LOG_LEVEL_FATAL = LOG_LEVEL_NONE | LogLevel(LOG_FATAL)
	LOG_LEVEL_ERROR = LOG_LEVEL_FATAL | LogLevel(LOG_ERROR)
	LOG_LEVEL_WARN  = LOG_LEVEL_ERROR | LogLevel(LOG_WARNING)
	LOG_LEVEL_INFO  = LOG_LEVEL_WARN | LogLevel(LOG_INFO)
	LOG_LEVEL_DEBUG = LOG_LEVEL_INFO | LogLevel(LOG_DEBUG)
	LOG_LEVEL_ALL   = LOG_LEVEL_DEBUG
)

const FORMAT_TIME_DAY string = "20060102"
const FORMAT_TIME_HOUR string = "2006010215"

var _log *logger = New()

func init() {
	SetFlags(Ldate | Ltime | Lshortfile)
	SetHighlighting(runtime.GOOS != "windows")
}

func Logger() *log.Logger {
	return _log._log
}

func SetLevel(level LogLevel) {
	_log.SetLevel(level)
}
func GetLogLevel() LogLevel {
	return _log.level
}

func SetOutput(out io.Writer) {
	_log.SetOutput(out)
}

func SetOutputByName(path string) error {
	return _log.SetOutputByName(path)
}

func SetFlags(flags int) {
	_log._log.SetFlags(flags)
}

func Info(v ...interface{}) {
	_log.Info(v...)
}

func Infof(format string, v ...interface{}) {
	_log.Infof(format, v...)
}

func Debug(v ...interface{}) {
	_log.Debug(v...)
}

func Debugf(format string, v ...interface{}) {
	_log.Debugf(format, v...)
}

func Warn(v ...interface{}) {
	_log.Warning(v...)
}

func Warnf(format string, v ...interface{}) {
	_log.Warningf(format, v...)
}

func Warning(v ...interface{}) {
	_log.Warning(v...)
}

func Warningf(format string, v ...interface{}) {
	_log.Warningf(format, v...)
}

func Error(v ...interface{}) {
	_log.Error(v...)
}

func Errorf(format string, v ...interface{}) {
	_log.Errorf(format, v...)
}

func Fatal(v ...interface{}) {
	_log.Fatal(v...)
}

func Fatalf(format string, v ...interface{}) {
	_log.Fatalf(format, v...)
}

func SetLevelByString(level string) {
	_log.SetLevelByString(level)
}

func SetHighlighting(highlighting bool) {
	_log.SetHighlighting(highlighting)
}

func SetRotateByDay() {
	_log.SetRotateByDay()
}

func SetRotateByHour() {
	_log.SetRotateByHour()
}

type logger struct {
	_log         *log.Logger
	level        LogLevel
	highlighting bool

	dailyRolling bool
	hourRolling  bool

	fileName  string
	logSuffix string
	fd        *os.File

	lock sync.Mutex
}

func (l *logger) SetHighlighting(highlighting bool) {
	l.highlighting = highlighting
}

func (l *logger) SetLevel(level LogLevel) {
	l.level = level
}

func (l *logger) SetLevelByString(level string) {
	l.level = StringToLogLevel(level)
}

func (l *logger) SetRotateByDay() {
	l.dailyRolling = true
	l.logSuffix = genDayTime(time.Now())
}

func (l *logger) SetRotateByHour() {
	l.hourRolling = true
	l.logSuffix = genHourTime(time.Now())
}

func (l *logger) rotate() error {
	l.lock.Lock()
	defer l.lock.Unlock()

	var suffix string
	if l.dailyRolling {
		suffix = genDayTime(time.Now())
	} else if l.hourRolling {
		suffix = genHourTime(time.Now())
	} else {
		return nil
	}

	// Notice: if suffix is not equal to l.LogSuffix, then rotate
	if suffix != l.logSuffix {
		err := l.doRotate(suffix)
		if err != nil {
			return err
		}
	}

	return nil
}

func (l *logger) doRotate(suffix string) error {
	// Notice: Not check error, is this ok?
	l.fd.Close()

	lastFileName := l.fileName + "." + l.logSuffix
	err := os.Rename(l.fileName, lastFileName)
	if err != nil {
		return err
	}

	err = l.SetOutputByName(l.fileName)
	if err != nil {
		return err
	}

	l.logSuffix = suffix

	return nil
}

func (l *logger) SetOutput(out io.Writer) {
	l._log = log.New(out, l._log.Prefix(), l._log.Flags())
}

func (l *logger) SetOutputByName(path string) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_APPEND|os.O_RDWR, 0666)
	if err != nil {
		log.Fatal(err)
	}

	l.SetOutput(f)

	l.fileName = path
	l.fd = f

	return err
}

func (l *logger) log(t LogType, v ...interface{}) {
	if l.level|LogLevel(t) != l.level {
		return
	}

	err := l.rotate()
	if err != nil {
		fmt.Fprintf(os.Stderr, "%s\n", err.Error())
		return
	}

	v1 := make([]interface{}, len(v)+2)
	logStr, logColor := LogTypeToString(t)
	if l.highlighting {
		v1[0] = "\033" + logColor + "m[" + logStr + "]"
		copy(v1[1:], v)
		v1[len(v)+1] = "\033[0m"
	} else {
		v1[0] = "[" + logStr + "]"
		copy(v1[1:], v)
		v1[len(v)+1] = ""
	}

	s := fmt.Sprintln(v1...)
	l._log.Output(4, s)
}

func (l *logger) logf(t LogType, format string, v ...interface{}) {
	if l.level|LogLevel(t) != l.level {
		return
	}

	err := l.rotate()
	if err != nil {
		fmt.Fprintf(os.Stderr, "%s\n", err.Error())
		return
	}

	logStr, logColor := LogTypeToString(t)
	var s string
	if l.highlighting {
		s = "\033" + logColor + "m[" + logStr + "] " + fmt.Sprintf(format, v...) + "\033[0m"
	} else {
		s = "[" + logStr + "] " + fmt.Sprintf(format, v...)
	}
	l._log.Output(4, s)
}

func (l *logger) Fatal(v ...interface{}) {
	l.log(LOG_FATAL, v...)
	os.Exit(-1)
}

func (l *logger) Fatalf(format string, v ...interface{}) {
	l.logf(LOG_FATAL, format, v...)
	os.Exit(-1)
}

func (l *logger) Error(v ...interface{}) {
	l.log(LOG_ERROR, v...)
}

func (l *logger) Errorf(format string, v ...interface{}) {
	l.logf(LOG_ERROR, format, v...)
}

func (l *logger) Warning(v ...interface{}) {
	l.log(LOG_WARNING, v...)
}

func (l *logger) Warningf(format string, v ...interface{}) {
	l.logf(LOG_WARNING, format, v...)
}

func (l *logger) Debug(v ...interface{}) {
	l.log(LOG_DEBUG, v...)
}

func (l *logger) Debugf(format string, v ...interface{}) {
	l.logf(LOG_DEBUG, format, v...)
}

func (l *logger) Info(v ...interface{}) {
	l.log(LOG_INFO, v...)
}

func (l *logger) Infof(format string, v ...interface{}) {
	l.logf(LOG_INFO, format, v...)
}

func StringToLogLevel(level string) LogLevel {
	switch level {
	case "fatal":
		return LOG_LEVEL_FATAL
	case "error":
		return LOG_LEVEL_ERROR
	case "warn":
		return LOG_LEVEL_WARN
	case "warning":
		return LOG_LEVEL_WARN
	case "debug":
		return LOG_LEVEL_DEBUG
	case "info":
		return LOG_LEVEL_INFO
	}
	return LOG_LEVEL_ALL
}

func LogTypeToString(t LogType) (string, string) {
	switch t {
	case LOG_FATAL:
		return "fatal", "[0;31"
	case LOG_ERROR:
		return "error", "[0;31"
	case LOG_WARNING:
		return "warning", "[0;33"
	case LOG_DEBUG:
		return "debug", "[0;36"
	case LOG_INFO:
		return "info", "[0;37"
	}
	return "unknown", "[0;37"
}

func genDayTime(t time.Time) string {
	return t.Format(FORMAT_TIME_DAY)
}

func genHourTime(t time.Time) string {
	return t.Format(FORMAT_TIME_HOUR)
}

func New() *logger {
	return Newlogger(os.Stderr, "")
}

func Newlogger(w io.Writer, prefix string) *logger {
	return &logger{_log: log.New(w, prefix, LstdFlags), level: LOG_LEVEL_ALL, highlighting: true}
}
488  vendor/github.com/siddontang/go-mysql/_vendor/vendor/github.com/satori/go.uuid/uuid.go  generated  vendored
@ -1,488 +0,0 @@
// Copyright (C) 2013-2015 by Maxim Bublis <b@codemonkey.ru>
//
// Permission is hereby granted, free of charge, to any person obtaining
// a copy of this software and associated documentation files (the
// "Software"), to deal in the Software without restriction, including
// without limitation the rights to use, copy, modify, merge, publish,
// distribute, sublicense, and/or sell copies of the Software, and to
// permit persons to whom the Software is furnished to do so, subject to
// the following conditions:
//
// The above copyright notice and this permission notice shall be
// included in all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

// Package uuid provides implementation of Universally Unique Identifier (UUID).
// Supported versions are 1, 3, 4 and 5 (as specified in RFC 4122) and
// version 2 (as specified in DCE 1.1).
package uuid

import (
	"bytes"
	"crypto/md5"
	"crypto/rand"
	"crypto/sha1"
	"database/sql/driver"
	"encoding/binary"
	"encoding/hex"
	"fmt"
	"hash"
	"net"
	"os"
	"sync"
	"time"
)

// UUID layout variants.
const (
	VariantNCS = iota
	VariantRFC4122
	VariantMicrosoft
	VariantFuture
)

// UUID DCE domains.
const (
	DomainPerson = iota
	DomainGroup
	DomainOrg
)

// Difference in 100-nanosecond intervals between
// UUID epoch (October 15, 1582) and Unix epoch (January 1, 1970).
const epochStart = 122192928000000000

// Used in string method conversion
const dash byte = '-'

// UUID v1/v2 storage.
var (
	storageMutex  sync.Mutex
	storageOnce   sync.Once
	epochFunc     = unixTimeFunc
	clockSequence uint16
	lastTime      uint64
	hardwareAddr  [6]byte
	posixUID      = uint32(os.Getuid())
	posixGID      = uint32(os.Getgid())
)

// String parse helpers.
var (
	urnPrefix  = []byte("urn:uuid:")
	byteGroups = []int{8, 4, 4, 4, 12}
)

func initClockSequence() {
	buf := make([]byte, 2)
	safeRandom(buf)
	clockSequence = binary.BigEndian.Uint16(buf)
}

func initHardwareAddr() {
	interfaces, err := net.Interfaces()
	if err == nil {
		for _, iface := range interfaces {
			if len(iface.HardwareAddr) >= 6 {
				copy(hardwareAddr[:], iface.HardwareAddr)
				return
			}
		}
	}

	// Initialize hardwareAddr randomly in case
	// of real network interfaces absence
	safeRandom(hardwareAddr[:])

	// Set multicast bit as recommended in RFC 4122
	hardwareAddr[0] |= 0x01
}

func initStorage() {
	initClockSequence()
	initHardwareAddr()
}

func safeRandom(dest []byte) {
	if _, err := rand.Read(dest); err != nil {
		panic(err)
	}
}

// Returns difference in 100-nanosecond intervals between
// UUID epoch (October 15, 1582) and current time.
// This is default epoch calculation function.
func unixTimeFunc() uint64 {
	return epochStart + uint64(time.Now().UnixNano()/100)
}

// UUID representation compliant with specification
// described in RFC 4122.
type UUID [16]byte

// NullUUID can be used with the standard sql package to represent a
// UUID value that can be NULL in the database
type NullUUID struct {
	UUID  UUID
	Valid bool
}

// The nil UUID is special form of UUID that is specified to have all
// 128 bits set to zero.
var Nil = UUID{}

// Predefined namespace UUIDs.
var (
	NamespaceDNS, _  = FromString("6ba7b810-9dad-11d1-80b4-00c04fd430c8")
	NamespaceURL, _  = FromString("6ba7b811-9dad-11d1-80b4-00c04fd430c8")
	NamespaceOID, _  = FromString("6ba7b812-9dad-11d1-80b4-00c04fd430c8")
	NamespaceX500, _ = FromString("6ba7b814-9dad-11d1-80b4-00c04fd430c8")
)

// And returns result of binary AND of two UUIDs.
func And(u1 UUID, u2 UUID) UUID {
	u := UUID{}
	for i := 0; i < 16; i++ {
		u[i] = u1[i] & u2[i]
	}
	return u
}

// Or returns result of binary OR of two UUIDs.
func Or(u1 UUID, u2 UUID) UUID {
	u := UUID{}
	for i := 0; i < 16; i++ {
		u[i] = u1[i] | u2[i]
	}
	return u
}

// Equal returns true if u1 and u2 equals, otherwise returns false.
func Equal(u1 UUID, u2 UUID) bool {
	return bytes.Equal(u1[:], u2[:])
}

// Version returns algorithm version used to generate UUID.
func (u UUID) Version() uint {
	return uint(u[6] >> 4)
}

// Variant returns UUID layout variant.
func (u UUID) Variant() uint {
	switch {
	case (u[8] & 0x80) == 0x00:
		return VariantNCS
	case (u[8]&0xc0)|0x80 == 0x80:
		return VariantRFC4122
	case (u[8]&0xe0)|0xc0 == 0xc0:
		return VariantMicrosoft
	}
	return VariantFuture
}

// Bytes returns bytes slice representation of UUID.
func (u UUID) Bytes() []byte {
	return u[:]
}

// Returns canonical string representation of UUID:
// xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.
func (u UUID) String() string {
	buf := make([]byte, 36)

	hex.Encode(buf[0:8], u[0:4])
	buf[8] = dash
	hex.Encode(buf[9:13], u[4:6])
	buf[13] = dash
	hex.Encode(buf[14:18], u[6:8])
	buf[18] = dash
	hex.Encode(buf[19:23], u[8:10])
	buf[23] = dash
	hex.Encode(buf[24:], u[10:])

	return string(buf)
}

// SetVersion sets version bits.
func (u *UUID) SetVersion(v byte) {
	u[6] = (u[6] & 0x0f) | (v << 4)
}

// SetVariant sets variant bits as described in RFC 4122.
func (u *UUID) SetVariant() {
	u[8] = (u[8] & 0xbf) | 0x80
}

// MarshalText implements the encoding.TextMarshaler interface.
// The encoding is the same as returned by String.
func (u UUID) MarshalText() (text []byte, err error) {
	text = []byte(u.String())
	return
}

// UnmarshalText implements the encoding.TextUnmarshaler interface.
// Following formats are supported:
// "6ba7b810-9dad-11d1-80b4-00c04fd430c8",
// "{6ba7b810-9dad-11d1-80b4-00c04fd430c8}",
// "urn:uuid:6ba7b810-9dad-11d1-80b4-00c04fd430c8"
func (u *UUID) UnmarshalText(text []byte) (err error) {
	if len(text) < 32 {
		err = fmt.Errorf("uuid: UUID string too short: %s", text)
		return
	}

	t := text[:]
	braced := false

	if bytes.Equal(t[:9], urnPrefix) {
		t = t[9:]
	} else if t[0] == '{' {
		braced = true
		t = t[1:]
	}

	b := u[:]

	for i, byteGroup := range byteGroups {
		if i > 0 && t[0] == '-' {
			t = t[1:]
		} else if i > 0 && t[0] != '-' {
			err = fmt.Errorf("uuid: invalid string format")
			return
		}

		if i == 2 {
			if !bytes.Contains([]byte("012345"), []byte{t[0]}) {
				err = fmt.Errorf("uuid: invalid version number: %s", t[0])
				return
			}
		}

		if len(t) < byteGroup {
			err = fmt.Errorf("uuid: UUID string too short: %s", text)
			return
		}

		if i == 4 && len(t) > byteGroup &&
			((braced && t[byteGroup] != '}') || len(t[byteGroup:]) > 1 || !braced) {
			err = fmt.Errorf("uuid: UUID string too long: %s", t)
			return
		}

		_, err = hex.Decode(b[:byteGroup/2], t[:byteGroup])

		if err != nil {
			return
		}

		t = t[byteGroup:]
		b = b[byteGroup/2:]
	}

	return
}

// MarshalBinary implements the encoding.BinaryMarshaler interface.
func (u UUID) MarshalBinary() (data []byte, err error) {
	data = u.Bytes()
	return
}

// UnmarshalBinary implements the encoding.BinaryUnmarshaler interface.
// It will return error if the slice isn't 16 bytes long.
func (u *UUID) UnmarshalBinary(data []byte) (err error) {
	if len(data) != 16 {
		err = fmt.Errorf("uuid: UUID must be exactly 16 bytes long, got %d bytes", len(data))
		return
	}
	copy(u[:], data)

	return
}

// Value implements the driver.Valuer interface.
func (u UUID) Value() (driver.Value, error) {
	return u.String(), nil
}

// Scan implements the sql.Scanner interface.
// A 16-byte slice is handled by UnmarshalBinary, while
// a longer byte slice or a string is handled by UnmarshalText.
func (u *UUID) Scan(src interface{}) error {
	switch src := src.(type) {
	case []byte:
		if len(src) == 16 {
			return u.UnmarshalBinary(src)
		}
		return u.UnmarshalText(src)

	case string:
		return u.UnmarshalText([]byte(src))
	}

	return fmt.Errorf("uuid: cannot convert %T to UUID", src)
}

// Value implements the driver.Valuer interface.
func (u NullUUID) Value() (driver.Value, error) {
	if !u.Valid {
		return nil, nil
	}
	// Delegate to UUID Value function
	return u.UUID.Value()
}

// Scan implements the sql.Scanner interface.
func (u *NullUUID) Scan(src interface{}) error {
	if src == nil {
		u.UUID, u.Valid = Nil, false
		return nil
	}

	// Delegate to UUID Scan function
	u.Valid = true
	return u.UUID.Scan(src)
}

// FromBytes returns UUID converted from raw byte slice input.
// It will return error if the slice isn't 16 bytes long.
func FromBytes(input []byte) (u UUID, err error) {
	err = u.UnmarshalBinary(input)
	return
}

// FromBytesOrNil returns UUID converted from raw byte slice input.
// Same behavior as FromBytes, but returns a Nil UUID on error.
func FromBytesOrNil(input []byte) UUID {
	uuid, err := FromBytes(input)
	if err != nil {
		return Nil
	}
	return uuid
}

// FromString returns UUID parsed from string input.
// Input is expected in a form accepted by UnmarshalText.
func FromString(input string) (u UUID, err error) {
	err = u.UnmarshalText([]byte(input))
	return
}

// FromStringOrNil returns UUID parsed from string input.
// Same behavior as FromString, but returns a Nil UUID on error.
func FromStringOrNil(input string) UUID {
	uuid, err := FromString(input)
	if err != nil {
		return Nil
	}
	return uuid
}

// Returns UUID v1/v2 storage state.
// Returns epoch timestamp, clock sequence, and hardware address.
func getStorage() (uint64, uint16, []byte) {
	storageOnce.Do(initStorage)

	storageMutex.Lock()
	defer storageMutex.Unlock()

	timeNow := epochFunc()
	// Clock changed backwards since last UUID generation.
	// Should increase clock sequence.
	if timeNow <= lastTime {
		clockSequence++
	}
	lastTime = timeNow

	return timeNow, clockSequence, hardwareAddr[:]
}

// NewV1 returns UUID based on current timestamp and MAC address.
func NewV1() UUID {
	u := UUID{}

	timeNow, clockSeq, hardwareAddr := getStorage()

	binary.BigEndian.PutUint32(u[0:], uint32(timeNow))
	binary.BigEndian.PutUint16(u[4:], uint16(timeNow>>32))
	binary.BigEndian.PutUint16(u[6:], uint16(timeNow>>48))
	binary.BigEndian.PutUint16(u[8:], clockSeq)

	copy(u[10:], hardwareAddr)

	u.SetVersion(1)
	u.SetVariant()

	return u
}

// NewV2 returns DCE Security UUID based on POSIX UID/GID.
func NewV2(domain byte) UUID {
	u := UUID{}

	timeNow, clockSeq, hardwareAddr := getStorage()

	switch domain {
	case DomainPerson:
		binary.BigEndian.PutUint32(u[0:], posixUID)
	case DomainGroup:
		binary.BigEndian.PutUint32(u[0:], posixGID)
	}

	binary.BigEndian.PutUint16(u[4:], uint16(timeNow>>32))
	binary.BigEndian.PutUint16(u[6:], uint16(timeNow>>48))
	binary.BigEndian.PutUint16(u[8:], clockSeq)
	u[9] = domain

	copy(u[10:], hardwareAddr)

	u.SetVersion(2)
	u.SetVariant()

	return u
}

// NewV3 returns UUID based on MD5 hash of namespace UUID and name.
func NewV3(ns UUID, name string) UUID {
	u := newFromHash(md5.New(), ns, name)
	u.SetVersion(3)
	u.SetVariant()

	return u
}

// NewV4 returns random generated UUID.
func NewV4() UUID {
	u := UUID{}
	safeRandom(u[:])
	u.SetVersion(4)
	u.SetVariant()

	return u
}

// NewV5 returns UUID based on SHA-1 hash of namespace UUID and name.
func NewV5(ns UUID, name string) UUID {
	u := newFromHash(sha1.New(), ns, name)
	u.SetVersion(5)
	u.SetVariant()

	return u
}

// Returns UUID based on hashing of namespace UUID and name.
func newFromHash(h hash.Hash, ns UUID, name string) UUID {
	u := UUID{}
	h.Write(ns[:])
	h.Write([]byte(name))
	copy(u[:], h.Sum(nil))

	return u
}
@ -1,39 +0,0 @@
// Copyright 2012, Google Inc. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package ioutil2

import (
	"io"
	"io/ioutil"
	"os"
	"path"
)

// Write file to temp and atomically move when everything else succeeds.
func WriteFileAtomic(filename string, data []byte, perm os.FileMode) error {
	dir, name := path.Split(filename)
	f, err := ioutil.TempFile(dir, name)
	if err != nil {
		return err
	}
	n, err := f.Write(data)
	f.Close()
	if err == nil && n < len(data) {
		err = io.ErrShortWrite
	} else {
		err = os.Chmod(f.Name(), perm)
	}
	if err != nil {
		os.Remove(f.Name())
		return err
	}
	return os.Rename(f.Name(), filename)
}

// Check file exists or not
func FileExists(name string) bool {
	_, err := os.Stat(name)
	return !os.IsNotExist(err)
}
@ -1,69 +0,0 @@
package ioutil2

import (
	"errors"
	"io"
)

var ErrExceedLimit = errors.New("write exceed limit")

func NewSectionWriter(w io.WriterAt, off int64, n int64) *SectionWriter {
	return &SectionWriter{w, off, off, off + n}
}

type SectionWriter struct {
	w     io.WriterAt
	base  int64
	off   int64
	limit int64
}

func (s *SectionWriter) Write(p []byte) (n int, err error) {
	if s.off >= s.limit {
		return 0, ErrExceedLimit
	}

	if max := s.limit - s.off; int64(len(p)) > max {
		return 0, ErrExceedLimit
	}

	n, err = s.w.WriteAt(p, s.off)
	s.off += int64(n)
	return
}

var errWhence = errors.New("Seek: invalid whence")
var errOffset = errors.New("Seek: invalid offset")

func (s *SectionWriter) Seek(offset int64, whence int) (int64, error) {
	switch whence {
	default:
		return 0, errWhence
	case 0:
		offset += s.base
	case 1:
		offset += s.off
	case 2:
		offset += s.limit
	}
	if offset < s.base {
		return 0, errOffset
	}
	s.off = offset
	return offset - s.base, nil
}

func (s *SectionWriter) WriteAt(p []byte, off int64) (n int, err error) {
	if off < 0 || off >= s.limit-s.base {
		return 0, errOffset
	}
	off += s.base
	if max := s.limit - off; int64(len(p)) > max {
		return 0, ErrExceedLimit
	}

	return s.w.WriteAt(p, off)
}

// Size returns the size of the section in bytes.
func (s *SectionWriter) Size() int64 { return s.limit - s.base }
22  vendor/github.com/siddontang/go-mysql/_vendor/vendor/golang.org/x/net/PATENTS  generated  vendored
@ -1,22 +0,0 @@
Additional IP Rights Grant (Patents)

"This implementation" means the copyrightable works distributed by
Google as part of the Go project.

Google hereby grants to You a perpetual, worldwide, non-exclusive,
no-charge, royalty-free, irrevocable (except as stated in this section)
patent license to make, have made, use, offer to sell, sell, import,
transfer and otherwise run, modify and propagate the contents of this
implementation of Go, where such license applies only to those patent
claims, both currently owned or controlled by Google and acquired in
the future, licensable by Google that are necessarily infringed by this
implementation of Go. This grant does not include claims that would be
infringed only as a consequence of further modification of this
implementation. If you or your agent or exclusive licensee institute or
order or agree to the institution of patent litigation against any
entity (including a cross-claim or counterclaim in a lawsuit) alleging
that this implementation of Go or any code incorporated within this
implementation of Go constitutes direct or contributory patent
infringement, or inducement of patent infringement, then any patent
rights granted to you under this License for this implementation of Go
shall terminate as of the date such litigation is filed.
447  vendor/github.com/siddontang/go-mysql/_vendor/vendor/golang.org/x/net/context/context.go  generated  vendored
@ -1,447 +0,0 @@
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Package context defines the Context type, which carries deadlines,
// cancelation signals, and other request-scoped values across API boundaries
// and between processes.
//
// Incoming requests to a server should create a Context, and outgoing calls to
// servers should accept a Context. The chain of function calls between must
// propagate the Context, optionally replacing it with a modified copy created
// using WithDeadline, WithTimeout, WithCancel, or WithValue.
//
// Programs that use Contexts should follow these rules to keep interfaces
// consistent across packages and enable static analysis tools to check context
// propagation:
//
// Do not store Contexts inside a struct type; instead, pass a Context
// explicitly to each function that needs it. The Context should be the first
// parameter, typically named ctx:
//
// 	func DoSomething(ctx context.Context, arg Arg) error {
// 		// ... use ctx ...
// 	}
//
// Do not pass a nil Context, even if a function permits it. Pass context.TODO
// if you are unsure about which Context to use.
//
// Use context Values only for request-scoped data that transits processes and
// APIs, not for passing optional parameters to functions.
//
// The same Context may be passed to functions running in different goroutines;
// Contexts are safe for simultaneous use by multiple goroutines.
//
// See http://blog.golang.org/context for example code for a server that uses
// Contexts.
package context // import "golang.org/x/net/context"

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// A Context carries a deadline, a cancelation signal, and other values across
// API boundaries.
//
// Context's methods may be called by multiple goroutines simultaneously.
type Context interface {
	// Deadline returns the time when work done on behalf of this context
	// should be canceled. Deadline returns ok==false when no deadline is
	// set. Successive calls to Deadline return the same results.
	Deadline() (deadline time.Time, ok bool)

	// Done returns a channel that's closed when work done on behalf of this
	// context should be canceled. Done may return nil if this context can
	// never be canceled. Successive calls to Done return the same value.
	//
	// WithCancel arranges for Done to be closed when cancel is called;
	// WithDeadline arranges for Done to be closed when the deadline
	// expires; WithTimeout arranges for Done to be closed when the timeout
	// elapses.
	//
	// Done is provided for use in select statements:
	//
	//  // Stream generates values with DoSomething and sends them to out
	//  // until DoSomething returns an error or ctx.Done is closed.
	//  func Stream(ctx context.Context, out <-chan Value) error {
	//  	for {
	//  		v, err := DoSomething(ctx)
	//  		if err != nil {
	//  			return err
	//  		}
	//  		select {
	//  		case <-ctx.Done():
	//  			return ctx.Err()
	//  		case out <- v:
	//  		}
	//  	}
	//  }
	//
	// See http://blog.golang.org/pipelines for more examples of how to use
	// a Done channel for cancelation.
	Done() <-chan struct{}

	// Err returns a non-nil error value after Done is closed. Err returns
	// Canceled if the context was canceled or DeadlineExceeded if the
	// context's deadline passed. No other values for Err are defined.
	// After Done is closed, successive calls to Err return the same value.
	Err() error

	// Value returns the value associated with this context for key, or nil
	// if no value is associated with key. Successive calls to Value with
	// the same key returns the same result.
	//
	// Use context values only for request-scoped data that transits
	// processes and API boundaries, not for passing optional parameters to
	// functions.
	//
	// A key identifies a specific value in a Context. Functions that wish
	// to store values in Context typically allocate a key in a global
	// variable then use that key as the argument to context.WithValue and
	// Context.Value. A key can be any type that supports equality;
	// packages should define keys as an unexported type to avoid
	// collisions.
	//
	// Packages that define a Context key should provide type-safe accessors
	// for the values stores using that key:
	//
	// 	// Package user defines a User type that's stored in Contexts.
	// 	package user
	//
	// 	import "golang.org/x/net/context"
	//
	// 	// User is the type of value stored in the Contexts.
	// 	type User struct {...}
	//
	// 	// key is an unexported type for keys defined in this package.
	// 	// This prevents collisions with keys defined in other packages.
	// 	type key int
	//
	// 	// userKey is the key for user.User values in Contexts. It is
	// 	// unexported; clients use user.NewContext and user.FromContext
	// 	// instead of using this key directly.
	// 	var userKey key = 0
	//
	// 	// NewContext returns a new Context that carries value u.
	// 	func NewContext(ctx context.Context, u *User) context.Context {
	// 		return context.WithValue(ctx, userKey, u)
	// 	}
	//
	// 	// FromContext returns the User value stored in ctx, if any.
|
||||
// func FromContext(ctx context.Context) (*User, bool) {
|
||||
// u, ok := ctx.Value(userKey).(*User)
|
||||
// return u, ok
|
||||
// }
|
||||
Value(key interface{}) interface{}
|
||||
}
|
||||
|
||||
// Canceled is the error returned by Context.Err when the context is canceled.
var Canceled = errors.New("context canceled")

// DeadlineExceeded is the error returned by Context.Err when the context's
// deadline passes.
var DeadlineExceeded = errors.New("context deadline exceeded")

// An emptyCtx is never canceled, has no values, and has no deadline. It is not
// struct{}, since vars of this type must have distinct addresses.
type emptyCtx int

func (*emptyCtx) Deadline() (deadline time.Time, ok bool) {
	return
}

func (*emptyCtx) Done() <-chan struct{} {
	return nil
}

func (*emptyCtx) Err() error {
	return nil
}

func (*emptyCtx) Value(key interface{}) interface{} {
	return nil
}

func (e *emptyCtx) String() string {
	switch e {
	case background:
		return "context.Background"
	case todo:
		return "context.TODO"
	}
	return "unknown empty Context"
}

var (
	background = new(emptyCtx)
	todo       = new(emptyCtx)
)

// Background returns a non-nil, empty Context. It is never canceled, has no
// values, and has no deadline. It is typically used by the main function,
// initialization, and tests, and as the top-level Context for incoming
// requests.
func Background() Context {
	return background
}

// TODO returns a non-nil, empty Context. Code should use context.TODO when
// it's unclear which Context to use or it is not yet available (because the
// surrounding function has not yet been extended to accept a Context
// parameter). TODO is recognized by static analysis tools that determine
// whether Contexts are propagated correctly in a program.
func TODO() Context {
	return todo
}

// A CancelFunc tells an operation to abandon its work.
// A CancelFunc does not wait for the work to stop.
// After the first call, subsequent calls to a CancelFunc do nothing.
type CancelFunc func()

// WithCancel returns a copy of parent with a new Done channel. The returned
// context's Done channel is closed when the returned cancel function is called
// or when the parent context's Done channel is closed, whichever happens first.
//
// Canceling this context releases resources associated with it, so code should
// call cancel as soon as the operations running in this Context complete.
func WithCancel(parent Context) (ctx Context, cancel CancelFunc) {
	c := newCancelCtx(parent)
	propagateCancel(parent, &c)
	return &c, func() { c.cancel(true, Canceled) }
}

// newCancelCtx returns an initialized cancelCtx.
func newCancelCtx(parent Context) cancelCtx {
	return cancelCtx{
		Context: parent,
		done:    make(chan struct{}),
	}
}

// propagateCancel arranges for child to be canceled when parent is.
func propagateCancel(parent Context, child canceler) {
	if parent.Done() == nil {
		return // parent is never canceled
	}
	if p, ok := parentCancelCtx(parent); ok {
		p.mu.Lock()
		if p.err != nil {
			// parent has already been canceled
			child.cancel(false, p.err)
		} else {
			if p.children == nil {
				p.children = make(map[canceler]bool)
			}
			p.children[child] = true
		}
		p.mu.Unlock()
	} else {
		go func() {
			select {
			case <-parent.Done():
				child.cancel(false, parent.Err())
			case <-child.Done():
			}
		}()
	}
}

// parentCancelCtx follows a chain of parent references until it finds a
// *cancelCtx. This function understands how each of the concrete types in this
// package represents its parent.
func parentCancelCtx(parent Context) (*cancelCtx, bool) {
	for {
		switch c := parent.(type) {
		case *cancelCtx:
			return c, true
		case *timerCtx:
			return &c.cancelCtx, true
		case *valueCtx:
			parent = c.Context
		default:
			return nil, false
		}
	}
}

// removeChild removes a context from its parent.
func removeChild(parent Context, child canceler) {
	p, ok := parentCancelCtx(parent)
	if !ok {
		return
	}
	p.mu.Lock()
	if p.children != nil {
		delete(p.children, child)
	}
	p.mu.Unlock()
}

// A canceler is a context type that can be canceled directly. The
// implementations are *cancelCtx and *timerCtx.
type canceler interface {
	cancel(removeFromParent bool, err error)
	Done() <-chan struct{}
}

// A cancelCtx can be canceled. When canceled, it also cancels any children
// that implement canceler.
type cancelCtx struct {
	Context

	done chan struct{} // closed by the first cancel call.

	mu       sync.Mutex
	children map[canceler]bool // set to nil by the first cancel call
	err      error             // set to non-nil by the first cancel call
}

func (c *cancelCtx) Done() <-chan struct{} {
	return c.done
}

func (c *cancelCtx) Err() error {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.err
}

func (c *cancelCtx) String() string {
	return fmt.Sprintf("%v.WithCancel", c.Context)
}

// cancel closes c.done, cancels each of c's children, and, if
// removeFromParent is true, removes c from its parent's children.
func (c *cancelCtx) cancel(removeFromParent bool, err error) {
	if err == nil {
		panic("context: internal error: missing cancel error")
	}
	c.mu.Lock()
	if c.err != nil {
		c.mu.Unlock()
		return // already canceled
	}
	c.err = err
	close(c.done)
	for child := range c.children {
		// NOTE: acquiring the child's lock while holding parent's lock.
		child.cancel(false, err)
	}
	c.children = nil
	c.mu.Unlock()

	if removeFromParent {
		removeChild(c.Context, c)
	}
}

// WithDeadline returns a copy of the parent context with the deadline adjusted
// to be no later than d. If the parent's deadline is already earlier than d,
// WithDeadline(parent, d) is semantically equivalent to parent. The returned
// context's Done channel is closed when the deadline expires, when the returned
// cancel function is called, or when the parent context's Done channel is
// closed, whichever happens first.
//
// Canceling this context releases resources associated with it, so code should
// call cancel as soon as the operations running in this Context complete.
func WithDeadline(parent Context, deadline time.Time) (Context, CancelFunc) {
	if cur, ok := parent.Deadline(); ok && cur.Before(deadline) {
		// The current deadline is already sooner than the new one.
		return WithCancel(parent)
	}
	c := &timerCtx{
		cancelCtx: newCancelCtx(parent),
		deadline:  deadline,
	}
	propagateCancel(parent, c)
	d := deadline.Sub(time.Now())
	if d <= 0 {
		c.cancel(true, DeadlineExceeded) // deadline has already passed
		return c, func() { c.cancel(true, Canceled) }
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.err == nil {
		c.timer = time.AfterFunc(d, func() {
			c.cancel(true, DeadlineExceeded)
		})
	}
	return c, func() { c.cancel(true, Canceled) }
}

// A timerCtx carries a timer and a deadline. It embeds a cancelCtx to
// implement Done and Err. It implements cancel by stopping its timer then
// delegating to cancelCtx.cancel.
type timerCtx struct {
	cancelCtx
	timer *time.Timer // Under cancelCtx.mu.

	deadline time.Time
}

func (c *timerCtx) Deadline() (deadline time.Time, ok bool) {
	return c.deadline, true
}

func (c *timerCtx) String() string {
	return fmt.Sprintf("%v.WithDeadline(%s [%s])", c.cancelCtx.Context, c.deadline, c.deadline.Sub(time.Now()))
}

func (c *timerCtx) cancel(removeFromParent bool, err error) {
	c.cancelCtx.cancel(false, err)
	if removeFromParent {
		// Remove this timerCtx from its parent cancelCtx's children.
		removeChild(c.cancelCtx.Context, c)
	}
	c.mu.Lock()
	if c.timer != nil {
		c.timer.Stop()
		c.timer = nil
	}
	c.mu.Unlock()
}

// WithTimeout returns WithDeadline(parent, time.Now().Add(timeout)).
//
// Canceling this context releases resources associated with it, so code should
// call cancel as soon as the operations running in this Context complete:
//
// 	func slowOperationWithTimeout(ctx context.Context) (Result, error) {
// 		ctx, cancel := context.WithTimeout(ctx, 100*time.Millisecond)
// 		defer cancel() // releases resources if slowOperation completes before timeout elapses
// 		return slowOperation(ctx)
// 	}
func WithTimeout(parent Context, timeout time.Duration) (Context, CancelFunc) {
	return WithDeadline(parent, time.Now().Add(timeout))
}

// WithValue returns a copy of parent in which the value associated with key is
// val.
//
// Use context Values only for request-scoped data that transits processes and
// APIs, not for passing optional parameters to functions.
func WithValue(parent Context, key interface{}, val interface{}) Context {
	return &valueCtx{parent, key, val}
}

// A valueCtx carries a key-value pair. It implements Value for that key and
// delegates all other calls to the embedded Context.
type valueCtx struct {
	Context
	key, val interface{}
}

func (c *valueCtx) String() string {
	return fmt.Sprintf("%v.WithValue(%#v, %#v)", c.Context, c.key, c.val)
}

func (c *valueCtx) Value(key interface{}) interface{} {
	if c.key == key {
		return c.val
	}
	return c.Context.Value(key)
}
337	vendor/github.com/siddontang/go-mysql/canal/canal.go (generated, vendored)
@@ -1,26 +1,25 @@
package canal

import (
	"context"
	"fmt"
	"io/ioutil"
	"os"
	"path"
	"regexp"
	"strconv"
	"strings"
	"sync"
	"time"

	"github.com/juju/errors"
	"github.com/ngaut/log"
	"github.com/siddontang/go-log/log"
	"github.com/siddontang/go-mysql/client"
	"github.com/siddontang/go-mysql/dump"
	"github.com/siddontang/go-mysql/mysql"
	"github.com/siddontang/go-mysql/replication"
	"github.com/siddontang/go-mysql/schema"
	"github.com/siddontang/go/sync2"
)

var errCanalClosed = errors.New("canal was closed")

// Canal can sync your MySQL data into everywhere, like Elasticsearch, Redis, etc...
// MySQL must open row format for binlog
type Canal struct {
@@ -30,48 +29,49 @@ type Canal struct {

	master     *masterInfo
	dumper     *dump.Dumper
	dumped     bool
	dumpDoneCh chan struct{}
	syncer     *replication.BinlogSyncer

	rsLock       sync.Mutex
	rsHandlers   []RowsEventHandler
	eventHandler EventHandler

	connLock sync.Mutex
	conn     *client.Conn

	wg sync.WaitGroup
	tableLock          sync.RWMutex
	tables             map[string]*schema.Table
	errorTablesGetTime map[string]time.Time

	tableLock         sync.Mutex
	tables            map[string]*schema.Table
	tableMatchCache   map[string]bool
	includeTableRegex []*regexp.Regexp
	excludeTableRegex []*regexp.Regexp

	quit   chan struct{}
	closed sync2.AtomicBool
	ctx    context.Context
	cancel context.CancelFunc
}

// canal will retry fetching unknown table's meta after UnknownTableRetryPeriod
var UnknownTableRetryPeriod = time.Second * time.Duration(10)
var ErrExcludedTable = errors.New("excluded table meta")

func NewCanal(cfg *Config) (*Canal, error) {
	c := new(Canal)
	c.cfg = cfg
	c.closed.Set(false)
	c.quit = make(chan struct{})

	os.MkdirAll(cfg.DataDir, 0755)
	c.ctx, c.cancel = context.WithCancel(context.Background())

	c.dumpDoneCh = make(chan struct{})
	c.rsHandlers = make([]RowsEventHandler, 0, 4)
	c.eventHandler = &DummyEventHandler{}

	c.tables = make(map[string]*schema.Table)
	if c.cfg.DiscardNoMetaRowEvent {
		c.errorTablesGetTime = make(map[string]time.Time)
	}
	c.master = &masterInfo{}

	var err error
	if c.master, err = loadMasterInfo(c.masterInfoPath()); err != nil {
		return nil, errors.Trace(err)
	} else if len(c.master.Addr) != 0 && c.master.Addr != c.cfg.Addr {
		log.Infof("MySQL addr %s in old master.info, but new %s, reset", c.master.Addr, c.cfg.Addr)
		// may use another MySQL, reset
		c.master = &masterInfo{}
	}

	c.master.Addr = c.cfg.Addr

	if err := c.prepareDumper(); err != nil {
	if err = c.prepareDumper(); err != nil {
		return nil, errors.Trace(err)
	}

@@ -83,6 +83,33 @@ func NewCanal(cfg *Config) (*Canal, error) {
		return nil, errors.Trace(err)
	}

	// init table filter
	if n := len(c.cfg.IncludeTableRegex); n > 0 {
		c.includeTableRegex = make([]*regexp.Regexp, n)
		for i, val := range c.cfg.IncludeTableRegex {
			reg, err := regexp.Compile(val)
			if err != nil {
				return nil, errors.Trace(err)
			}
			c.includeTableRegex[i] = reg
		}
	}

	if n := len(c.cfg.ExcludeTableRegex); n > 0 {
		c.excludeTableRegex = make([]*regexp.Regexp, n)
		for i, val := range c.cfg.ExcludeTableRegex {
			reg, err := regexp.Compile(val)
			if err != nil {
				return nil, errors.Trace(err)
			}
			c.excludeTableRegex[i] = reg
		}
	}

	if c.includeTableRegex != nil || c.excludeTableRegex != nil {
		c.tableMatchCache = make(map[string]bool)
	}

	return c, nil
}

@@ -114,6 +141,15 @@ func (c *Canal) prepareDumper() error {
		c.dumper.AddTables(tableDB, tables...)
	}

	charset := c.cfg.Charset
	c.dumper.SetCharset(charset)

	c.dumper.SetWhere(c.cfg.Dump.Where)
	c.dumper.SkipMasterData(c.cfg.Dump.SkipMasterData)
	c.dumper.SetMaxAllowedPacket(c.cfg.Dump.MaxAllowedPacketMB)
	// Use hex blob for mysqldump
	c.dumper.SetHexBlob(true)

	for _, ignoreTable := range c.cfg.Dump.IgnoreTables {
		if seps := strings.Split(ignoreTable, ","); len(seps) == 2 {
			c.dumper.AddIgnoreTables(seps[0], seps[1])
@@ -129,92 +165,208 @@ func (c *Canal) prepareDumper() error {
	return nil
}

func (c *Canal) Start() error {
	c.wg.Add(1)
	go c.run()
// Run will first try to dump all data from MySQL master `mysqldump`,
// then sync from the binlog position in the dump data.
// It will run forever until meeting an error or Canal closed.
func (c *Canal) Run() error {
	return c.run()
}

	return nil
// RunFrom will sync from the binlog position directly, ignore mysqldump.
func (c *Canal) RunFrom(pos mysql.Position) error {
	c.master.Update(pos)

	return c.Run()
}

func (c *Canal) StartFromGTID(set mysql.GTIDSet) error {
	c.master.UpdateGTIDSet(set)

	return c.Run()
}

// Dump all data from MySQL master `mysqldump`, ignore sync binlog.
func (c *Canal) Dump() error {
	if c.dumped {
		return errors.New("the method Dump can't be called twice")
	}
	c.dumped = true
	defer close(c.dumpDoneCh)
	return c.dump()
}

func (c *Canal) run() error {
	defer c.wg.Done()
	defer func() {
		c.cancel()
	}()

	if err := c.tryDump(); err != nil {
		log.Errorf("canal dump mysql err: %v", err)
		return errors.Trace(err)
	c.master.UpdateTimestamp(uint32(time.Now().Unix()))

	if !c.dumped {
		c.dumped = true

		err := c.tryDump()
		close(c.dumpDoneCh)

		if err != nil {
			log.Errorf("canal dump mysql err: %v", err)
			return errors.Trace(err)
		}
	}

	close(c.dumpDoneCh)

	if err := c.startSyncBinlog(); err != nil {
		if !c.isClosed() {
			log.Errorf("canal start sync binlog err: %v", err)
		}
	if err := c.runSyncBinlog(); err != nil {
		log.Errorf("canal start sync binlog err: %v", err)
		return errors.Trace(err)
	}

	return nil
}

func (c *Canal) isClosed() bool {
	return c.closed.Get()
}

func (c *Canal) Close() {
	log.Infof("close canal")
	log.Infof("closing canal")

	c.m.Lock()
	defer c.m.Unlock()

	if c.isClosed() {
		return
	}

	c.closed.Set(true)

	close(c.quit)

	c.cancel()
	c.connLock.Lock()
	c.conn.Close()
	c.conn = nil
	c.connLock.Unlock()
	c.syncer.Close()

	if c.syncer != nil {
		c.syncer.Close()
		c.syncer = nil
	}

	c.master.Close()

	c.wg.Wait()
	c.eventHandler.OnPosSynced(c.master.Position(), true)
}

func (c *Canal) WaitDumpDone() <-chan struct{} {
	return c.dumpDoneCh
}

func (c *Canal) Ctx() context.Context {
	return c.ctx
}

func (c *Canal) checkTableMatch(key string) bool {
	// no filter, return true
	if c.tableMatchCache == nil {
		return true
	}

	c.tableLock.RLock()
	rst, ok := c.tableMatchCache[key]
	c.tableLock.RUnlock()
	if ok {
		// cache hit
		return rst
	}
	matchFlag := false
	// check include
	if c.includeTableRegex != nil {
		for _, reg := range c.includeTableRegex {
			if reg.MatchString(key) {
				matchFlag = true
				break
			}
		}
	}
	// check exclude
	if matchFlag && c.excludeTableRegex != nil {
		for _, reg := range c.excludeTableRegex {
			if reg.MatchString(key) {
				matchFlag = false
				break
			}
		}
	}
	c.tableLock.Lock()
	c.tableMatchCache[key] = matchFlag
	c.tableLock.Unlock()
	return matchFlag
}

func (c *Canal) GetTable(db string, table string) (*schema.Table, error) {
	key := fmt.Sprintf("%s.%s", db, table)
	c.tableLock.Lock()
	// if table is excluded, return error and skip parsing event or dump
	if !c.checkTableMatch(key) {
		return nil, ErrExcludedTable
	}
	c.tableLock.RLock()
	t, ok := c.tables[key]
	c.tableLock.Unlock()
	c.tableLock.RUnlock()

	if ok {
		return t, nil
	}

	if c.cfg.DiscardNoMetaRowEvent {
		c.tableLock.RLock()
		lastTime, ok := c.errorTablesGetTime[key]
		c.tableLock.RUnlock()
		if ok && time.Now().Sub(lastTime) < UnknownTableRetryPeriod {
			return nil, schema.ErrMissingTableMeta
		}
	}

	t, err := schema.NewTable(c, db, table)
	if err != nil {
		return nil, errors.Trace(err)
		// check table not exists
		if ok, err1 := schema.IsTableExist(c, db, table); err1 == nil && !ok {
			return nil, schema.ErrTableNotExist
		}
		// work around : RDS HAHeartBeat
		// ref : https://github.com/alibaba/canal/blob/master/parse/src/main/java/com/alibaba/otter/canal/parse/inbound/mysql/dbsync/LogEventConvert.java#L385
		// issue : https://github.com/alibaba/canal/issues/222
		// This is a common error in RDS that canal can't get HAHealthCheckSchema's meta, so we mock a table meta.
		// If canal just skip and log error, as RDS HA heartbeat interval is very short, so too many HAHeartBeat errors will be logged.
		if key == schema.HAHealthCheckSchema {
			// mock ha_health_check meta
			ta := &schema.Table{
				Schema:  db,
				Name:    table,
				Columns: make([]schema.TableColumn, 0, 2),
				Indexes: make([]*schema.Index, 0),
			}
			ta.AddColumn("id", "bigint(20)", "", "")
			ta.AddColumn("type", "char(1)", "", "")
			c.tableLock.Lock()
			c.tables[key] = ta
			c.tableLock.Unlock()
			return ta, nil
		}
		// if DiscardNoMetaRowEvent is true, we just log this error
		if c.cfg.DiscardNoMetaRowEvent {
			c.tableLock.Lock()
			c.errorTablesGetTime[key] = time.Now()
			c.tableLock.Unlock()
			// log error and return ErrMissingTableMeta
			log.Errorf("canal get table meta err: %v", errors.Trace(err))
			return nil, schema.ErrMissingTableMeta
		}
		return nil, err
	}

	c.tableLock.Lock()
	c.tables[key] = t
	if c.cfg.DiscardNoMetaRowEvent {
		// if get table info success, delete this key from errorTablesGetTime
		delete(c.errorTablesGetTime, key)
	}
	c.tableLock.Unlock()

	return t, nil
}

// ClearTableCache clear table cache
func (c *Canal) ClearTableCache(db []byte, table []byte) {
	key := fmt.Sprintf("%s.%s", db, table)
	c.tableLock.Lock()
	delete(c.tables, key)
	if c.cfg.DiscardNoMetaRowEvent {
		delete(c.errorTablesGetTime, key)
	}
	c.tableLock.Unlock()
}

// Check MySQL binlog row image, must be in FULL, MINIMAL, NOBLOB
func (c *Canal) CheckBinlogRowImage(image string) error {
	// need to check MySQL binlog row image? full, minimal or noblob?
@@ -246,23 +398,34 @@ func (c *Canal) checkBinlogRowFormat() error {
}

func (c *Canal) prepareSyncer() error {
	seps := strings.Split(c.cfg.Addr, ":")
	if len(seps) != 2 {
		return errors.Errorf("invalid mysql addr format %s, must host:port", c.cfg.Addr)
	}

	port, err := strconv.ParseUint(seps[1], 10, 16)
	if err != nil {
		return errors.Trace(err)
	}

	cfg := replication.BinlogSyncerConfig{
		ServerID: c.cfg.ServerID,
		Flavor:   c.cfg.Flavor,
		Host:     seps[0],
		Port:     uint16(port),
		User:     c.cfg.User,
		Password: c.cfg.Password,
		ServerID:        c.cfg.ServerID,
		Flavor:          c.cfg.Flavor,
		User:            c.cfg.User,
		Password:        c.cfg.Password,
		Charset:         c.cfg.Charset,
		HeartbeatPeriod: c.cfg.HeartbeatPeriod,
		ReadTimeout:     c.cfg.ReadTimeout,
		UseDecimal:      c.cfg.UseDecimal,
		ParseTime:       c.cfg.ParseTime,
		SemiSyncEnabled: c.cfg.SemiSyncEnabled,
	}

	if strings.Contains(c.cfg.Addr, "/") {
		cfg.Host = c.cfg.Addr
	} else {
		seps := strings.Split(c.cfg.Addr, ":")
		if len(seps) != 2 {
			return errors.Errorf("invalid mysql addr format %s, must host:port", c.cfg.Addr)
		}

		port, err := strconv.ParseUint(seps[1], 10, 16)
		if err != nil {
			return errors.Trace(err)
		}

		cfg.Host = seps[0]
		cfg.Port = uint16(port)
	}

	c.syncer = replication.NewBinlogSyncer(cfg)
@@ -270,10 +433,6 @@ func (c *Canal) prepareSyncer() error {
	return nil
}

func (c *Canal) masterInfoPath() string {
	return path.Join(c.cfg.DataDir, "master.info")
}

// Execute a SQL
func (c *Canal) Execute(cmd string, args ...interface{}) (rr *mysql.Result, err error) {
	c.connLock.Lock()
@@ -303,5 +462,13 @@ func (c *Canal) Execute(cmd string, args ...interface{}) (rr *mysql.Result, err
}

func (c *Canal) SyncedPosition() mysql.Position {
	return c.master.Pos()
	return c.master.Position()
}

func (c *Canal) SyncedTimestamp() uint32 {
	return c.master.timestamp
}

func (c *Canal) SyncedGTIDSet() mysql.GTIDSet {
	return c.master.GTIDSet()
}

169	vendor/github.com/siddontang/go-mysql/canal/canal_test.go (generated, vendored, Normal file → Executable file)
@@ -1,13 +1,15 @@
package canal

import (
	"bytes"
	"flag"
	"fmt"
	"os"
	"testing"
	"time"

	"github.com/ngaut/log"
	"github.com/juju/errors"
	. "github.com/pingcap/check"
	"github.com/siddontang/go-log/log"
	"github.com/siddontang/go-mysql/mysql"
)

@@ -27,19 +29,28 @@ func (s *canalTestSuite) SetUpSuite(c *C) {
	cfg := NewDefaultConfig()
	cfg.Addr = fmt.Sprintf("%s:3306", *testHost)
	cfg.User = "root"
	cfg.HeartbeatPeriod = 200 * time.Millisecond
	cfg.ReadTimeout = 300 * time.Millisecond
	cfg.Dump.ExecutionPath = "mysqldump"
	cfg.Dump.TableDB = "test"
	cfg.Dump.Tables = []string{"canal_test"}
	cfg.Dump.Where = "id>0"

	os.RemoveAll(cfg.DataDir)
	// include & exclude config
	cfg.IncludeTableRegex = make([]string, 1)
	cfg.IncludeTableRegex[0] = ".*\\.canal_test"
	cfg.ExcludeTableRegex = make([]string, 2)
	cfg.ExcludeTableRegex[0] = "mysql\\..*"
	cfg.ExcludeTableRegex[1] = ".*\\..*_inner"

	var err error
	s.c, err = NewCanal(cfg)
	c.Assert(err, IsNil)

	s.execute(c, "DROP TABLE IF EXISTS test.canal_test")
	sql := `
        CREATE TABLE IF NOT EXISTS test.canal_test (
            id int AUTO_INCREMENT,
            id int AUTO_INCREMENT,
            content blob DEFAULT NULL,
            name varchar(100),
            PRIMARY KEY(id)
            )ENGINE=innodb;
@@ -48,16 +59,22 @@ func (s *canalTestSuite) SetUpSuite(c *C) {
	s.execute(c, sql)

	s.execute(c, "DELETE FROM test.canal_test")
	s.execute(c, "INSERT INTO test.canal_test (name) VALUES (?), (?), (?)", "a", "b", "c")
	s.execute(c, "INSERT INTO test.canal_test (content, name) VALUES (?, ?), (?, ?), (?, ?)", "1", "a", `\0\ndsfasdf`, "b", "", "c")

	s.execute(c, "SET GLOBAL binlog_format = 'ROW'")

	s.c.RegRowsEventHandler(&testRowsEventHandler{})
	err = s.c.Start()
	c.Assert(err, IsNil)
	s.c.SetEventHandler(&testEventHandler{c: c})
	go func() {
		err = s.c.Run()
		c.Assert(err, IsNil)
	}()
}

func (s *canalTestSuite) TearDownSuite(c *C) {
	// To test the heartbeat and read timeout, we need to sleep 1 second without data transmission
	c.Logf("Start testing the heartbeat and read timeout")
	time.Sleep(time.Second)

	if s.c != nil {
		s.c.Close()
		s.c = nil
@@ -70,16 +87,19 @@ func (s *canalTestSuite) execute(c *C, query string, args ...interface{}) *mysql
	return r
}

type testRowsEventHandler struct {
type testEventHandler struct {
	DummyEventHandler

	c *C
}

func (h *testRowsEventHandler) Do(e *RowsEvent) error {
	log.Infof("%s %v\n", e.Action, e.Rows)
func (h *testEventHandler) OnRow(e *RowsEvent) error {
	log.Infof("OnRow %s %v\n", e.Action, e.Rows)
	return nil
}

func (h *testRowsEventHandler) String() string {
	return "testRowsEventHandler"
func (h *testEventHandler) String() string {
	return "testEventHandler"
}

func (s *canalTestSuite) TestCanal(c *C) {
@@ -88,7 +108,126 @@ func (s *canalTestSuite) TestCanal(c *C) {
	for i := 1; i < 10; i++ {
		s.execute(c, "INSERT INTO test.canal_test (name) VALUES (?)", fmt.Sprintf("%d", i))
	}
	s.execute(c, "ALTER TABLE test.canal_test ADD `age` INT(5) NOT NULL AFTER `name`")
	s.execute(c, "INSERT INTO test.canal_test (name,age) VALUES (?,?)", "d", "18")

	err := s.c.CatchMasterPos(100)
	err := s.c.CatchMasterPos(10 * time.Second)
	c.Assert(err, IsNil)
}

func (s *canalTestSuite) TestCanalFilter(c *C) {
	// included
	sch, err := s.c.GetTable("test", "canal_test")
	c.Assert(err, IsNil)
	c.Assert(sch, NotNil)
	_, err = s.c.GetTable("not_exist_db", "canal_test")
	c.Assert(errors.Trace(err), Not(Equals), ErrExcludedTable)
	// excluded
	sch, err = s.c.GetTable("test", "canal_test_inner")
	c.Assert(errors.Cause(err), Equals, ErrExcludedTable)
	c.Assert(sch, IsNil)
	sch, err = s.c.GetTable("mysql", "canal_test")
	c.Assert(errors.Cause(err), Equals, ErrExcludedTable)
	c.Assert(sch, IsNil)
	sch, err = s.c.GetTable("not_exist_db", "not_canal_test")
	c.Assert(errors.Cause(err), Equals, ErrExcludedTable)
	c.Assert(sch, IsNil)
}

func TestCreateTableExp(t *testing.T) {
	cases := []string{
		"CREATE TABLE `mydb.mytable` (`id` int(10)) ENGINE=InnoDB",
		"CREATE TABLE `mytable` (`id` int(10)) ENGINE=InnoDB",
		"CREATE TABLE IF NOT EXISTS `mytable` (`id` int(10)) ENGINE=InnoDB",
		"CREATE TABLE IF NOT EXISTS mytable (`id` int(10)) ENGINE=InnoDB",
	}
	table := []byte("mytable")
	db := []byte("mydb")
	for _, s := range cases {
		m := expCreateTable.FindSubmatch([]byte(s))
		mLen := len(m)
		if m == nil || !bytes.Equal(m[mLen-1], table) || (len(m[mLen-2]) > 0 && !bytes.Equal(m[mLen-2], db)) {
			t.Fatalf("TestCreateTableExp: case %s failed\n", s)
		}
	}
}

func TestAlterTableExp(t *testing.T) {
	cases := []string{
		"ALTER TABLE `mydb`.`mytable` ADD `field2` DATE NULL AFTER `field1`;",
		"ALTER TABLE `mytable` ADD `field2` DATE NULL AFTER `field1`;",
		"ALTER TABLE mydb.mytable ADD `field2` DATE NULL AFTER `field1`;",
		"ALTER TABLE mytable ADD `field2` DATE NULL AFTER `field1`;",
		"ALTER TABLE mydb.mytable ADD field2 DATE NULL AFTER `field1`;",
	}

	table := []byte("mytable")
	db := []byte("mydb")
	for _, s := range cases {
		m := expAlterTable.FindSubmatch([]byte(s))
		mLen := len(m)
		if m == nil || !bytes.Equal(m[mLen-1], table) || (len(m[mLen-2]) > 0 && !bytes.Equal(m[mLen-2], db)) {
			t.Fatalf("TestAlterTableExp: case %s failed\n", s)
		}
	}
}

func TestRenameTableExp(t *testing.T) {
	cases := []string{
		"rename table `mydb`.`mytable` to `mydb`.`mytable1`",
		"rename table `mytable` to `mytable1`",
		"rename table mydb.mytable to mydb.mytable1",
		"rename table mytable to mytable1",

		"rename table `mydb`.`mytable` to `mydb`.`mytable2`, `mydb`.`mytable3` to `mydb`.`mytable1`",
		"rename table `mytable` to `mytable2`, `mytable3` to `mytable1`",
|
||||
"rename table mydb.mytable to mydb.mytable2, mydb.mytable3 to mydb.mytable1",
|
||||
"rename table mytable to mytable2, mytable3 to mytable1",
|
||||
}
|
||||
table := []byte("mytable")
|
||||
db := []byte("mydb")
|
||||
for _, s := range cases {
|
||||
m := expRenameTable.FindSubmatch([]byte(s))
|
||||
mLen := len(m)
|
||||
if m == nil || !bytes.Equal(m[mLen-1], table) || (len(m[mLen-2]) > 0 && !bytes.Equal(m[mLen-2], db)) {
|
||||
t.Fatalf("TestRenameTableExp: case %s failed\n", s)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestDropTableExp(t *testing.T) {
|
||||
cases := []string{
|
||||
"drop table test1",
|
||||
"DROP TABLE test1",
|
||||
"DROP TABLE test1",
|
||||
"DROP table IF EXISTS test.test1",
|
||||
"drop table `test1`",
|
||||
"DROP TABLE `test1`",
|
||||
"DROP table IF EXISTS `test`.`test1`",
|
||||
"DROP TABLE `test1` /* generated by server */",
|
||||
"DROP table if exists test1",
|
||||
"DROP table if exists `test1`",
|
||||
"DROP table if exists test.test1",
|
||||
"DROP table if exists `test`.test1",
|
||||
"DROP table if exists `test`.`test1`",
|
||||
"DROP table if exists test.`test1`",
|
||||
"DROP table if exists test.`test1`",
|
||||
}
|
||||
|
||||
table := []byte("test1")
|
||||
for _, s := range cases {
|
||||
m := expDropTable.FindSubmatch([]byte(s))
|
||||
mLen := len(m)
|
||||
if m == nil {
|
||||
t.Fatalf("TestDropTableExp: case %s failed\n", s)
|
||||
return
|
||||
}
|
||||
if mLen < 4 {
|
||||
t.Fatalf("TestDropTableExp: case %s failed\n", s)
|
||||
return
|
||||
}
|
||||
if !bytes.Equal(m[mLen-1], table) {
|
||||
t.Fatalf("TestDropTableExp: case %s failed\n", s)
|
||||
}
|
||||
}
|
||||
}
|
||||
42 vendor/github.com/siddontang/go-mysql/canal/config.go generated vendored
@@ -7,6 +7,7 @@ import (
 
 	"github.com/BurntSushi/toml"
 	"github.com/juju/errors"
+	"github.com/siddontang/go-mysql/mysql"
 )
 
 type DumpConfig struct {
@@ -23,8 +24,18 @@ type DumpConfig struct {
 	// Ignore table format is db.table
 	IgnoreTables []string `toml:"ignore_tables"`
 
+	// Dump only selected records. Quotes are mandatory
+	Where string `toml:"where"`
+
 	// If true, discard error msg, else, output to stderr
 	DiscardErr bool `toml:"discard_err"`
+
 	// Set true to skip --master-data if we have no privilege to do
 	// 'FLUSH TABLES WITH READ LOCK'
 	SkipMasterData bool `toml:"skip_master_data"`
+
+	// Set to change the default max_allowed_packet size
+	MaxAllowedPacketMB int `toml:"max_allowed_packet_mb"`
 }
 
 type Config struct {
@@ -32,11 +43,30 @@ type Config struct {
 	User     string `toml:"user"`
 	Password string `toml:"password"`
 
-	ServerID uint32 `toml:"server_id"`
-	Flavor   string `toml:"flavor"`
-	DataDir  string `toml:"data_dir"`
+	Charset         string        `toml:"charset"`
+	ServerID        uint32        `toml:"server_id"`
+	Flavor          string        `toml:"flavor"`
+	HeartbeatPeriod time.Duration `toml:"heartbeat_period"`
+	ReadTimeout     time.Duration `toml:"read_timeout"`
+
+	// IncludeTableRegex or ExcludeTableRegex should contain database name
+	// Only a table which matches IncludeTableRegex and dismatches ExcludeTableRegex will be processed
+	// eg, IncludeTableRegex : [".*\\.canal"], ExcludeTableRegex : ["mysql\\..*"]
+	// this will include all database's 'canal' table, except database 'mysql'
+	// Default IncludeTableRegex and ExcludeTableRegex are empty, this will include all tables
+	IncludeTableRegex []string `toml:"include_table_regex"`
+	ExcludeTableRegex []string `toml:"exclude_table_regex"`
+
+	// discard row event without table meta
+	DiscardNoMetaRowEvent bool `toml:"discard_no_meta_row_event"`
 
 	Dump DumpConfig `toml:"dump"`
 
+	UseDecimal bool `toml:"use_decimal"`
+	ParseTime  bool `toml:"parse_time"`
+
 	// SemiSyncEnabled enables semi-sync or not.
 	SemiSyncEnabled bool `toml:"semi_sync_enabled"`
 }
 
 func NewConfigWithFile(name string) (*Config, error) {
@@ -66,14 +96,14 @@ func NewDefaultConfig() *Config {
 	c.User = "root"
 	c.Password = ""
 
-	rand.Seed(time.Now().Unix())
-	c.ServerID = uint32(rand.Intn(1000)) + 1001
+	c.Charset = mysql.DEFAULT_CHARSET
+	c.ServerID = uint32(rand.New(rand.NewSource(time.Now().Unix())).Intn(1000)) + 1001
 
 	c.Flavor = "mysql"
 
-	c.DataDir = "./var"
 	c.Dump.ExecutionPath = "mysqldump"
 	c.Dump.DiscardErr = true
+	c.Dump.SkipMasterData = false
 
 	return c
 }
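The new `Config` fields above are wired to TOML keys via their struct tags. A minimal, hypothetical config fragment exercising the added keys (all values here are illustrative, not taken from this commit):

```toml
# hypothetical canal configuration using the fields added in this diff
addr     = "127.0.0.1:3306"
user     = "root"
charset  = "utf8"
server_id = 1001
flavor    = "mysql"

# only sync tables named "canal" in any database, except database "mysql"
include_table_regex = [".*\\.canal"]
exclude_table_regex = ["mysql\\..*"]

discard_no_meta_row_event = true
use_decimal = true

[dump]
execution_path        = "mysqldump"
where                 = "id > 100"
skip_master_data      = true
max_allowed_packet_mb = 64
```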
128 vendor/github.com/siddontang/go-mysql/canal/dump.go generated vendored
@@ -1,12 +1,16 @@
 package canal
 
 import (
+	"encoding/hex"
+	"fmt"
 	"strconv"
 	"strings"
 	"time"
 
 	"github.com/juju/errors"
-	"github.com/ngaut/log"
-	"github.com/siddontang/go-mysql/dump"
+	"github.com/shopspring/decimal"
+	"github.com/siddontang/go-log/log"
 	"github.com/siddontang/go-mysql/mysql"
 	"github.com/siddontang/go-mysql/schema"
 )
 
@@ -14,6 +18,7 @@ type dumpParseHandler struct {
 	c    *Canal
 	name string
 	pos  uint64
+	gset mysql.GTIDSet
 }
 
 func (h *dumpParseHandler) BinLog(name string, pos uint64) error {
@@ -23,12 +28,18 @@ func (h *dumpParseHandler) BinLog(name string, pos uint64) error {
 }
 
 func (h *dumpParseHandler) Data(db string, table string, values []string) error {
-	if h.c.isClosed() {
-		return errCanalClosed
+	if err := h.c.ctx.Err(); err != nil {
+		return err
 	}
 
 	tableInfo, err := h.c.GetTable(db, table)
 	if err != nil {
+		e := errors.Cause(err)
+		if e == ErrExcludedTable ||
+			e == schema.ErrTableNotExist ||
+			e == schema.ErrMissingTableMeta {
+			return nil
+		}
 		log.Errorf("get %s.%s information err: %v", db, table, err)
 		return errors.Trace(err)
 	}
@@ -38,32 +49,51 @@ func (h *dumpParseHandler) Data(db string, table string, values []string) error
 	for i, v := range values {
 		if v == "NULL" {
 			vs[i] = nil
+		} else if v == "_binary ''" {
+			vs[i] = []byte{}
 		} else if v[0] != '\'' {
 			if tableInfo.Columns[i].Type == schema.TYPE_NUMBER {
 				n, err := strconv.ParseInt(v, 10, 64)
 				if err != nil {
-					log.Errorf("parse row %v at %d error %v, skip", values, i, err)
-					return dump.ErrSkip
+					return fmt.Errorf("parse row %v at %d error %v, int expected", values, i, err)
 				}
 				vs[i] = n
 			} else if tableInfo.Columns[i].Type == schema.TYPE_FLOAT {
 				f, err := strconv.ParseFloat(v, 64)
 				if err != nil {
-					log.Errorf("parse row %v at %d error %v, skip", values, i, err)
-					return dump.ErrSkip
+					return fmt.Errorf("parse row %v at %d error %v, float expected", values, i, err)
 				}
 				vs[i] = f
+			} else if tableInfo.Columns[i].Type == schema.TYPE_DECIMAL {
+				if h.c.cfg.UseDecimal {
+					d, err := decimal.NewFromString(v)
+					if err != nil {
+						return fmt.Errorf("parse row %v at %d error %v, decimal expected", values, i, err)
+					}
+					vs[i] = d
+				} else {
+					f, err := strconv.ParseFloat(v, 64)
+					if err != nil {
+						return fmt.Errorf("parse row %v at %d error %v, float expected", values, i, err)
+					}
+					vs[i] = f
+				}
+			} else if strings.HasPrefix(v, "0x") {
+				buf, err := hex.DecodeString(v[2:])
+				if err != nil {
+					return fmt.Errorf("parse row %v at %d error %v, hex literal expected", values, i, err)
+				}
+				vs[i] = string(buf)
 			} else {
-				log.Errorf("parse row %v error, invalid type at %d, skip", values, i)
-				return dump.ErrSkip
+				return fmt.Errorf("parse row %v error, invalid type at %d", values, i)
 			}
 		} else {
 			vs[i] = v[1 : len(v)-1]
 		}
 	}
 
-	events := newRowsEvent(tableInfo, InsertAction, [][]interface{}{vs})
-	return h.c.travelRowsEventHandler(events)
+	events := newRowsEvent(tableInfo, InsertAction, [][]interface{}{vs}, nil)
+	return h.c.eventHandler.OnRow(events)
 }
 
 func (c *Canal) AddDumpDatabases(dbs ...string) {
@@ -90,10 +120,64 @@ func (c *Canal) AddDumpIgnoreTables(db string, tables ...string) {
 	c.dumper.AddIgnoreTables(db, tables...)
 }
 
+func (c *Canal) dump() error {
+	if c.dumper == nil {
+		return errors.New("mysqldump does not exist")
+	}
+
+	c.master.UpdateTimestamp(uint32(time.Now().Unix()))
+
+	h := &dumpParseHandler{c: c}
+	// If users call StartFromGTID with empty position to start dumping with gtid,
+	// we record the current gtid position before dump starts.
+	//
+	// See tryDump() to see when dump is skipped.
+	if c.master.GTIDSet() != nil {
+		gset, err := c.GetMasterGTIDSet()
+		if err != nil {
+			return errors.Trace(err)
+		}
+		h.gset = gset
+	}
+
+	if c.cfg.Dump.SkipMasterData {
+		pos, err := c.GetMasterPos()
+		if err != nil {
+			return errors.Trace(err)
+		}
+		log.Infof("skip master data, get current binlog position %v", pos)
+		h.name = pos.Name
+		h.pos = uint64(pos.Pos)
+	}
+
+	start := time.Now()
+	log.Info("try dump MySQL and parse")
+	if err := c.dumper.DumpAndParse(h); err != nil {
+		return errors.Trace(err)
+	}
+
+	pos := mysql.Position{Name: h.name, Pos: uint32(h.pos)}
+	c.master.Update(pos)
+	if err := c.eventHandler.OnPosSynced(pos, true); err != nil {
+		return errors.Trace(err)
+	}
+	var startPos fmt.Stringer = pos
+	if h.gset != nil {
+		c.master.UpdateGTIDSet(h.gset)
+		startPos = h.gset
+	}
+	log.Infof("dump MySQL and parse OK, use %0.2f seconds, start binlog replication at %s",
+		time.Now().Sub(start).Seconds(), startPos)
+	return nil
+}
+
 func (c *Canal) tryDump() error {
-	if len(c.master.Name) > 0 && c.master.Position > 0 {
+	pos := c.master.Position()
+	gset := c.master.GTIDSet()
+	if (len(pos.Name) > 0 && pos.Pos > 0) ||
+		(gset != nil && gset.String() != "") {
 		// we will sync with binlog name and position
-		log.Infof("skip dump, use last binlog replication pos (%s, %d)", c.master.Name, c.master.Position)
+		log.Infof("skip dump, use last binlog replication pos %s or GTID set %s", pos, gset)
 		return nil
 	}
 
@@ -102,19 +186,5 @@ func (c *Canal) tryDump() error {
 		return nil
 	}
 
-	h := &dumpParseHandler{c: c}
-
-	start := time.Now()
-	log.Info("try dump MySQL and parse")
-	if err := c.dumper.DumpAndParse(h); err != nil {
-		return errors.Trace(err)
-	}
-
-	log.Infof("dump MySQL and parse OK, use %0.2f seconds, start binlog replication at (%s, %d)",
-		time.Now().Sub(start).Seconds(), h.name, h.pos)
-
-	c.master.Update(h.name, uint32(h.pos))
-	c.master.Save(true)
-
-	return nil
+	return c.dump()
 }
56 vendor/github.com/siddontang/go-mysql/canal/handler.go generated vendored
@@ -1,41 +1,41 @@
 package canal
 
 import (
-	"github.com/juju/errors"
-	"github.com/ngaut/log"
 	"github.com/siddontang/go-mysql/mysql"
 	"github.com/siddontang/go-mysql/replication"
 )
 
-var (
-	ErrHandleInterrupted = errors.New("do handler error, interrupted")
-)
-
-type RowsEventHandler interface {
-	// Handle RowsEvent, if return ErrHandleInterrupted, canal will
-	// stop the sync
-	Do(e *RowsEvent) error
+type EventHandler interface {
+	OnRotate(roateEvent *replication.RotateEvent) error
+	// OnTableChanged is called when the table is created, altered, renamed or dropped.
+	// You need to clear the associated data like cache with the table.
+	// It will be called before OnDDL.
+	OnTableChanged(schema string, table string) error
+	OnDDL(nextPos mysql.Position, queryEvent *replication.QueryEvent) error
+	OnRow(e *RowsEvent) error
+	OnXID(nextPos mysql.Position) error
+	OnGTID(gtid mysql.GTIDSet) error
+	// OnPosSynced Use your own way to sync position. When force is true, sync position immediately.
+	OnPosSynced(pos mysql.Position, force bool) error
+	String() string
 }
 
-func (c *Canal) RegRowsEventHandler(h RowsEventHandler) {
-	c.rsLock.Lock()
-	c.rsHandlers = append(c.rsHandlers, h)
-	c.rsLock.Unlock()
+type DummyEventHandler struct {
 }
 
-func (c *Canal) travelRowsEventHandler(e *RowsEvent) error {
-	c.rsLock.Lock()
-	defer c.rsLock.Unlock()
-
-	var err error
-	for _, h := range c.rsHandlers {
-		if err = h.Do(e); err != nil && !mysql.ErrorEqual(err, ErrHandleInterrupted) {
-			log.Errorf("handle %v err: %v", h, err)
-		} else if mysql.ErrorEqual(err, ErrHandleInterrupted) {
-			log.Errorf("handle %v err, interrupted", h)
-			return ErrHandleInterrupted
-		}
-
-	}
+func (h *DummyEventHandler) OnRotate(*replication.RotateEvent) error          { return nil }
+func (h *DummyEventHandler) OnTableChanged(schema string, table string) error { return nil }
+func (h *DummyEventHandler) OnDDL(nextPos mysql.Position, queryEvent *replication.QueryEvent) error {
+	return nil
+}
+func (h *DummyEventHandler) OnRow(*RowsEvent) error                 { return nil }
+func (h *DummyEventHandler) OnXID(mysql.Position) error             { return nil }
+func (h *DummyEventHandler) OnGTID(mysql.GTIDSet) error             { return nil }
+func (h *DummyEventHandler) OnPosSynced(mysql.Position, bool) error { return nil }
+func (h *DummyEventHandler) String() string                         { return "DummyEventHandler" }
+
+// `SetEventHandler` registers the sync handler, you must register your
+// own handler before starting Canal.
+func (c *Canal) SetEventHandler(h EventHandler) {
+	c.eventHandler = h
 }
113 vendor/github.com/siddontang/go-mysql/canal/master.go generated vendored
@@ -1,89 +1,66 @@
 package canal
 
 import (
-	"bytes"
-	"os"
 	"sync"
 	"time"
 
-	"github.com/BurntSushi/toml"
-	"github.com/juju/errors"
-	"github.com/ngaut/log"
+	"github.com/siddontang/go-log/log"
 	"github.com/siddontang/go-mysql/mysql"
-	"github.com/siddontang/go/ioutil2"
 )
 
 type masterInfo struct {
-	Addr     string `toml:"addr"`
-	Name     string `toml:"bin_name"`
-	Position uint32 `toml:"bin_pos"`
+	sync.RWMutex
 
-	name string
+	pos mysql.Position
 
-	l sync.Mutex
+	gset mysql.GTIDSet
 
-	lastSaveTime time.Time
+	timestamp uint32
 }
 
-func loadMasterInfo(name string) (*masterInfo, error) {
-	var m masterInfo
+func (m *masterInfo) Update(pos mysql.Position) {
+	log.Debugf("update master position %s", pos)
 
-	m.name = name
-
-	f, err := os.Open(name)
-	if err != nil && !os.IsNotExist(errors.Cause(err)) {
-		return nil, errors.Trace(err)
-	} else if os.IsNotExist(errors.Cause(err)) {
-		return &m, nil
-	}
-	defer f.Close()
-
-	_, err = toml.DecodeReader(f, &m)
-
-	return &m, err
+	m.Lock()
+	m.pos = pos
+	m.Unlock()
 }
 
-func (m *masterInfo) Save(force bool) error {
-	m.l.Lock()
-	defer m.l.Unlock()
+func (m *masterInfo) UpdateTimestamp(ts uint32) {
+	log.Debugf("update master timestamp %s", ts)
 
-	n := time.Now()
-	if !force && n.Sub(m.lastSaveTime) < time.Second {
+	m.Lock()
+	m.timestamp = ts
+	m.Unlock()
+}
+
+func (m *masterInfo) UpdateGTIDSet(gset mysql.GTIDSet) {
+	log.Debugf("update master gtid set %s", gset)
+
+	m.Lock()
+	m.gset = gset
+	m.Unlock()
+}
+
+func (m *masterInfo) Position() mysql.Position {
+	m.RLock()
+	defer m.RUnlock()
+
+	return m.pos
+}
+
+func (m *masterInfo) Timestamp() uint32 {
+	m.RLock()
+	defer m.RUnlock()
+
+	return m.timestamp
+}
+
+func (m *masterInfo) GTIDSet() mysql.GTIDSet {
+	m.RLock()
+	defer m.RUnlock()
+
+	if m.gset == nil {
+		return nil
 	}
 
-	var buf bytes.Buffer
-	e := toml.NewEncoder(&buf)
-
-	e.Encode(m)
-
-	var err error
-	if err = ioutil2.WriteFileAtomic(m.name, buf.Bytes(), 0644); err != nil {
-		log.Errorf("canal save master info to file %s err %v", m.name, err)
-	}
-
-	m.lastSaveTime = n
-
-	return errors.Trace(err)
-}
-
-func (m *masterInfo) Update(name string, pos uint32) {
-	m.l.Lock()
-	m.Name = name
-	m.Position = pos
-	m.l.Unlock()
-}
-
-func (m *masterInfo) Pos() mysql.Position {
-	var pos mysql.Position
-	m.l.Lock()
-	pos.Name = m.Name
-	pos.Pos = m.Position
-	m.l.Unlock()
-
-	return pos
-}
-
-func (m *masterInfo) Close() {
-	m.Save(true)
+	return m.gset.Clone()
 }
48 vendor/github.com/siddontang/go-mysql/canal/rows.go generated vendored
@@ -3,16 +3,18 @@ package canal
 import (
 	"fmt"
 
-	"github.com/juju/errors"
+	"github.com/siddontang/go-mysql/replication"
 	"github.com/siddontang/go-mysql/schema"
 )
 
+// The action name for sync.
 const (
 	UpdateAction = "update"
 	InsertAction = "insert"
 	DeleteAction = "delete"
 )
 
+// RowsEvent is the event for row replication.
 type RowsEvent struct {
 	Table  *schema.Table
 	Action string
@@ -22,35 +24,49 @@ type RowsEvent struct {
 	// Two rows for one event, format is [before update row, after update row]
 	// for update v0, only one row for a event, and we don't support this version.
 	Rows [][]interface{}
+	// Header can be used to inspect the event
+	Header *replication.EventHeader
 }
 
-func newRowsEvent(table *schema.Table, action string, rows [][]interface{}) *RowsEvent {
+func newRowsEvent(table *schema.Table, action string, rows [][]interface{}, header *replication.EventHeader) *RowsEvent {
 	e := new(RowsEvent)
 
 	e.Table = table
 	e.Action = action
 	e.Rows = rows
+	e.Header = header
+
+	e.handleUnsigned()
 
 	return e
 }
 
-// Get primary keys in one row for a table, a table may use multi fields as the PK
-func GetPKValues(table *schema.Table, row []interface{}) ([]interface{}, error) {
-	indexes := table.PKColumns
-	if len(indexes) == 0 {
-		return nil, errors.Errorf("table %s has no PK", table)
-	} else if len(table.Columns) != len(row) {
-		return nil, errors.Errorf("table %s has %d columns, but row data %v len is %d", table,
-			len(table.Columns), row, len(row))
+func (r *RowsEvent) handleUnsigned() {
+	// Handle Unsigned Columns here, for binlog replication, we can't know the integer is unsigned or not,
+	// so we use int type but this may cause overflow outside sometimes, so we must convert to the really .
+	// unsigned type
+	if len(r.Table.UnsignedColumns) == 0 {
+		return
 	}
 
-	values := make([]interface{}, 0, len(indexes))
-
-	for _, index := range indexes {
-		values = append(values, row[index])
+	for i := 0; i < len(r.Rows); i++ {
+		for _, index := range r.Table.UnsignedColumns {
+			switch t := r.Rows[i][index].(type) {
+			case int8:
+				r.Rows[i][index] = uint8(t)
+			case int16:
+				r.Rows[i][index] = uint16(t)
+			case int32:
+				r.Rows[i][index] = uint32(t)
+			case int64:
+				r.Rows[i][index] = uint64(t)
+			case int:
+				r.Rows[i][index] = uint(t)
+			default:
+				// nothing to do
+			}
+		}
 	}
-
-	return values, nil
 }
 
 // String implements fmt.Stringer interface.
226 vendor/github.com/siddontang/go-mysql/canal/sync.go generated vendored
@@ -1,49 +1,68 @@
 package canal
 
 import (
 	"fmt"
+	"regexp"
 	"time"
 
-	"golang.org/x/net/context"
-
 	"github.com/juju/errors"
-	"github.com/ngaut/log"
+	"github.com/satori/go.uuid"
+	"github.com/siddontang/go-log/log"
 	"github.com/siddontang/go-mysql/mysql"
 	"github.com/siddontang/go-mysql/replication"
+	"github.com/siddontang/go-mysql/schema"
 )
 
-func (c *Canal) startSyncBinlog() error {
-	pos := mysql.Position{c.master.Name, c.master.Position}
+var (
+	expCreateTable   = regexp.MustCompile("(?i)^CREATE\\sTABLE(\\sIF\\sNOT\\sEXISTS)?\\s`{0,1}(.*?)`{0,1}\\.{0,1}`{0,1}([^`\\.]+?)`{0,1}\\s.*")
+	expAlterTable    = regexp.MustCompile("(?i)^ALTER\\sTABLE\\s.*?`{0,1}(.*?)`{0,1}\\.{0,1}`{0,1}([^`\\.]+?)`{0,1}\\s.*")
+	expRenameTable   = regexp.MustCompile("(?i)^RENAME\\sTABLE\\s.*?`{0,1}(.*?)`{0,1}\\.{0,1}`{0,1}([^`\\.]+?)`{0,1}\\s{1,}TO\\s.*?")
+	expDropTable     = regexp.MustCompile("(?i)^DROP\\sTABLE(\\sIF\\sEXISTS){0,1}\\s`{0,1}(.*?)`{0,1}\\.{0,1}`{0,1}([^`\\.]+?)`{0,1}(?:$|\\s)")
+	expTruncateTable = regexp.MustCompile("(?i)^TRUNCATE\\s+(?:TABLE\\s+)?(?:`?([^`\\s]+)`?\\.`?)?([^`\\s]+)`?")
+)
 
-	log.Infof("start sync binlog at %v", pos)
+func (c *Canal) startSyncer() (*replication.BinlogStreamer, error) {
+	gset := c.master.GTIDSet()
+	if gset == nil {
+		pos := c.master.Position()
+		s, err := c.syncer.StartSync(pos)
+		if err != nil {
+			return nil, errors.Errorf("start sync replication at binlog %v error %v", pos, err)
+		}
+		log.Infof("start sync binlog at binlog file %v", pos)
+		return s, nil
+	} else {
+		s, err := c.syncer.StartSyncGTID(gset)
+		if err != nil {
+			return nil, errors.Errorf("start sync replication at GTID set %v error %v", gset, err)
+		}
+		log.Infof("start sync binlog at GTID set %v", gset)
+		return s, nil
+	}
+}
 
-	s, err := c.syncer.StartSync(pos)
+func (c *Canal) runSyncBinlog() error {
+	s, err := c.startSyncer()
 	if err != nil {
-		return errors.Errorf("start sync replication at %v error %v", pos, err)
+		return err
 	}
 
-	timeout := time.Second
-	forceSavePos := false
+	savePos := false
+	force := false
 	for {
-		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
-		ev, err := s.GetEvent(ctx)
-		cancel()
-
-		if err == context.DeadlineExceeded {
-			timeout = 2 * timeout
-			continue
-		}
+		ev, err := s.GetEvent(c.ctx)
 
 		if err != nil {
 			return errors.Trace(err)
 		}
+		savePos = false
+		force = false
+		pos := c.master.Position()
 
-		timeout = time.Second
-
+		curPos := pos.Pos
 		//next binlog pos
 		pos.Pos = ev.Header.LogPos
 
-		forceSavePos = false
-
 		// We only save position with RotateEvent and XIDEvent.
 		// For RowsEvent, we can't save the position until meeting XIDEvent
 		// which tells the whole transaction is over.
@@ -52,24 +71,105 @@ func (c *Canal) startSyncBinlog() error {
 		case *replication.RotateEvent:
 			pos.Name = string(e.NextLogName)
 			pos.Pos = uint32(e.Position)
-			// r.ev <- pos
-			forceSavePos = true
-			log.Infof("rotate binlog to %v", pos)
+			log.Infof("rotate binlog to %s", pos)
+			savePos = true
+			force = true
+			if err = c.eventHandler.OnRotate(e); err != nil {
+				return errors.Trace(err)
+			}
 		case *replication.RowsEvent:
 			// we only focus row based event
-			if err = c.handleRowsEvent(ev); err != nil {
-				log.Errorf("handle rows event error %v", err)
-				return errors.Trace(err)
+			err = c.handleRowsEvent(ev)
+			if err != nil {
+				e := errors.Cause(err)
+				// if error is not ErrExcludedTable or ErrTableNotExist or ErrMissingTableMeta, stop canal
+				if e != ErrExcludedTable &&
+					e != schema.ErrTableNotExist &&
+					e != schema.ErrMissingTableMeta {
+					log.Errorf("handle rows event at (%s, %d) error %v", pos.Name, curPos, err)
+					return errors.Trace(err)
+				}
 			}
 			continue
 		case *replication.XIDEvent:
+			if e.GSet != nil {
+				c.master.UpdateGTIDSet(e.GSet)
+			}
+			savePos = true
 			// try to save the position later
+			if err := c.eventHandler.OnXID(pos); err != nil {
+				return errors.Trace(err)
+			}
+		case *replication.MariadbGTIDEvent:
+			// try to save the GTID later
+			gtid, err := mysql.ParseMariadbGTIDSet(e.GTID.String())
+			if err != nil {
+				return errors.Trace(err)
+			}
+			if err := c.eventHandler.OnGTID(gtid); err != nil {
+				return errors.Trace(err)
+			}
+		case *replication.GTIDEvent:
+			u, _ := uuid.FromBytes(e.SID)
+			gtid, err := mysql.ParseMysqlGTIDSet(fmt.Sprintf("%s:%d", u.String(), e.GNO))
+			if err != nil {
+				return errors.Trace(err)
+			}
+			if err := c.eventHandler.OnGTID(gtid); err != nil {
+				return errors.Trace(err)
+			}
+		case *replication.QueryEvent:
+			if e.GSet != nil {
+				c.master.UpdateGTIDSet(e.GSet)
+			}
+			var (
+				mb    [][]byte
+				db    []byte
+				table []byte
+			)
+			regexps := []regexp.Regexp{*expCreateTable, *expAlterTable, *expRenameTable, *expDropTable, *expTruncateTable}
+			for _, reg := range regexps {
+				mb = reg.FindSubmatch(e.Query)
+				if len(mb) != 0 {
+					break
+				}
+			}
+			mbLen := len(mb)
+			if mbLen == 0 {
+				continue
+			}
+
+			// the first last is table name, the second last is database name(if exists)
+			if len(mb[mbLen-2]) == 0 {
+				db = e.Schema
+			} else {
+				db = mb[mbLen-2]
+			}
+			table = mb[mbLen-1]
+
+			savePos = true
+			force = true
+			c.ClearTableCache(db, table)
+			log.Infof("table structure changed, clear table cache: %s.%s\n", db, table)
+			if err = c.eventHandler.OnTableChanged(string(db), string(table)); err != nil && errors.Cause(err) != schema.ErrTableNotExist {
+				return errors.Trace(err)
+			}
+
+			// Now we only handle Table Changed DDL, maybe we will support more later.
+			if err = c.eventHandler.OnDDL(pos, e); err != nil {
+				return errors.Trace(err)
+			}
 		default:
 			continue
 		}
 
-		c.master.Update(pos.Name, pos.Pos)
-		c.master.Save(forceSavePos)
+		if savePos {
+			c.master.Update(pos)
+			c.master.UpdateTimestamp(ev.Header.Timestamp)
+			if err := c.eventHandler.OnPosSynced(pos, force); err != nil {
+				return errors.Trace(err)
+			}
+		}
 	}
 
 	return nil
@@ -84,7 +184,7 @@ func (c *Canal) handleRowsEvent(e *replication.BinlogEvent) error {
 
 	t, err := c.GetTable(schema, table)
 	if err != nil {
-		return errors.Trace(err)
+		return err
 	}
 	var action string
 	switch e.Header.EventType {
@@ -97,25 +197,31 @@ func (c *Canal) handleRowsEvent(e *replication.BinlogEvent) error {
 	default:
 		return errors.Errorf("%s not supported now", e.Header.EventType)
 	}
-	events := newRowsEvent(t, action, ev.Rows)
-	return c.travelRowsEventHandler(events)
+	events := newRowsEvent(t, action, ev.Rows, e.Header)
+	return c.eventHandler.OnRow(events)
 }
 
-func (c *Canal) WaitUntilPos(pos mysql.Position, timeout int) error {
-	if timeout <= 0 {
-		timeout = 60
-	}
+func (c *Canal) FlushBinlog() error {
+	_, err := c.Execute("FLUSH BINARY LOGS")
+	return errors.Trace(err)
+}
 
-	timer := time.NewTimer(time.Duration(timeout) * time.Second)
+func (c *Canal) WaitUntilPos(pos mysql.Position, timeout time.Duration) error {
+	timer := time.NewTimer(timeout)
 	for {
 		select {
 		case <-timer.C:
-			return errors.Errorf("wait position %v err", pos)
+			return errors.Errorf("wait position %v too long > %s", pos, timeout)
 		default:
-			curpos := c.master.Pos()
-			if curpos.Compare(pos) >= 0 {
+			err := c.FlushBinlog()
+			if err != nil {
+				return errors.Trace(err)
+			}
+			curPos := c.master.Position()
+			if curPos.Compare(pos) >= 0 {
 				return nil
 			} else {
+				log.Debugf("master pos is %v, wait catching %v", curPos, pos)
 				time.Sleep(100 * time.Millisecond)
 			}
 		}
@@ -124,14 +230,46 @@ func (c *Canal) WaitUntilPos(pos mysql.Position, timeout time.Duration) error {
 	return nil
 }
 
-func (c *Canal) CatchMasterPos(timeout int) error {
+func (c *Canal) GetMasterPos() (mysql.Position, error) {
 	rr, err := c.Execute("SHOW MASTER STATUS")
 	if err != nil {
-		return errors.Trace(err)
+		return mysql.Position{}, errors.Trace(err)
 	}
 
 	name, _ := rr.GetString(0, 0)
 	pos, _ := rr.GetInt(0, 1)
 
-	return c.WaitUntilPos(mysql.Position{name, uint32(pos)}, timeout)
+	return mysql.Position{Name: name, Pos: uint32(pos)}, nil
+}
+
+func (c *Canal) GetMasterGTIDSet() (mysql.GTIDSet, error) {
+	query := ""
+	switch c.cfg.Flavor {
+	case mysql.MariaDBFlavor:
+		query = "SELECT @@GLOBAL.gtid_current_pos"
+	default:
+		query = "SELECT @@GLOBAL.GTID_EXECUTED"
+	}
+	rr, err := c.Execute(query)
+	if err != nil {
+		return nil, errors.Trace(err)
+	}
+	gx, err := rr.GetString(0, 0)
+	if err != nil {
+		return nil, errors.Trace(err)
+	}
+	gset, err := mysql.ParseGTIDSet(c.cfg.Flavor, gx)
+	if err != nil {
+		return nil, errors.Trace(err)
+	}
+	return gset, nil
+}
+
+func (c *Canal) CatchMasterPos(timeout time.Duration) error {
+	pos, err := c.GetMasterPos()
+	if err != nil {
+		return errors.Trace(err)
+	}
+
+	return c.WaitUntilPos(pos, timeout)
 }
|
vendor/github.com/siddontang/go-mysql/clear_vendor.sh (6 lines changed; generated, vendored, executable file)
@@ -0,0 +1,6 @@
+find vendor \( -type f -or -type l \) -not -name "*.go" -not -name "LICENSE" -not -name "*.s" -not -name "PATENTS" -not -name "*.h" -not -name "*.c" | xargs -I {} rm {}
+# delete all test files
+find vendor -type f -name "*_generated.go" | xargs -I {} rm {}
+find vendor -type f -name "*_test.go" | xargs -I {} rm {}
+find vendor -type d -name "_vendor" | xargs -I {} rm -rf {}
+find vendor -type d -empty | xargs -I {} rm -rf {}
vendor/github.com/siddontang/go-mysql/client/auth.go (176 lines changed; generated, vendored)
@@ -4,12 +4,29 @@ import (
 	"bytes"
 	"crypto/tls"
 	"encoding/binary"
 	"fmt"

 	"github.com/juju/errors"
 	. "github.com/siddontang/go-mysql/mysql"
 	"github.com/siddontang/go-mysql/packet"
 )

+const defaultAuthPluginName = AUTH_NATIVE_PASSWORD
+
+// defines the supported auth plugins
+var supportedAuthPlugins = []string{AUTH_NATIVE_PASSWORD, AUTH_SHA256_PASSWORD, AUTH_CACHING_SHA2_PASSWORD}
+
+// helper function to determine what auth methods are allowed by this client
+func authPluginAllowed(pluginName string) bool {
+	for _, p := range supportedAuthPlugins {
+		if pluginName == p {
+			return true
+		}
+	}
+	return false
+}
+
 // See: http://dev.mysql.com/doc/internals/en/connection-phase-packets.html#packet-Protocol::Handshake
 func (c *Conn) readInitialHandshake() error {
 	data, err := c.ReadPacket()
 	if err != nil {
@@ -24,39 +41,44 @@ func (c *Conn) readInitialHandshake() error {
 		return errors.Errorf("invalid protocol version %d, must >= 10", data[0])
 	}

-	//skip mysql version
-	//mysql version end with 0x00
+	// skip mysql version
+	// mysql version end with 0x00
 	pos := 1 + bytes.IndexByte(data[1:], 0x00) + 1

-	//connection id length is 4
+	// connection id length is 4
 	c.connectionID = uint32(binary.LittleEndian.Uint32(data[pos : pos+4]))
 	pos += 4

 	c.salt = []byte{}
 	c.salt = append(c.salt, data[pos:pos+8]...)

-	//skip filter
+	// skip filter
 	pos += 8 + 1

-	//capability lower 2 bytes
+	// capability lower 2 bytes
 	c.capability = uint32(binary.LittleEndian.Uint16(data[pos : pos+2]))
+
+	// check protocol
+	if c.capability&CLIENT_PROTOCOL_41 == 0 {
+		return errors.New("the MySQL server can not support protocol 41 and above required by the client")
+	}
+	if c.capability&CLIENT_SSL == 0 && c.tlsConfig != nil {
+		return errors.New("the MySQL Server does not support TLS required by the client")
+	}
 	pos += 2

 	if len(data) > pos {
-		//skip server charset
+		// skip server charset
 		//c.charset = data[pos]
 		pos += 1

 		c.status = binary.LittleEndian.Uint16(data[pos : pos+2])
 		pos += 2

+		// capability flags (upper 2 bytes)
 		c.capability = uint32(binary.LittleEndian.Uint16(data[pos:pos+2]))<<16 | c.capability
+
 		pos += 2

-		//skip auth data len or [00]
-		//skip reserved (all [00])
+		// skip auth data len or [00]
+		// skip reserved (all [00])
 		pos += 10 + 1

 		// The documentation is ambiguous about the length.
@@ -64,78 +86,131 @@ func (c *Conn) readInitialHandshake() error {
 		// mysql-proxy also use 12
 		// which is not documented but seems to work.
 		c.salt = append(c.salt, data[pos:pos+12]...)
 		pos += 13
+		// auth plugin
+		if end := bytes.IndexByte(data[pos:], 0x00); end != -1 {
+			c.authPluginName = string(data[pos : pos+end])
+		} else {
+			c.authPluginName = string(data[pos:])
+		}
 	}

+	// if server gives no default auth plugin name, use a client default
+	if c.authPluginName == "" {
+		c.authPluginName = defaultAuthPluginName
+	}
+
 	return nil
 }

+// generate auth response data according to auth plugin
+//
+// NOTE: the returned boolean value indicates whether to add a \NUL to the end of data.
+// it is quite tricky because MySQl server expects different formats of responses in different auth situations.
+// here the \NUL needs to be added when sending back the empty password or cleartext password in 'sha256_password'
+// authentication.
+func (c *Conn) genAuthResponse(authData []byte) ([]byte, bool, error) {
+	// password hashing
+	switch c.authPluginName {
+	case AUTH_NATIVE_PASSWORD:
+		return CalcPassword(authData[:20], []byte(c.password)), false, nil
+	case AUTH_CACHING_SHA2_PASSWORD:
+		return CalcCachingSha2Password(authData, c.password), false, nil
+	case AUTH_SHA256_PASSWORD:
+		if len(c.password) == 0 {
+			return nil, true, nil
+		}
+		if c.tlsConfig != nil || c.proto == "unix" {
+			// write cleartext auth packet
+			// see: https://dev.mysql.com/doc/refman/8.0/en/sha256-pluggable-authentication.html
+			return []byte(c.password), true, nil
+		} else {
+			// request public key from server
+			// see: https://dev.mysql.com/doc/internals/en/public-key-retrieval.html
+			return []byte{1}, false, nil
+		}
+	default:
+		// not reachable
+		return nil, false, fmt.Errorf("auth plugin '%s' is not supported", c.authPluginName)
+	}
+}
+
 // See: http://dev.mysql.com/doc/internals/en/connection-phase-packets.html#packet-Protocol::HandshakeResponse
 func (c *Conn) writeAuthHandshake() error {
+	if !authPluginAllowed(c.authPluginName) {
+		return fmt.Errorf("unknow auth plugin name '%s'", c.authPluginName)
+	}
+	// Adjust client capability flags based on server support
 	capability := CLIENT_PROTOCOL_41 | CLIENT_SECURE_CONNECTION |
-		CLIENT_LONG_PASSWORD | CLIENT_TRANSACTIONS | CLIENT_LONG_FLAG
+		CLIENT_LONG_PASSWORD | CLIENT_TRANSACTIONS | CLIENT_PLUGIN_AUTH | c.capability&CLIENT_LONG_FLAG

 	// To enable TLS / SSL
-	if c.TLSConfig != nil {
-		capability |= CLIENT_PLUGIN_AUTH
+	if c.tlsConfig != nil {
 		capability |= CLIENT_SSL
 	}

+	capability &= c.capability
+
 	auth, addNull, err := c.genAuthResponse(c.salt)
 	if err != nil {
 		return err
 	}

+	// encode length of the auth plugin data
+	// here we use the Length-Encoded-Integer(LEI) as the data length may not fit into one byte
+	// see: https://dev.mysql.com/doc/internals/en/integer.html#length-encoded-integer
+	var authRespLEIBuf [9]byte
+	authRespLEI := AppendLengthEncodedInteger(authRespLEIBuf[:0], uint64(len(auth)))
+	if len(authRespLEI) > 1 {
+		// if the length can not be written in 1 byte, it must be written as a
+		// length encoded integer
+		capability |= CLIENT_PLUGIN_AUTH_LENENC_CLIENT_DATA
+	}
+
 	//packet length
-	//capbility 4
+	//capability 4
 	//max-packet size 4
 	//charset 1
 	//reserved all[0] 23
-	length := 4 + 4 + 1 + 23
-
-	//username
-	length += len(c.user) + 1
-
-	//we only support secure connection
-	auth := CalcPassword(c.salt, []byte(c.password))
-
-	length += 1 + len(auth)
-
-	//auth
-	//mysql_native_password + null-terminated
+	length := 4 + 4 + 1 + 23 + len(c.user) + 1 + len(authRespLEI) + len(auth) + 21 + 1
+	if addNull {
+		length++
+	}
 	// db name
 	if len(c.db) > 0 {
 		capability |= CLIENT_CONNECT_WITH_DB
 		length += len(c.db) + 1
 	}

-	// mysql_native_password + null-terminated
-	length += 21 + 1
-
 	c.capability = capability

 	data := make([]byte, length+4)

-	//capability [32 bit]
+	// capability [32 bit]
 	data[4] = byte(capability)
 	data[5] = byte(capability >> 8)
 	data[6] = byte(capability >> 16)
 	data[7] = byte(capability >> 24)

-	//MaxPacketSize [32 bit] (none)
-	//data[8] = 0x00
-	//data[9] = 0x00
-	//data[10] = 0x00
-	//data[11] = 0x00
+	// MaxPacketSize [32 bit] (none)
+	data[8] = 0x00
+	data[9] = 0x00
+	data[10] = 0x00
+	data[11] = 0x00

-	//Charset [1 byte]
-	//use default collation id 33 here, is utf-8
+	// Charset [1 byte]
+	// use default collation id 33 here, is utf-8
 	data[12] = byte(DEFAULT_COLLATION_ID)

 	// SSL Connection Request Packet
 	// http://dev.mysql.com/doc/internals/en/connection-phase-packets.html#packet-Protocol::SSLRequest
-	if c.TLSConfig != nil {
+	if c.tlsConfig != nil {
 		// Send TLS / SSL request packet
 		if err := c.WritePacket(data[:(4+4+1+23)+4]); err != nil {
 			return err
 		}

 		// Switch to TLS
-		tlsConn := tls.Client(c.Conn.Conn, c.TLSConfig)
+		tlsConn := tls.Client(c.Conn.Conn, c.tlsConfig)
 		if err := tlsConn.Handshake(); err != nil {
 			return err
 		}
@@ -145,10 +220,13 @@ func (c *Conn) writeAuthHandshake() error {
 		c.Sequence = currentSequence
 	}

-	//Filler [23 bytes] (all 0x00)
-	pos := 13 + 23
+	// Filler [23 bytes] (all 0x00)
+	pos := 13
+	for ; pos < 13+23; pos++ {
+		data[pos] = 0
+	}

-	//User [null terminated string]
+	// User [null terminated string]
 	if len(c.user) > 0 {
 		pos += copy(data[pos:], c.user)
 	}
@@ -156,8 +234,12 @@ func (c *Conn) writeAuthHandshake() error {
 	pos++

 	// auth [length encoded integer]
-	data[pos] = byte(len(auth))
-	pos += 1 + copy(data[pos+1:], auth)
+	pos += copy(data[pos:], authRespLEI)
+	pos += copy(data[pos:], auth)
+	if addNull {
+		data[pos] = 0x00
+		pos++
+	}

 	// db [null terminated string]
 	if len(c.db) > 0 {
@@ -167,7 +249,7 @@ func (c *Conn) writeAuthHandshake() error {
 	}

 	// Assume native client during response
-	pos += copy(data[pos:], "mysql_native_password")
+	pos += copy(data[pos:], c.authPluginName)
 	data[pos] = 0x00

 	return c.WritePacket(data)
vendor/github.com/siddontang/go-mysql/client/client_test.go (69 lines changed; generated, vendored)
@@ -1,41 +1,56 @@
 package client

 import (
 	"crypto/tls"
 	"flag"
 	"fmt"
 	"strings"
 	"testing"

 	"github.com/juju/errors"
 	. "github.com/pingcap/check"
+	"github.com/siddontang/go-mysql/test_util/test_keys"

 	"github.com/siddontang/go-mysql/mysql"
 )

 var testHost = flag.String("host", "127.0.0.1", "MySQL server host")
-var testPort = flag.Int("port", 3306, "MySQL server port")
+// We cover the whole range of MySQL server versions using docker-compose to bind them to different ports for testing.
+// MySQL is constantly updating auth plugin to make it secure:
+// starting from MySQL 8.0.4, a new auth plugin is introduced, causing plain password auth to fail with error:
+// ERROR 1251 (08004): Client does not support authentication protocol requested by server; consider upgrading MySQL client
+// Hint: use docker-compose to start corresponding MySQL docker containers and add the their ports here
+var testPort = flag.String("port", "3306", "MySQL server port") // choose one or more form 5561,5641,3306,5722,8003,8012,8013, e.g. '3306,5722,8003'
 var testUser = flag.String("user", "root", "MySQL user")
 var testPassword = flag.String("pass", "", "MySQL password")
 var testDB = flag.String("db", "test", "MySQL test database")

 func Test(t *testing.T) {
+	segs := strings.Split(*testPort, ",")
+	for _, seg := range segs {
+		Suite(&clientTestSuite{port: seg})
+	}
 	TestingT(t)
 }

 type clientTestSuite struct {
-	c *Conn
+	c    *Conn
+	port string
 }

-var _ = Suite(&clientTestSuite{})
-
 func (s *clientTestSuite) SetUpSuite(c *C) {
 	var err error
-	addr := fmt.Sprintf("%s:%d", *testHost, *testPort)
-	s.c, err = Connect(addr, *testUser, *testPassword, *testDB)
+	addr := fmt.Sprintf("%s:%s", *testHost, s.port)
+	s.c, err = Connect(addr, *testUser, *testPassword, "")
 	if err != nil {
 		c.Fatal(err)
 	}

+	_, err = s.c.Execute("CREATE DATABASE IF NOT EXISTS " + *testDB)
+	c.Assert(err, IsNil)
+
 	_, err = s.c.Execute("USE " + *testDB)
 	c.Assert(err, IsNil)

 	s.testConn_CreateTable(c)
 	s.testStmt_CreateTable(c)
 }
@@ -78,12 +93,15 @@ func (s *clientTestSuite) TestConn_Ping(c *C) {
 	c.Assert(err, IsNil)
 }

-func (s *clientTestSuite) TestConn_TLS(c *C) {
+// NOTE for MySQL 5.5 and 5.6, server side has to config SSL to pass the TLS test, otherwise, it will throw error that
+// MySQL server does not support TLS required by the client. However, for MySQL 5.7 and above, auto generated certificates
+// are used by default so that manual config is no longer necessary.
+func (s *clientTestSuite) TestConn_TLS_Verify(c *C) {
+	// Verify that the provided tls.Config is used when attempting to connect to mysql.
 	// An empty tls.Config will result in a connection error.
-	addr := fmt.Sprintf("%s:%d", *testHost, *testPort)
+	addr := fmt.Sprintf("%s:%s", *testHost, s.port)
 	_, err := Connect(addr, *testUser, *testPassword, *testDB, func(c *Conn) {
-		c.TLSConfig = &tls.Config{}
+		c.UseSSL(false)
 	})
 	if err == nil {
 		c.Fatal("expected error")
@@ -91,7 +109,34 @@ func (s *clientTestSuite) TestConn_TLS(c *C) {

 	expected := "either ServerName or InsecureSkipVerify must be specified in the tls.Config"
 	if !strings.Contains(err.Error(), expected) {
-		c.Fatal("expected '%s' to contain '%s'", err.Error(), expected)
+		c.Fatalf("expected '%s' to contain '%s'", err.Error(), expected)
 	}
 }

+func (s *clientTestSuite) TestConn_TLS_Skip_Verify(c *C) {
+	// An empty tls.Config will result in a connection error but we can configure to skip it.
+	addr := fmt.Sprintf("%s:%s", *testHost, s.port)
+	_, err := Connect(addr, *testUser, *testPassword, *testDB, func(c *Conn) {
+		c.UseSSL(true)
+	})
+	c.Assert(err, Equals, nil)
+}
+
+func (s *clientTestSuite) TestConn_TLS_Certificate(c *C) {
+	// This test uses the TLS suite in 'go-mysql/docker/resources'. The certificates are not valid for any names.
+	// And if server uses auto-generated certificates, it will be an error like:
+	// "x509: certificate is valid for MySQL_Server_8.0.12_Auto_Generated_Server_Certificate, not not-a-valid-name"
+	tlsConfig := NewClientTLSConfig(test_keys.CaPem, test_keys.CertPem, test_keys.KeyPem, false, "not-a-valid-name")
+	addr := fmt.Sprintf("%s:%s", *testHost, s.port)
+	_, err := Connect(addr, *testUser, *testPassword, *testDB, func(c *Conn) {
+		c.SetTLSConfig(tlsConfig)
+	})
+	if err == nil {
+		c.Fatal("expected error")
+	}
+	if !strings.Contains(errors.Details(err), "certificate is not valid for any names") &&
+		!strings.Contains(errors.Details(err), "certificate is valid for") {
+		c.Fatalf("expected errors for server name verification, but got unknown error: %s", errors.Details(err))
+	}
+}
+
@@ -349,4 +394,4 @@ func (s *clientTestSuite) TestStmt_Trans(c *C) {

 	str, _ = r.GetString(0, 0)
 	c.Assert(str, Equals, `abc`)
-}
+}
vendor/github.com/siddontang/go-mysql/client/conn.go (21 lines changed; generated, vendored)
@@ -18,7 +18,8 @@ type Conn struct {
 	user     string
 	password string
 	db       string
-	TLSConfig *tls.Config
+	tlsConfig *tls.Config
+	proto     string

 	capability uint32

@@ -26,7 +27,8 @@ type Conn struct {

 	charset string

-	salt []byte
+	salt           []byte
+	authPluginName string

 	connectionID uint32
 }
@@ -56,6 +58,7 @@ func Connect(addr string, user string, password string, dbName string, options .
 	c.user = user
 	c.password = password
 	c.db = dbName
+	c.proto = proto

 	//use default charset here, utf-8
 	c.charset = DEFAULT_CHARSET
@@ -85,7 +88,7 @@ func (c *Conn) handshake() error {
 		return errors.Trace(err)
 	}

-	if _, err := c.readOK(); err != nil {
+	if err := c.handleAuthResult(); err != nil {
 		c.Close()
 		return errors.Trace(err)
 	}
@@ -109,6 +112,18 @@ func (c *Conn) Ping() error {
 	return nil
 }

+// use default SSL
+// pass to options when connect
+func (c *Conn) UseSSL(insecureSkipVerify bool) {
+	c.tlsConfig = &tls.Config{InsecureSkipVerify: insecureSkipVerify}
+}
+
+// use user-specified TLS config
+// pass to options when connect
+func (c *Conn) SetTLSConfig(config *tls.Config) {
+	c.tlsConfig = config
+}
+
 func (c *Conn) UseDB(dbName string) error {
 	if c.db == dbName {
 		return nil
vendor/github.com/siddontang/go-mysql/client/resp.go (120 lines changed; generated, vendored)
@@ -1,8 +1,14 @@
 package client

+import "C"
 import (
 	"encoding/binary"
+
+	"bytes"
+	"crypto/rsa"
+	"crypto/x509"
+	"encoding/pem"

 	"github.com/juju/errors"
 	. "github.com/siddontang/go-mysql/mysql"
 	"github.com/siddontang/go/hack"
@@ -32,7 +38,7 @@ func (c *Conn) isEOFPacket(data []byte) bool {

 func (c *Conn) handleOKPacket(data []byte) (*Result, error) {
 	var n int
-	var pos int = 1
+	var pos = 1

 	r := new(Result)

@@ -64,7 +70,7 @@ func (c *Conn) handleOKPacket(data []byte) (*Result, error) {
 func (c *Conn) handleErrorPacket(data []byte) error {
 	e := new(MyError)

-	var pos int = 1
+	var pos = 1

 	e.Code = binary.LittleEndian.Uint16(data[pos:])
 	pos += 2
@@ -81,6 +87,116 @@ func (c *Conn) handleErrorPacket(data []byte) error {
 	return e
 }

+func (c *Conn) handleAuthResult() error {
+	data, switchToPlugin, err := c.readAuthResult()
+	if err != nil {
+		return err
+	}
+	// handle auth switch, only support 'sha256_password', and 'caching_sha2_password'
+	if switchToPlugin != "" {
+		//fmt.Printf("now switching auth plugin to '%s'\n", switchToPlugin)
+		if data == nil {
+			data = c.salt
+		} else {
+			copy(c.salt, data)
+		}
+		c.authPluginName = switchToPlugin
+		auth, addNull, err := c.genAuthResponse(data)
+		if err = c.WriteAuthSwitchPacket(auth, addNull); err != nil {
+			return err
+		}
+
+		// Read Result Packet
+		data, switchToPlugin, err = c.readAuthResult()
+		if err != nil {
+			return err
+		}
+
+		// Do not allow to change the auth plugin more than once
+		if switchToPlugin != "" {
+			return errors.Errorf("can not switch auth plugin more than once")
+		}
+	}
+
+	// handle caching_sha2_password
+	if c.authPluginName == AUTH_CACHING_SHA2_PASSWORD {
+		if data == nil {
+			return nil // auth already succeeded
+		}
+		if data[0] == CACHE_SHA2_FAST_AUTH {
+			if _, err = c.readOK(); err == nil {
+				return nil // auth successful
+			}
+		} else if data[0] == CACHE_SHA2_FULL_AUTH {
+			// need full authentication
+			if c.tlsConfig != nil || c.proto == "unix" {
+				if err = c.WriteClearAuthPacket(c.password); err != nil {
+					return err
+				}
+			} else {
+				if err = c.WritePublicKeyAuthPacket(c.password, c.salt); err != nil {
+					return err
+				}
+			}
+		} else {
+			errors.Errorf("invalid packet")
+		}
+	} else if c.authPluginName == AUTH_SHA256_PASSWORD {
+		if len(data) == 0 {
+			return nil // auth already succeeded
+		}
+		block, _ := pem.Decode(data)
+		pub, err := x509.ParsePKIXPublicKey(block.Bytes)
+		if err != nil {
+			return err
+		}
+		// send encrypted password
+		err = c.WriteEncryptedPassword(c.password, c.salt, pub.(*rsa.PublicKey))
+		if err != nil {
+			return err
+		}
+		_, err = c.readOK()
+		return err
+	}
+	return nil
+}
+
+func (c *Conn) readAuthResult() ([]byte, string, error) {
+	data, err := c.ReadPacket()
+	if err != nil {
+		return nil, "", err
+	}
+
+	// see: https://insidemysql.com/preparing-your-community-connector-for-mysql-8-part-2-sha256/
+	// packet indicator
+	switch data[0] {
+
+	case OK_HEADER:
+		_, err := c.handleOKPacket(data)
+		return nil, "", err
+
+	case MORE_DATE_HEADER:
+		return data[1:], "", err
+
+	case EOF_HEADER:
+		// server wants to switch auth
+		if len(data) < 1 {
+			// https://dev.mysql.com/doc/internals/en/connection-phase-packets.html#packet-Protocol::OldAuthSwitchRequest
+			return nil, AUTH_MYSQL_OLD_PASSWORD, nil
+		}
+		pluginEndIndex := bytes.IndexByte(data, 0x00)
+		if pluginEndIndex < 0 {
+			return nil, "", errors.New("invalid packet")
+		}
+		plugin := string(data[1:pluginEndIndex])
+		authData := data[pluginEndIndex+1:]
+		return authData, plugin, nil
+
+	default: // Error otherwise
+		return nil, "", c.handleErrorPacket(data)
+	}
+}
+
 func (c *Conn) readOK() (*Result, error) {
 	data, err := c.ReadPacket()
 	if err != nil {
vendor/github.com/siddontang/go-mysql/client/tls.go (28 lines changed; generated, vendored, new file)
@@ -0,0 +1,28 @@
+package client
+
+import (
+	"crypto/tls"
+	"crypto/x509"
+)
+
+// generate TLS config for client side
+// if insecureSkipVerify is set to true, serverName will not be validated
+func NewClientTLSConfig(caPem, certPem, keyPem []byte, insecureSkipVerify bool, serverName string) *tls.Config {
+	pool := x509.NewCertPool()
+	if !pool.AppendCertsFromPEM(caPem) {
+		panic("failed to add ca PEM")
+	}
+
+	cert, err := tls.X509KeyPair(certPem, keyPem)
+	if err != nil {
+		panic(err)
+	}
+
+	config := &tls.Config{
+		Certificates:       []tls.Certificate{cert},
+		RootCAs:            pool,
+		InsecureSkipVerify: insecureSkipVerify,
+		ServerName:         serverName,
+	}
+	return config
+}
vendor/github.com/siddontang/go-mysql/cmd/go-binlogparser/main.go (2 lines changed; generated, vendored)
@@ -23,6 +23,6 @@ func main() {
 	err := p.ParseFile(*name, *offset, f)

 	if err != nil {
-		println(err)
+		println(err.Error())
 	}
 }
vendor/github.com/siddontang/go-mysql/cmd/go-canal/main.go (33 lines changed; generated, vendored)
@@ -7,8 +7,10 @@ import (
 	"os/signal"
 	"strings"
 	"syscall"
+	"time"

 	"github.com/siddontang/go-mysql/canal"
+	"github.com/siddontang/go-mysql/mysql"
 )

 var host = flag.String("host", "127.0.0.1", "MySQL host")
@@ -18,8 +20,6 @@ var password = flag.String("password", "", "MySQL password")

 var flavor = flag.String("flavor", "mysql", "Flavor: mysql or mariadb")

-var dataDir = flag.String("data-dir", "./var", "Path to store data, like master.info")
-
 var serverID = flag.Int("server-id", 101, "Unique Server ID")
 var mysqldump = flag.String("mysqldump", "mysqldump", "mysqldump execution path")

@@ -28,6 +28,12 @@ var tables = flag.String("tables", "", "dump tables, seperated by comma, will ov
 var tableDB = flag.String("table_db", "test", "database for dump tables")
 var ignoreTables = flag.String("ignore_tables", "", "ignore tables, must be database.table format, separated by comma")

+var startName = flag.String("bin_name", "", "start sync from binlog name")
+var startPos = flag.Uint("bin_pos", 0, "start sync from binlog position of")
+
+var heartbeatPeriod = flag.Duration("heartbeat", 60*time.Second, "master heartbeat period")
+var readTimeout = flag.Duration("read_timeout", 90*time.Second, "connection read timeout")
+
 func main() {
 	flag.Parse()

@@ -36,8 +42,10 @@ func main() {
 	cfg.User = *user
 	cfg.Password = *password
 	cfg.Flavor = *flavor
-	cfg.DataDir = *dataDir
+	cfg.UseDecimal = true

+	cfg.ReadTimeout = *readTimeout
+	cfg.HeartbeatPeriod = *heartbeatPeriod
 	cfg.ServerID = uint32(*serverID)
 	cfg.Dump.ExecutionPath = *mysqldump
 	cfg.Dump.DiscardErr = false
@@ -65,14 +73,20 @@ func main() {
 		c.AddDumpDatabases(subs...)
 	}

-	c.RegRowsEventHandler(&handler{})
+	c.SetEventHandler(&handler{})

-	err = c.Start()
-	if err != nil {
-		fmt.Printf("start canal err %V", err)
-		os.Exit(1)
+	startPos := mysql.Position{
+		Name: *startName,
+		Pos:  uint32(*startPos),
 	}

+	go func() {
+		err = c.RunFrom(startPos)
+		if err != nil {
+			fmt.Printf("start canal err %v", err)
+		}
+	}()
+
 	sc := make(chan os.Signal, 1)
 	signal.Notify(sc,
 		os.Kill,
@@ -88,9 +102,10 @@ func main() {
 }

 type handler struct {
+	canal.DummyEventHandler
 }

-func (h *handler) Do(e *canal.RowsEvent) error {
+func (h *handler) OnRow(e *canal.RowsEvent) error {
 	fmt.Printf("%v\n", e)

 	return nil
vendor/github.com/siddontang/go-mysql/cmd/go-mysqlbinlog/main.go (11 lines changed; generated, vendored)
@@ -4,12 +4,11 @@
 package main

 import (
+	"context"
 	"flag"
 	"fmt"
 	"os"

-	"golang.org/x/net/context"
-
 	"github.com/juju/errors"
 	"github.com/siddontang/go-mysql/mysql"
 	"github.com/siddontang/go-mysql/replication"
@@ -43,11 +42,12 @@ func main() {
 		Password:        *password,
 		RawModeEnabled:  *rawMode,
 		SemiSyncEnabled: *semiSync,
+		UseDecimal:      true,
 	}

 	b := replication.NewBinlogSyncer(cfg)

-	pos := mysql.Position{*file, uint32(*pos)}
+	pos := mysql.Position{Name: *file, Pos: uint32(*pos)}
 	if len(*backupPath) > 0 {
 		// Backup will always use RawMode.
 		err := b.StartBackup(*backupPath, pos, 0)
@@ -65,6 +65,11 @@ func main() {
 	for {
 		e, err := s.GetEvent(context.Background())
 		if err != nil {
+			// Try to output all left events
+			events := s.DumpEvents()
+			for _, e := range events {
+				e.Dump(os.Stdout)
+			}
 			fmt.Printf("Get event error: %v\n", errors.ErrorStack(err))
 			return
 		}
vendor/github.com/siddontang/go-mysql/docker/docker-compose.yaml (80 lines changed; generated, vendored, new file)
@@ -0,0 +1,80 @@
+version: '3'
+services:
+
+  mysql-5.5.61:
+    image: "mysql:5.5.61"
+    container_name: "mysql-server-5.5.61"
+    ports:
+      - "5561:3306"
+    command: --ssl=TRUE --ssl-ca=/usr/local/mysql/ca.pem --ssl-cert=/usr/local/mysql/server-cert.pem --ssl-key=/usr/local/mysql/server-key.pem
+    volumes:
+      - ./resources/ca.pem:/usr/local/mysql/ca.pem
+      - ./resources/server-cert.pem:/usr/local/mysql/server-cert.pem
+      - ./resources/server-key.pem:/usr/local/mysql/server-key.pem
+    environment:
+      - MYSQL_ALLOW_EMPTY_PASSWORD=true
+      - bind-address=0.0.0.0
+
+  mysql-5.6.41:
+    image: "mysql:5.6.41"
+    container_name: "mysql-server-5.6.41"
+    ports:
+      - "5641:3306"
+    command: --ssl=TRUE --ssl-ca=/usr/local/mysql/ca.pem --ssl-cert=/usr/local/mysql/server-cert.pem --ssl-key=/usr/local/mysql/server-key.pem
+    volumes:
+      - ./resources/ca.pem:/usr/local/mysql/ca.pem
+      - ./resources/server-cert.pem:/usr/local/mysql/server-cert.pem
+      - ./resources/server-key.pem:/usr/local/mysql/server-key.pem
+    environment:
+      - MYSQL_ALLOW_EMPTY_PASSWORD=true
+      - bind-address=0.0.0.0
+
+  mysql-default:
+    image: "mysql:5.7.22"
+    container_name: "mysql-server-default"
+    ports:
+      - "3306:3306"
+    command: ["mysqld", "--log-bin=mysql-bin", "--server-id=1"]
+    environment:
+      - MYSQL_ALLOW_EMPTY_PASSWORD=true
+      - bind-address=0.0.0.0
+
+  mysql-5.7.22:
+    image: "mysql:5.7.22"
+    container_name: "mysql-server-5.7.22"
+    ports:
+      - "5722:3306"
+    environment:
+      - MYSQL_ALLOW_EMPTY_PASSWORD=true
+      - bind-address=0.0.0.0
+
+  mysql-8.0.3:
+    image: "mysql:8.0.3"
+    container_name: "mysql-server-8.0.3"
+    ports:
+      - "8003:3306"
+    environment:
+      - MYSQL_ALLOW_EMPTY_PASSWORD=true
+      - bind-address=0.0.0.0
+
+  mysql-8.0.12:
+    image: "mysql:8.0.12"
+    container_name: "mysql-server-8.0.12"
+    ports:
+      - "8012:3306"
+    environment:
+      #- MYSQL_ROOT_PASSWORD=abc123
+      - MYSQL_ALLOW_EMPTY_PASSWORD=true
+      - bind-address=0.0.0.0
+
+  mysql-8.0.12-sha256:
+    image: "mysql:8.0.12"
+    container_name: "mysql-server-8.0.12-sha256"
+    ports:
+      - "8013:3306"
+    entrypoint: ['/entrypoint.sh', '--default-authentication-plugin=sha256_password']
+    environment:
+      #- MYSQL_ROOT_PASSWORD=abc123
+      - MYSQL_ALLOW_EMPTY_PASSWORD=true
+      - bind-address=0.0.0.0
27 vendor/github.com/siddontang/go-mysql/docker/resources/ca.key generated vendored Normal file
@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAsV6xlhFxMn14Pn7XBRGLt8/HXmhVVu20IKFgIOyX7gAZr0QL
suT1fGf5zH9HrlgOMkfdhV847U03KPfUnBsi9lS6/xOxnH/OzTYM0WW0eNMGF7eo
xrS64GSbPVX4pLi5+uwrrZT5HmDgZi49ANmuX6UYmH/eRRvSIoYUTV6t0aYsLyKv
lpEAtRAe4AlKB236j5ggmJ36QUhTFTbeNbeOOgloTEdPK8Y/kgpnhiqzMdPqqIc7
IeXUc456yX8MJUgniTM2qCNTFdEw+C2Ok0RbU6TI2SuEgVF4jtCcVEKxZ8kYbioO
NaePQKFR/EhdXO+/ag1IEdXElH9knLOfB+zCgwIDAQABAoIBAC2U0jponRiGmgIl
gohw6+D+6pNeaKAAUkwYbKXJZ3noWLFr4T3GDTg9WDqvcvJg+rT9NvZxdCW3tDc5
CVBcwO1g9PVcUEaRqcme3EhrxKdQQ76QmjUGeQf1ktd+YnmiZ1kOnGLtZ9/gsYpQ
06iGSIOX3+xA4BQOhEAPCOShMjYv+pWvWGhZCSmeulKulNVPBbG2H1I9EoT5Wd+Q
8LUfgZOuUXrtcsuvEf2XeacCo0pUbjx8ErhDHP6aPasFAXq15Bm8DnsUOrrsjcLy
sPy/mHwpd6kTw+O3EzjTdaYSFRoDSpfpIS5Bk+yicdxOmTwp1pzDu6HyYnuOnc9Q
JQ8HvlECgYEA2z+1HKVz5k7NYyRihW4l30vAcAGcgG1RObB6DmLbGu4MPvMymgLO
1QhYjlCcKfRHhVS2864op3Oba2fIgCc2am0DIQQ6kZ23ick78aj9G2ZXYpdpIPLu
Kl1AZHj6XDrOPVqidwcE6iYHLLWp9x4Atgw5d44XmhQ0kwrqAfccOX8CgYEAzxnl
7Uu+v5WI3hBVJpxS6eoS1TdztVIJaumyE43pBoHEuJrp4MRf0Lu2DiDpH8R3/RoE
o+ykn6xzphYwUopYaCWzYTKoXvxCvmLkDjHcpdzLtwWbKG+MJih2nTADEDI7sK4e
a3IU8miK6FeqkQHfs/5dlQa8q31yxiukw0qQEP0CgYAtLg6jTZD5l6mJUZkfx9f0
EMciDaLzcBN54Nz2E/b0sLNDUZhO1l9K1QJyqTfVCWqnlhJxWqU0BIW1d1iA2BPF
kJtBdX6gPTDyKs64eMtXlxpQzcSzLnxXrIm1apyk3tVbHU83WfHwUk/OLc1NiBg7
a394HIbOkHVZC7m3F/Xv/wKBgQDHrM2du8D+kJs0l4SxxFjAxPlBb8R01tLTrNwP
tGwu5OEZp+rE1jEXXFRMTPjXsyKI+hPtRJT4ilm6kXwnqNFSIL9RgHkLk6Z6T3hY
I0T8+ePD43jURLBYffzW0tqxO+2HDGmx6H0/twHuv89pHehkb2Qk8ijoIvyNCrlB
vVsntQKBgCK04nbb+G45D6TKCcZ6XKT/+qneJQE5cfvHl5EqrfjSmlnEUpJjJfyc
6Q1PtXtWOtOScU93f1JKL7+JBbWDn9uBlboM8BSkAVVd/2vyg88RuEtIru1syxcW
d1rMxqaMRJuhuqaS33CoPUpn30b4zVrPhQJ2+TwDAol4qIGHaie8
-----END RSA PRIVATE KEY-----
22 vendor/github.com/siddontang/go-mysql/docker/resources/ca.pem generated vendored Normal file
@ -0,0 +1,22 @@
-----BEGIN CERTIFICATE-----
MIIDtTCCAp2gAwIBAgIJANeS1FOzWXlZMA0GCSqGSIb3DQEBBQUAMEUxCzAJBgNV
BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX
aWRnaXRzIFB0eSBMdGQwHhcNMTgwODE2MTUxNDE5WhcNMjEwNjA1MTUxNDE5WjBF
MQswCQYDVQQGEwJBVTETMBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50
ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
CgKCAQEAsV6xlhFxMn14Pn7XBRGLt8/HXmhVVu20IKFgIOyX7gAZr0QLsuT1fGf5
zH9HrlgOMkfdhV847U03KPfUnBsi9lS6/xOxnH/OzTYM0WW0eNMGF7eoxrS64GSb
PVX4pLi5+uwrrZT5HmDgZi49ANmuX6UYmH/eRRvSIoYUTV6t0aYsLyKvlpEAtRAe
4AlKB236j5ggmJ36QUhTFTbeNbeOOgloTEdPK8Y/kgpnhiqzMdPqqIc7IeXUc456
yX8MJUgniTM2qCNTFdEw+C2Ok0RbU6TI2SuEgVF4jtCcVEKxZ8kYbioONaePQKFR
/EhdXO+/ag1IEdXElH9knLOfB+zCgwIDAQABo4GnMIGkMB0GA1UdDgQWBBQgHiwD
00upIbCOunlK4HRw89DhjjB1BgNVHSMEbjBsgBQgHiwD00upIbCOunlK4HRw89Dh
jqFJpEcwRTELMAkGA1UEBhMCQVUxEzARBgNVBAgTClNvbWUtU3RhdGUxITAfBgNV
BAoTGEludGVybmV0IFdpZGdpdHMgUHR5IEx0ZIIJANeS1FOzWXlZMAwGA1UdEwQF
MAMBAf8wDQYJKoZIhvcNAQEFBQADggEBAFMZFQTFKU5tWIpWh8BbVZeVZcng0Kiq
qwbhVwaTkqtfmbqw8/w+faOWylmLncQEMmgvnUltGMQlQKBwQM2byzPkz9phal3g
uI0JWJYqtcMyIQUB9QbbhrDNC9kdt/ji/x6rrIqzaMRuiBXqH5LQ9h856yXzArqd
cAQGzzYpbUCIv7ciSB93cKkU73fQLZVy5ZBy1+oAa1V9U4cb4G/20/PDmT+G3Gxz
pEjeDKtz8XINoWgA2cSdfAhNZt5vqJaCIZ8qN0z6C7SUKwUBderERUMLUXdhUldC
KTVHyEPvd0aULd5S5vEpKCnHcQmFcLdoN8t9k9pR9ZgwqXbyJHlxWFo=
-----END CERTIFICATE-----
19 vendor/github.com/siddontang/go-mysql/docker/resources/client-cert.pem generated vendored Normal file
@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDBjCCAe4CCQDg06wCf7hcuTANBgkqhkiG9w0BAQUFADBFMQswCQYDVQQGEwJB
VTETMBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0
cyBQdHkgTHRkMB4XDTE4MDgxOTA4NDY0N1oXDTI4MDgxNjA4NDY0N1owRTELMAkG
A1UEBhMCQVUxEzARBgNVBAgTClNvbWUtU3RhdGUxITAfBgNVBAoTGEludGVybmV0
IFdpZGdpdHMgUHR5IEx0ZDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
AMmivNyk3Rc1ZvLPhb3WPNkf9f2G4g9nMc0+eMrR1IKJ1U1A98ojeIBT+pfk1bSq
Ol0UDm66Vd3YQ+4HpyYHaYV6mwoTEulL9Quk8RLa7TRwQu3PLi3o567RhVIrx8Z3
umuWb9UUzJfSFH04Uy9+By4CJCqIQXU4BocLIKHhIkNjmAQ9fWO1hZ8zmPHSEfvu
Wqa/DYKGvF0MJr4Lnkm/sKUd+O94p9suvwM6OGIDibACiKRF2H+JbgQLbA58zkLv
DHtXOqsCL7HxiONX8VDrQjN/66Nh9omk/Bx2Ec8IqappHvWf768HSH79x/znaial
VEV+6K0gP+voJHfnA10laWMCAwEAATANBgkqhkiG9w0BAQUFAAOCAQEAPD+Fn1qj
HN62GD3eIgx6wJxYuemhdbgmEwrZZf4V70lS6e9Iloif0nBiISDxJUpXVWNRCN3Z
3QVC++F7deDmWL/3dSpXRvWsapzbCUhVQ2iBcnZ7QCOdvAqYR1ecZx70zvXCwBcd
6XKmRtdeNV6B211KRFmTYtVyPq4rcWrkTPGwPBncJI1eQQmyFv2T9SwVVp96Nbrq
sf7zrJGmuVCdXGPRi/ALVHtJCz6oPoft3I707eMe+ijnFqwGbmMD4fMD6Ync/hEz
PyR5FMZkXSXHS0gkA5pfwW7wJ2WSWDhI6JMS1gbatY7QzgHbKoQpxBPUXlnzzj2h
7O9cgFTh/XOZXQ==
-----END CERTIFICATE-----
27 vendor/github.com/siddontang/go-mysql/docker/resources/client-key.pem generated vendored Normal file
@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAyaK83KTdFzVm8s+FvdY82R/1/YbiD2cxzT54ytHUgonVTUD3
yiN4gFP6l+TVtKo6XRQObrpV3dhD7genJgdphXqbChMS6Uv1C6TxEtrtNHBC7c8u
LejnrtGFUivHxne6a5Zv1RTMl9IUfThTL34HLgIkKohBdTgGhwsgoeEiQ2OYBD19
Y7WFnzOY8dIR++5apr8Ngoa8XQwmvgueSb+wpR3473in2y6/Azo4YgOJsAKIpEXY
f4luBAtsDnzOQu8Me1c6qwIvsfGI41fxUOtCM3/ro2H2iaT8HHYRzwipqmke9Z/v
rwdIfv3H/OdqJqVURX7orSA/6+gkd+cDXSVpYwIDAQABAoIBAAGLY5L1GFRzLkSx
3j5kA7dODV5RyC2CBtmhnt8+2DffwmiDFOLRfrzM5+B9+j0WCLhpzOqANuQqIesS
1+7so5xIIiPjnYN393qNWuNgFe0O5xRXP+1OGWg3ZqQIfdFBXYYxcs3ZCPAoxctn
wQteFcP+dDR3MrkpIrOqHCfhR5foieOMP+9k5kCjk+aZqhEmFyko+X+xVO/32xs+
+3qXhUrHt3Op5on30QMOFguniQlYwLJkd9qVjGuGMIrVPxoUz0rya4SKrGKgkAr8
mvQe2+sZo7cc6zC2ceaGMJU7z1RalTrCObbg5mynlu+Vf0E/YiES0abkQhSbcSB9
mAkJC7ECgYEA/H1NDEiO164yYK9ji4HM/8CmHegWS4qsgrzAs8lU0yAcgdg9e19A
mNi8yssfIBCw62RRE4UGWS5F82myhmvq/mXbf8eCJ2CMgdCHQh1rT7WFD/Uc5Pe/
8Lv2jNMQ61POguPyq6D0qcf8iigKIMHa1MIgAOmrgWrxulfbSUhm370CgYEAzHBu
J9p4dAqW32+Hrtv2XE0KUjH72TXr13WErosgeGTfsIW2exXByvLasxOJSY4Wb8oS
OLZ7bgp/EBchAc7my+nF8n5uOJxipWQUB5BoeB9aUJZ9AnWF4RDl94Jlm5PYBG/J
lRXrMtSTTIgmSw3Ft2A1vRMOQaHX89lNwOZL758CgYAXOT84/gOFexRPKFKzpkDA
1WtyHMLQN/UeIVZoMwCGWtHEb6tYCa7bYDQdQwmd3Wsoe5WpgfbPhR4SAYrWKl72
/09tNWCXVp4V4qRORH52Wm/ew+Dgfpk8/0zyLwfDXXYFPAo6Fxfp9ecYng4wbSQ/
pYtkChooUTniteoJl4s+0QKBgHbFEpoAqF3yEPi52L/TdmrlLwvVkhT86IkB8xVc
Kn8HS5VH+V3EpBN9x2SmAupCq/JCGRftnAOwAWWdqkVcqGTq6V8Z6HrnD8A6RhCm
6qpuvI94/iNBl4fLw25pyRH7cFITh68fTsb3DKQ3rNeJpsYEFPRFb9Ddb5JxOmTI
5nDNAoGBAM+SyOhUGU+0Uw2WJaGWzmEutjeMRr5Z+cZ8keC/ZJNdji/faaQoeOQR
OXI8O6RBTBwVNQMyDyttT8J8BkISwfAhSdPkjgPw9GZ1pGREl53uCFDIlX2nvtQM
ioNzG5WHB7Gd7eUUTA91kRF9MZJTHPqNiNGR0Udj/trGyGqJebni
-----END RSA PRIVATE KEY-----
19 vendor/github.com/siddontang/go-mysql/docker/resources/server-cert.pem generated vendored Normal file
@ -0,0 +1,19 @@
-----BEGIN CERTIFICATE-----
MIIDBjCCAe4CCQDg06wCf7hcuDANBgkqhkiG9w0BAQUFADBFMQswCQYDVQQGEwJB
VTETMBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0
cyBQdHkgTHRkMB4XDTE4MDgxOTA4NDUyNVoXDTI4MDgxNjA4NDUyNVowRTELMAkG
A1UEBhMCQVUxEzARBgNVBAgTClNvbWUtU3RhdGUxITAfBgNVBAoTGEludGVybmV0
IFdpZGdpdHMgUHR5IEx0ZDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
ALK2gqK4uvTlxJANO2JKdibvmh899z6oCo9Km0mz5unj4dpnq9hljsQuKtcHUcM4
HXcE06knaJ4TOF7lcsjaqoDO7r/SaFgjjXCqNvHD0Su4B+7qe52BZZTRV1AANP10
PvebarXSEzaZUCyHHhSF8+Qb4vX04XKX/TOqinTVGtlnduKzP5+qsaFBtpLAw1V0
At9EQB5BgnTYtdIsmvD4/2WhBvOjVxab75yx0R4oof4F3u528tbEegcWhBtmy2Xd
HI3S+TLljj3kOOdB+pgrVUl+KaDavWK3T+F1vTNDe56HEVNKeWlLy1scul61E0j9
IkZAu6aRDxtKdl7bKu0BkzMCAwEAATANBgkqhkiG9w0BAQUFAAOCAQEAma3yFqR7
xkeaZBg4/1I3jSlaNe5+2JB4iybAkMOu77fG5zytLomTbzdhewsuBwpTVMJdga8T
IdPeIFCin1U+5SkbjSMlpKf+krE+5CyrNJ5jAgO9ATIqx66oCTYXfGlNapGRLfSE
sa0iMqCe/dr4GPU+flW2DZFWiyJVDSF1JjReQnfrWY+SD2SpP/lmlgltnY8MJngd
xBLG5nsZCpUXGB713Q8ZyIm2ThVAMiskcxBleIZDDghLuhGvY/9eFJhZpvOkjWa6
XGEi4E1G/SA+zVKFl41nHKCdqXdmIOnpcLlFBUVloQok5a95Kqc1TYw3f+WbdFff
99dAgk3gWwWZQA==
-----END CERTIFICATE-----
27 vendor/github.com/siddontang/go-mysql/docker/resources/server-key.pem generated vendored Normal file
@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAsraCori69OXEkA07Ykp2Ju+aHz33PqgKj0qbSbPm6ePh2mer
2GWOxC4q1wdRwzgddwTTqSdonhM4XuVyyNqqgM7uv9JoWCONcKo28cPRK7gH7up7
nYFllNFXUAA0/XQ+95tqtdITNplQLIceFIXz5Bvi9fThcpf9M6qKdNUa2Wd24rM/
n6qxoUG2ksDDVXQC30RAHkGCdNi10iya8Pj/ZaEG86NXFpvvnLHRHiih/gXe7nby
1sR6BxaEG2bLZd0cjdL5MuWOPeQ450H6mCtVSX4poNq9YrdP4XW9M0N7nocRU0p5
aUvLWxy6XrUTSP0iRkC7ppEPG0p2Xtsq7QGTMwIDAQABAoIBAGh1m8hHWCg7gXh9
838RbRx3IswuKS27hWiaQEiFWmzOIb7KqDy1qAxtu+ayRY1paHegH6QY/+Kd824s
ibpzbgQacJ04/HrAVTVMmQ8Z2VLHoAN7lcPL1bd14aZGaLLZVtDeTDJ413grhxxv
4ho27gcgcbo4Z+rWgk7H2WRPCAGYqWYAycm3yF5vy9QaO6edU+T588YsEQOos5iy
5pVFSGDGZkcUp1ukL3BJYR+jvygn6WPCobQ/LScUdi+ucitaI9i+UdlLokZARVRG
M/msqcTM73thR8yVRcexU6NUDxRBfZ/f7moSAEbBmGDXuxDcIyH9KGMQ2rMtN1X3
lK8UNwkCgYEA2STJq/IUQHjdqd3Dqh/Q7Zm8/pMWFqLJSkqpnFtFsXPyUOx9zDOy
KqkIfGeyKwvsj9X9BcZ0FUKj9zoct1/WpPY+h7i7+z0MIujBh4AMjAcDrt4o76yK
UHuVmG2xKTdJoAbqOdToQeX6E82Ioal5pbB2W7AbCQScNBPZ52jxgtcCgYEA0rE7
2dFiRm0YmuszFYxft2+GP6NgP3R2TQNEooi1uCXG2xgwObie1YCHzpZ5CfSqJIxP
XB7DXpIWi7PxJoeai2F83LnmdFz6F1BPRobwDoSFNdaSKLg4Yf856zpgYNKhL1fE
OoOXj4VBWBZh1XDfZV44fgwlMIf7edOF1XOagwUCgYAw953O+7FbdKYwF0V3iOM5
oZDAK/UwN5eC/GFRVDfcM5RycVJRCVtlSWcTfuLr2C2Jpiz/72fgH34QU3eEVsV1
v94MBznFB1hESw7ReqvZq/9FoO3EVrl+OtBaZmosLD6bKtQJJJ0Xtz/01UW5hxla
pveZ55XBK9v51nwuNjk4UwKBgHD8fJUllSchUCWb5cwzeAz98Kdl7LJ6uQo5q2/i
EllLYOWThiEeIYdrIuklholRPIDXAaPsF2c6vn5yo+q+o6EFSZlw0+YpCjDAb5Lp
wAh5BprFk6HkkM/0t9Guf4rMyYWC8odSlE9x7YXYkuSMYDCTI4Zs6vCoq7I8PbQn
B4AlAoGAZ6Ee5m/ph5UVp/3+cR6jCY7aHBUU/M3pbJSkVjBW+ymEBVJ6sUdz8k3P
x8BiPEQggNN7faWBqRWP7KXPnDYHh6shYUgPJwI5HX6NE/ZDnnXjeysHRyf0oCo5
S6tHXwHNKB5HS1c/KDyyNGjP2oi/MF4o/MGWNWEcK6TJA3RGOYM=
-----END RSA PRIVATE KEY-----
8 vendor/github.com/siddontang/go-mysql/driver/dirver_test.go generated vendored
@ -11,6 +11,11 @@ import (

// Use docker mysql to test, mysql is 3306
var testHost = flag.String("host", "127.0.0.1", "MySQL master host")
// possible choices for different MySQL versions are: 5561,5641,3306,5722,8003,8012
var testPort = flag.Int("port", 3306, "MySQL server port")
var testUser = flag.String("user", "root", "MySQL user")
var testPassword = flag.String("pass", "", "MySQL password")
var testDB = flag.String("db", "test", "MySQL test database")

func TestDriver(t *testing.T) {
	TestingT(t)
@ -23,7 +28,8 @@ type testDriverSuite struct {
var _ = Suite(&testDriverSuite{})

func (s *testDriverSuite) SetUpSuite(c *C) {
	dsn := fmt.Sprintf("root@%s:3306?test", *testHost)
	addr := fmt.Sprintf("%s:%d", *testHost, *testPort)
	dsn := fmt.Sprintf("%s:%s@%s?%s", *testUser, *testPassword, addr, *testDB)

	var err error
	s.db, err = sqlx.Open("mysql", dsn)
3 vendor/github.com/siddontang/go-mysql/driver/driver.go generated vendored
@ -20,7 +20,8 @@ type driver struct {

// DSN user:password@addr[?db]
func (d driver) Open(dsn string) (sqldriver.Conn, error) {
	seps := strings.Split(dsn, "@")
	lastIndex := strings.LastIndex(dsn, "@")
	seps := []string{dsn[:lastIndex], dsn[lastIndex+1:]}
	if len(seps) != 2 {
		return nil, errors.Errorf("invalid dsn, must user:password@addr[?db]")
	}
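The hunk above stops splitting the DSN on every `@` and instead splits on the last one, so a password that itself contains `@` no longer breaks parsing. A minimal standalone sketch of the same rule (the `splitDSN` helper name is ours, not the library's):

```go
package main

import (
	"fmt"
	"strings"
)

// splitDSN separates "user:password@addr[?db]" into credentials and address.
// Splitting on the LAST '@' keeps passwords containing '@' intact.
func splitDSN(dsn string) (creds, addr string, ok bool) {
	lastIndex := strings.LastIndex(dsn, "@")
	if lastIndex < 0 {
		return "", "", false
	}
	return dsn[:lastIndex], dsn[lastIndex+1:], true
}

func main() {
	creds, addr, ok := splitDSN("root:p@ssw0rd@127.0.0.1:3306?test")
	fmt.Println(ok, creds, addr) // true root:p@ssw0rd 127.0.0.1:3306?test
}
```

With the old `strings.Split`, the same DSN would have produced four pieces and been rejected as invalid.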
71 vendor/github.com/siddontang/go-mysql/dump/dump.go generated vendored
@ -8,6 +8,8 @@ import (
	"strings"

	"github.com/juju/errors"
	"github.com/siddontang/go-log/log"
	. "github.com/siddontang/go-mysql/mysql"
)

// Unlick mysqldump, Dumper is designed for parsing and syning data easily.
@ -25,9 +27,16 @@ type Dumper struct {

	Databases []string

	Where   string
	Charset string

	IgnoreTables map[string][]string

	ErrOut io.Writer

	masterDataSkipped bool
	maxAllowedPacket  int
	hexBlob           bool
}

func NewDumper(executionPath string, addr string, user string, password string) (*Dumper, error) {
@ -47,17 +56,40 @@ func NewDumper(executionPath string, addr string, user string, password string)
	d.Password = password
	d.Tables = make([]string, 0, 16)
	d.Databases = make([]string, 0, 16)
	d.Charset = DEFAULT_CHARSET
	d.IgnoreTables = make(map[string][]string)
	d.masterDataSkipped = false

	d.ErrOut = os.Stderr

	return d, nil
}

func (d *Dumper) SetCharset(charset string) {
	d.Charset = charset
}

func (d *Dumper) SetWhere(where string) {
	d.Where = where
}

func (d *Dumper) SetErrOut(o io.Writer) {
	d.ErrOut = o
}

// In some cloud MySQL, we have no privilege to use `--master-data`.
func (d *Dumper) SkipMasterData(v bool) {
	d.masterDataSkipped = v
}

func (d *Dumper) SetMaxAllowedPacket(i int) {
	d.maxAllowedPacket = i
}

func (d *Dumper) SetHexBlob(v bool) {
	d.hexBlob = v
}

func (d *Dumper) AddDatabases(dbs ...string) {
	d.Databases = append(d.Databases, dbs...)
}
@ -82,22 +114,35 @@ func (d *Dumper) Reset() {
	d.TableDB = ""
	d.IgnoreTables = make(map[string][]string)
	d.Databases = d.Databases[0:0]
	d.Where = ""
}

func (d *Dumper) Dump(w io.Writer) error {
	args := make([]string, 0, 16)

	// Common args
	seps := strings.Split(d.Addr, ":")
	args = append(args, fmt.Sprintf("--host=%s", seps[0]))
	if len(seps) > 1 {
		args = append(args, fmt.Sprintf("--port=%s", seps[1]))
	if strings.Contains(d.Addr, "/") {
		args = append(args, fmt.Sprintf("--socket=%s", d.Addr))
	} else {
		seps := strings.SplitN(d.Addr, ":", 2)
		args = append(args, fmt.Sprintf("--host=%s", seps[0]))
		if len(seps) > 1 {
			args = append(args, fmt.Sprintf("--port=%s", seps[1]))
		}
	}

	args = append(args, fmt.Sprintf("--user=%s", d.User))
	args = append(args, fmt.Sprintf("--password=%s", d.Password))

	args = append(args, "--master-data")
	if !d.masterDataSkipped {
		args = append(args, "--master-data")
	}

	if d.maxAllowedPacket > 0 {
		// mysqldump param should be --max-allowed-packet=%dM not be --max_allowed_packet=%dM
		args = append(args, fmt.Sprintf("--max-allowed-packet=%dM", d.maxAllowedPacket))
	}

	args = append(args, "--single-transaction")
	args = append(args, "--skip-lock-tables")

@ -112,12 +157,25 @@ func (d *Dumper) Dump(w io.Writer) error {
	// Multi row is easy for us to parse the data
	args = append(args, "--skip-extended-insert")

	if d.hexBlob {
		// Use hex for the binary type
		args = append(args, "--hex-blob")
	}

	for db, tables := range d.IgnoreTables {
		for _, table := range tables {
			args = append(args, fmt.Sprintf("--ignore-table=%s.%s", db, table))
		}
	}

	if len(d.Charset) != 0 {
		args = append(args, fmt.Sprintf("--default-character-set=%s", d.Charset))
	}

	if len(d.Where) != 0 {
		args = append(args, fmt.Sprintf("--where=%s", d.Where))
	}

	if len(d.Tables) == 0 && len(d.Databases) == 0 {
		args = append(args, "--all-databases")
	} else if len(d.Tables) == 0 {
@ -133,6 +191,7 @@ func (d *Dumper) Dump(w io.Writer) error {
		w.Write([]byte(fmt.Sprintf("USE `%s`;\n", d.TableDB)))
	}

	log.Infof("exec mysqldump with %v", args)
	cmd := exec.Command(d.ExecutionPath, args...)

	cmd.Stderr = d.ErrOut
@ -147,7 +206,7 @@ func (d *Dumper) DumpAndParse(h ParseHandler) error {

	done := make(chan error, 1)
	go func() {
		err := Parse(r, h)
		err := Parse(r, h, !d.masterDataSkipped)
		r.CloseWithError(err)
		done <- err
	}()
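The `Dump` hunk above now treats an address containing `/` as a unix socket path and only falls back to `host[:port]` otherwise. A self-contained sketch of just that argument-building branch (the `addrArgs` function name is ours):

```go
package main

import (
	"fmt"
	"strings"
)

// addrArgs mirrors the branch added to Dump: a '/' in the address means a
// unix socket path; anything else is split once into host and optional port.
func addrArgs(addr string) []string {
	var args []string
	if strings.Contains(addr, "/") {
		args = append(args, fmt.Sprintf("--socket=%s", addr))
	} else {
		seps := strings.SplitN(addr, ":", 2)
		args = append(args, fmt.Sprintf("--host=%s", seps[0]))
		if len(seps) > 1 {
			args = append(args, fmt.Sprintf("--port=%s", seps[1]))
		}
	}
	return args
}

func main() {
	fmt.Println(addrArgs("/var/lib/mysql/mysql.sock")) // [--socket=/var/lib/mysql/mysql.sock]
	fmt.Println(addrArgs("127.0.0.1:3306"))            // [--host=127.0.0.1 --port=3306]
}
```

Note that `SplitN(addr, ":", 2)` replaces the old unbounded `Split`, so the port is everything after the first colon.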
30 vendor/github.com/siddontang/go-mysql/dump/dump_test.go generated vendored
@ -6,6 +6,7 @@ import (
	"fmt"
	"io/ioutil"
	"os"
	"strings"
	"testing"

	. "github.com/pingcap/check"
@ -38,6 +39,7 @@ func (s *schemaTestSuite) SetUpSuite(c *C) {
	c.Assert(err, IsNil)
	c.Assert(s.d, NotNil)

	s.d.SetCharset("utf8")
	s.d.SetErrOut(os.Stderr)

	_, err = s.conn.Execute("CREATE DATABASE IF NOT EXISTS test1")
@ -177,7 +179,7 @@ func (s *schemaTestSuite) TestParse(c *C) {
	err := s.d.Dump(&buf)
	c.Assert(err, IsNil)

	err = Parse(&buf, new(testParseHandler))
	err = Parse(&buf, new(testParseHandler), true)
	c.Assert(err, IsNil)
}

@ -196,3 +198,29 @@ func (s *parserTestSuite) TestParseValue(c *C) {
	values, err = parseValues(str)
	c.Assert(err, NotNil)
}

func (s *parserTestSuite) TestParseLine(c *C) {
	lines := []struct {
		line     string
		expected string
	}{
		{line: "INSERT INTO `test` VALUES (1, 'first', 'hello mysql; 2', 'e1', 'a,b');",
			expected: "1, 'first', 'hello mysql; 2', 'e1', 'a,b'"},
		{line: "INSERT INTO `test` VALUES (0x22270073646661736661736466, 'first', 'hello mysql; 2', 'e1', 'a,b');",
			expected: "0x22270073646661736661736466, 'first', 'hello mysql; 2', 'e1', 'a,b'"},
	}

	f := func(c rune) bool {
		return c == '\r' || c == '\n'
	}

	for _, t := range lines {
		l := strings.TrimRightFunc(t.line, f)

		m := valuesExp.FindAllStringSubmatch(l, -1)

		c.Assert(m, HasLen, 1)
		c.Assert(m[0][1], Matches, "test")
		c.Assert(m[0][2], Matches, t.expected)
	}
}
10 vendor/github.com/siddontang/go-mysql/dump/parser.go generated vendored
@ -6,6 +6,7 @@ import (
	"io"
	"regexp"
	"strconv"
	"strings"

	"github.com/juju/errors"
	"github.com/siddontang/go-mysql/mysql"
@ -34,7 +35,7 @@ func init() {

// Parse the dump data with Dumper generate.
// It can not parse all the data formats with mysqldump outputs
func Parse(r io.Reader, h ParseHandler) error {
func Parse(r io.Reader, h ParseHandler, parseBinlogPos bool) error {
	rb := bufio.NewReaderSize(r, 1024*16)

	var db string
@ -48,9 +49,12 @@ func Parse(r io.Reader, h ParseHandler) error {
			break
		}

		line = line[0 : len(line)-1]
		// Ignore '\n' on Linux or '\r\n' on Windows
		line = strings.TrimRightFunc(line, func(c rune) bool {
			return c == '\r' || c == '\n'
		})

		if !binlogParsed {
		if parseBinlogPos && !binlogParsed {
			if m := binlogExp.FindAllStringSubmatch(line, -1); len(m) == 1 {
				name := m[0][1]
				pos, err := strconv.ParseUint(m[0][2], 10, 64)
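The parser hunk replaces the fixed `line[0 : len(line)-1]` truncation, which only dropped a single trailing `\n`, with `strings.TrimRightFunc`, which handles both Unix `\n` and Windows `\r\n` endings. In isolation:

```go
package main

import (
	"fmt"
	"strings"
)

// trimEOL strips a trailing '\n' (Linux) or '\r\n' (Windows), matching the
// TrimRightFunc call introduced in Parse.
func trimEOL(line string) string {
	return strings.TrimRightFunc(line, func(c rune) bool {
		return c == '\r' || c == '\n'
	})
}

func main() {
	fmt.Printf("%q\n", trimEOL("INSERT INTO `t` VALUES (1);\r\n")) // "INSERT INTO `t` VALUES (1);"
}
```

The old one-byte slice would have left a stray `\r` on Windows-style dumps, which then broke the regular-expression matches downstream.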
6 vendor/github.com/siddontang/go-mysql/failover/mariadb_gtid_handler.go generated vendored
@ -46,12 +46,12 @@ func (h *MariadbGTIDHandler) FindBestSlaves(slaves []*Server) ([]*Server, error)
		if len(str) == 0 {
			seq = 0
		} else {
			g, err := ParseMariadbGTIDSet(str)
			g, err := ParseMariadbGTID(str)
			if err != nil {
				return nil, errors.Trace(err)
			}

			seq = g.(MariadbGTID).SequenceNumber
			seq = g.SequenceNumber
		}

		ps[i] = seq
@ -118,7 +118,7 @@ func (h *MariadbGTIDHandler) WaitRelayLogDone(s *Server) error {
	fname, _ := r.GetStringByName(0, "Master_Log_File")
	pos, _ := r.GetIntByName(0, "Read_Master_Log_Pos")

	return s.MasterPosWait(Position{fname, uint32(pos)}, 0)
	return s.MasterPosWait(Position{Name: fname, Pos: uint32(pos)}, 0)
}

func (h *MariadbGTIDHandler) WaitCatchMaster(s *Server, m *Server) error {
4 vendor/github.com/siddontang/go-mysql/failover/server.go generated vendored
@ -152,7 +152,7 @@ func (s *Server) FetchSlaveReadPos() (Position, error) {
	fname, _ := r.GetStringByName(0, "Master_Log_File")
	pos, _ := r.GetIntByName(0, "Read_Master_Log_Pos")

	return Position{fname, uint32(pos)}, nil
	return Position{Name: fname, Pos: uint32(pos)}, nil
}

// Get current executed binlog filename and position from master
@ -165,7 +165,7 @@ func (s *Server) FetchSlaveExecutePos() (Position, error) {
	fname, _ := r.GetStringByName(0, "Relay_Master_Log_File")
	pos, _ := r.GetIntByName(0, "Exec_Master_Log_Pos")

	return Position{fname, uint32(pos)}, nil
	return Position{Name: fname, Pos: uint32(pos)}, nil
}

func (s *Server) MasterPosWait(pos Position, timeout int) error {
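Both hunks above switch the `Position` literal from positional form to keyed form (`Position{Name: ..., Pos: ...}`), which keeps the code compiling and correct even if fields are later added or reordered in the struct. A self-contained illustration with a stand-in struct (ours, not the library's):

```go
package main

import "fmt"

// Position is a stand-in for the library's binlog position struct.
type Position struct {
	Name string
	Pos  uint32
}

func main() {
	// Keyed literals name each field explicitly, so they survive field
	// reordering; positional literals silently depend on declaration order.
	p := Position{Name: "mysql-bin.000001", Pos: 4}
	fmt.Println(p.Name, p.Pos) // mysql-bin.000001 4
}
```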
30 vendor/github.com/siddontang/go-mysql/glide.lock generated vendored
@ -1,30 +0,0 @@
hash: 1a3d05afef96cd7601a004e573b128db9051eecd1b5d0a3d69d3fa1ee1a3e3b8
updated: 2016-09-03T12:30:00.028685232+08:00
imports:
- name: github.com/BurntSushi/toml
  version: 056c9bc7be7190eaa7715723883caffa5f8fa3e4
- name: github.com/go-sql-driver/mysql
  version: 3654d25ec346ee8ce71a68431025458d52a38ac0
- name: github.com/jmoiron/sqlx
  version: 54aec3fd91a2b2129ffaca0d652b8a9223ee2d9e
  subpackages:
  - reflectx
- name: github.com/juju/errors
  version: 6f54ff6318409d31ff16261533ce2c8381a4fd5d
- name: github.com/ngaut/log
  version: cec23d3e10b016363780d894a0eb732a12c06e02
- name: github.com/pingcap/check
  version: ce8a2f822ab1e245a4eefcef2996531c79c943f1
- name: github.com/satori/go.uuid
  version: 879c5887cd475cd7864858769793b2ceb0d44feb
- name: github.com/siddontang/go
  version: 354e14e6c093c661abb29fd28403b3c19cff5514
  subpackages:
  - hack
  - ioutil2
  - sync2
- name: golang.org/x/net
  version: 6acef71eb69611914f7a30939ea9f6e194c78172
  subpackages:
  - context
testImports: []
26 vendor/github.com/siddontang/go-mysql/glide.yaml generated vendored
@ -1,26 +0,0 @@
package: github.com/siddontang/go-mysql
import:
- package: github.com/BurntSushi/toml
  version: 056c9bc7be7190eaa7715723883caffa5f8fa3e4
- package: github.com/go-sql-driver/mysql
  version: 3654d25ec346ee8ce71a68431025458d52a38ac0
- package: github.com/jmoiron/sqlx
  version: 54aec3fd91a2b2129ffaca0d652b8a9223ee2d9e
  subpackages:
  - reflectx
- package: github.com/juju/errors
  version: 6f54ff6318409d31ff16261533ce2c8381a4fd5d
- package: github.com/ngaut/log
  version: cec23d3e10b016363780d894a0eb732a12c06e02
- package: github.com/pingcap/check
  version: ce8a2f822ab1e245a4eefcef2996531c79c943f1
- package: github.com/satori/go.uuid
  version: ^1.1.0
- package: github.com/siddontang/go
  version: 354e14e6c093c661abb29fd28403b3c19cff5514
  subpackages:
  - hack
  - ioutil2
  - sync2
- package: golang.org/x/net
  version: 6acef71eb69611914f7a30939ea9f6e194c78172
17 vendor/github.com/siddontang/go-mysql/mysql/const.go generated vendored
@ -6,16 +6,22 @@ const (
	TimeFormat string = "2006-01-02 15:04:05"
)

var (
	// maybe you can change for your specified name
	ServerVersion string = "go-mysql-0.1"
)

const (
	OK_HEADER          byte = 0x00
	MORE_DATE_HEADER   byte = 0x01
	ERR_HEADER         byte = 0xff
	EOF_HEADER         byte = 0xfe
	LocalInFile_HEADER byte = 0xfb

	CACHE_SHA2_FAST_AUTH byte = 0x03
	CACHE_SHA2_FULL_AUTH byte = 0x04
)

const (
	AUTH_MYSQL_OLD_PASSWORD    = "mysql_old_password"
	AUTH_NATIVE_PASSWORD       = "mysql_native_password"
	AUTH_CACHING_SHA2_PASSWORD = "caching_sha2_password"
	AUTH_SHA256_PASSWORD       = "sha256_password"
)

const (
@ -151,7 +157,6 @@ const (
)

const (
	AUTH_NAME = "mysql_native_password"
	DEFAULT_CHARSET               = "utf8"
	DEFAULT_COLLATION_ID   uint8  = 33
	DEFAULT_COLLATION_NAME string = "utf8_general_ci"
7 vendor/github.com/siddontang/go-mysql/mysql/error.go generated vendored
@ -57,3 +57,10 @@ func NewError(errCode uint16, message string) *MyError {

	return e
}

func ErrorCode(errMsg string) (code int) {
	var tmpStr string
	// golang scanf doesn't support %*,so I used a temporary variable
	fmt.Sscanf(errMsg, "%s%d", &tmpStr, &code)
	return
}
12 vendor/github.com/siddontang/go-mysql/mysql/field.go generated vendored
@ -31,42 +31,42 @@ func (p FieldData) Parse() (f *Field, err error) {
	var n int
	pos := 0
	//skip catelog, always def
	n, err = SkipLengthEnodedString(p)
	n, err = SkipLengthEncodedString(p)
	if err != nil {
		return
	}
	pos += n

	//schema
	f.Schema, _, n, err = LengthEnodedString(p[pos:])
	f.Schema, _, n, err = LengthEncodedString(p[pos:])
	if err != nil {
		return
	}
	pos += n

	//table
	f.Table, _, n, err = LengthEnodedString(p[pos:])
	f.Table, _, n, err = LengthEncodedString(p[pos:])
	if err != nil {
		return
	}
	pos += n

	//org_table
	f.OrgTable, _, n, err = LengthEnodedString(p[pos:])
	f.OrgTable, _, n, err = LengthEncodedString(p[pos:])
	if err != nil {
		return
	}
	pos += n

	//name
	f.Name, _, n, err = LengthEnodedString(p[pos:])
	f.Name, _, n, err = LengthEncodedString(p[pos:])
	if err != nil {
		return
	}
	pos += n

	//org_name
	f.OrgName, _, n, err = LengthEnodedString(p[pos:])
	f.OrgName, _, n, err = LengthEncodedString(p[pos:])
	if err != nil {
		return
	}
4 vendor/github.com/siddontang/go-mysql/mysql/gtid.go generated vendored
@ -11,6 +11,10 @@ type GTIDSet interface {
	Equal(o GTIDSet) bool

	Contain(o GTIDSet) bool

	Update(GTIDStr string) error

	Clone() GTIDSet
}

func ParseGTIDSet(flavor string, s string) (GTIDSet, error) {
202 vendor/github.com/siddontang/go-mysql/mysql/mariadb_gtid.go generated vendored
@ -1,28 +1,32 @@
package mysql

import (
	"bytes"
	"fmt"
	"strconv"
	"strings"

	"github.com/juju/errors"
	"github.com/siddontang/go-log/log"
	"github.com/siddontang/go/hack"
)

// MariadbGTID represent mariadb gtid, [domain ID]-[server-id]-[sequence]
type MariadbGTID struct {
	DomainID       uint32
	ServerID       uint32
	SequenceNumber uint64
}

// We don't support multi source replication, so the mariadb gtid set may have only domain-server-sequence
func ParseMariadbGTIDSet(str string) (GTIDSet, error) {
// ParseMariadbGTID parses mariadb gtid, [domain ID]-[server-id]-[sequence]
func ParseMariadbGTID(str string) (*MariadbGTID, error) {
	if len(str) == 0 {
		return MariadbGTID{0, 0, 0}, nil
		return &MariadbGTID{0, 0, 0}, nil
	}

	seps := strings.Split(str, "-")

	var gtid MariadbGTID
	gtid := new(MariadbGTID)

	if len(seps) != 3 {
		return gtid, errors.Errorf("invalid Mariadb GTID %v, must domain-server-sequence", str)
@ -43,13 +47,13 @@ func ParseMariadbGTIDSet(str string) (GTIDSet, error) {
		return gtid, errors.Errorf("invalid MariaDB GTID Sequence number (%v): %v", seps[2], err)
	}

	return MariadbGTID{
	return &MariadbGTID{
		DomainID:       uint32(domainID),
		ServerID:       uint32(serverID),
		SequenceNumber: sequenceID}, nil
}

func (gtid MariadbGTID) String() string {
func (gtid *MariadbGTID) String() string {
	if gtid.DomainID == 0 && gtid.ServerID == 0 && gtid.SequenceNumber == 0 {
		return ""
	}
@ -57,24 +61,172 @@ func (gtid MariadbGTID) String() string {
	return fmt.Sprintf("%d-%d-%d", gtid.DomainID, gtid.ServerID, gtid.SequenceNumber)
}

func (gtid MariadbGTID) Encode() []byte {
	return []byte(gtid.String())
}

func (gtid MariadbGTID) Equal(o GTIDSet) bool {
	other, ok := o.(MariadbGTID)
	if !ok {
		return false
	}

	return gtid == other
}

func (gtid MariadbGTID) Contain(o GTIDSet) bool {
	other, ok := o.(MariadbGTID)
	if !ok {
		return false
	}

// Contain return whether one mariadb gtid covers another mariadb gtid
func (gtid *MariadbGTID) Contain(other *MariadbGTID) bool {
	return gtid.DomainID == other.DomainID && gtid.SequenceNumber >= other.SequenceNumber
}

// Clone clones a mariadb gtid
func (gtid *MariadbGTID) Clone() *MariadbGTID {
	o := new(MariadbGTID)
	*o = *gtid
	return o
}

func (gtid *MariadbGTID) forward(newer *MariadbGTID) error {
	if newer.DomainID != gtid.DomainID {
		return errors.Errorf("%s is not same with doamin of %s", newer, gtid)
	}

	/*
		Here's a simplified example of binlog events.
		Although I think one domain should have only one update at same time, we can't limit the user's usage.
		we just output a warn log and let it go on
		| mysqld-bin.000001 | 1453 | Gtid | 112 | 1495 | BEGIN GTID 0-112-6 |
		| mysqld-bin.000001 | 1624 | Xid  | 112 | 1655 | COMMIT xid=74 |
		| mysqld-bin.000001 | 1655 | Gtid | 112 | 1697 | BEGIN GTID 0-112-7 |
		| mysqld-bin.000001 | 1826 | Xid  | 112 | 1857 | COMMIT xid=75 |
		| mysqld-bin.000001 | 1857 | Gtid | 111 | 1899 | BEGIN GTID 0-111-5 |
		| mysqld-bin.000001 | 1981 | Xid  | 111 | 2012 | COMMIT xid=77 |
		| mysqld-bin.000001 | 2012 | Gtid | 112 | 2054 | BEGIN GTID 0-112-8 |
		| mysqld-bin.000001 | 2184 | Xid  | 112 | 2215 | COMMIT xid=116 |
		| mysqld-bin.000001 | 2215 | Gtid | 111 | 2257 | BEGIN GTID 0-111-6 |
	*/
	if newer.SequenceNumber <= gtid.SequenceNumber {
		log.Warnf("out of order binlog appears with gtid %s vs current position gtid %s", newer, gtid)
	}

	gtid.ServerID = newer.ServerID
	gtid.SequenceNumber = newer.SequenceNumber
	return nil
}

// MariadbGTIDSet is a set of mariadb gtid
type MariadbGTIDSet struct {
	Sets map[uint32]*MariadbGTID
}

// ParseMariadbGTIDSet parses str into mariadb gtid sets
func ParseMariadbGTIDSet(str string) (GTIDSet, error) {
	s := new(MariadbGTIDSet)
	s.Sets = make(map[uint32]*MariadbGTID)
	if str == "" {
		return s, nil
	}

	sp := strings.Split(str, ",")

	//todo, handle redundant same uuid
	for i := 0; i < len(sp); i++ {
		err := s.Update(sp[i])
		if err != nil {
			return nil, errors.Trace(err)
		}
	}
	return s, nil
}

// AddSet adds mariadb gtid into mariadb gtid set
func (s *MariadbGTIDSet) AddSet(gtid *MariadbGTID) error {
|
||||
if gtid == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
o, ok := s.Sets[gtid.DomainID]
|
||||
if ok {
|
||||
err := o.forward(gtid)
|
||||
if err != nil {
|
||||
return errors.Trace(err)
|
||||
}
|
||||
} else {
|
||||
s.Sets[gtid.DomainID] = gtid
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Update updates mariadb gtid set
|
||||
func (s *MariadbGTIDSet) Update(GTIDStr string) error {
|
||||
gtid, err := ParseMariadbGTID(GTIDStr)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
err = s.AddSet(gtid)
|
||||
return errors.Trace(err)
|
||||
}
|
||||
|
||||
func (s *MariadbGTIDSet) String() string {
|
||||
return hack.String(s.Encode())
|
||||
}
|
||||
|
||||
// Encode encodes mariadb gtid set
|
||||
func (s *MariadbGTIDSet) Encode() []byte {
|
||||
var buf bytes.Buffer
|
||||
sep := ""
|
||||
for _, gtid := range s.Sets {
|
||||
buf.WriteString(sep)
|
||||
buf.WriteString(gtid.String())
|
||||
sep = ","
|
||||
}
|
||||
|
||||
return buf.Bytes()
|
||||
}
|
||||
|
||||
// Clone clones a mariadb gtid set
|
||||
func (s *MariadbGTIDSet) Clone() GTIDSet {
|
||||
clone := &MariadbGTIDSet{
|
||||
Sets: make(map[uint32]*MariadbGTID),
|
||||
}
|
||||
for domainID, gtid := range s.Sets {
|
||||
clone.Sets[domainID] = gtid.Clone()
|
||||
}
|
||||
|
||||
return clone
|
||||
}
|
||||
|
||||
// Equal returns true if two mariadb gtid set is same, otherwise return false
|
||||
func (s *MariadbGTIDSet) Equal(o GTIDSet) bool {
|
||||
other, ok := o.(*MariadbGTIDSet)
|
||||
if !ok {
|
||||
return false
|
||||
}
|
||||
|
||||
if len(other.Sets) != len(s.Sets) {
|
||||
return false
|
||||
}
|
||||
|
||||
for domainID, gtid := range other.Sets {
|
||||
o, ok := s.Sets[domainID]
|
||||
if !ok {
|
||||
return false
|
||||
}
|
||||
|
||||
if *gtid != *o {
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
// Contain return whether one mariadb gtid set covers another mariadb gtid set
|
||||
func (s *MariadbGTIDSet) Contain(o GTIDSet) bool {
|
||||
other, ok := o.(*MariadbGTIDSet)
|
||||
if !ok {
|
||||
return false
|
||||
}
|
||||
|
||||
for doaminID, gtid := range other.Sets {
|
||||
o, ok := s.Sets[doaminID]
|
||||
if !ok {
|
||||
return false
|
||||
}
|
||||
|
||||
if !o.Contain(gtid) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
return true
|
||||
}
|
||||
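The MariaDB GTID semantics in the diff above can be illustrated standalone. The following is a minimal sketch (not the vendored API; `gtid`, `parse`, and `contain` are hypothetical names) of the `domain-server-sequence` string format and the `Contain` rule: same domain, sequence number at least as large.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// gtid mirrors the domain-server-sequence triple used above (illustrative only).
type gtid struct {
	domainID, serverID uint32
	sequenceNumber     uint64
}

// parse splits a "0-1-1" style string, as ParseMariadbGTID does.
func parse(s string) (gtid, error) {
	seps := strings.Split(s, "-")
	if len(seps) != 3 {
		return gtid{}, fmt.Errorf("invalid Mariadb GTID %v, must domain-server-sequence", s)
	}
	d, err := strconv.ParseUint(seps[0], 10, 32)
	if err != nil {
		return gtid{}, err
	}
	sv, err := strconv.ParseUint(seps[1], 10, 32)
	if err != nil {
		return gtid{}, err
	}
	seq, err := strconv.ParseUint(seps[2], 10, 64)
	if err != nil {
		return gtid{}, err
	}
	return gtid{uint32(d), uint32(sv), seq}, nil
}

// contain applies the same rule as MariadbGTID.Contain: same domain,
// sequence number at least as large.
func contain(a, b gtid) bool {
	return a.domainID == b.domainID && a.sequenceNumber >= b.sequenceNumber
}

func main() {
	a, _ := parse("1-2-2")
	b, _ := parse("1-1-1")
	fmt.Println(contain(a, b)) // true: same domain, higher sequence
	fmt.Println(contain(b, a)) // false: lower sequence does not cover
}
```

Note the server ID is deliberately ignored by the containment rule; only domain and sequence matter, which matches the test cases in the new `mariadb_gtid_test.go`.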
|
234
vendor/github.com/siddontang/go-mysql/mysql/mariadb_gtid_test.go
generated
vendored
Normal file
234
vendor/github.com/siddontang/go-mysql/mysql/mariadb_gtid_test.go
generated
vendored
Normal file
@ -0,0 +1,234 @@
package mysql

import (
	"github.com/pingcap/check"
)

type mariaDBTestSuite struct {
}

var _ = check.Suite(&mariaDBTestSuite{})

func (t *mariaDBTestSuite) SetUpSuite(c *check.C) {

}

func (t *mariaDBTestSuite) TearDownSuite(c *check.C) {

}

func (t *mariaDBTestSuite) TestParseMariaDBGTID(c *check.C) {
	cases := []struct {
		gtidStr   string
		hashError bool
	}{
		{"0-1-1", false},
		{"", false},
		{"0-1-1-1", true},
		{"1", true},
		{"0-1-seq", true},
	}

	for _, cs := range cases {
		gtid, err := ParseMariadbGTID(cs.gtidStr)
		if cs.hashError {
			c.Assert(err, check.NotNil)
		} else {
			c.Assert(err, check.IsNil)
			c.Assert(gtid.String(), check.Equals, cs.gtidStr)
		}
	}
}

func (t *mariaDBTestSuite) TestMariaDBGTIDConatin(c *check.C) {
	cases := []struct {
		originGTIDStr, otherGTIDStr string
		contain                     bool
	}{
		{"0-1-1", "0-1-2", false},
		{"0-1-1", "", true},
		{"2-1-1", "1-1-1", false},
		{"1-2-1", "1-1-1", true},
		{"1-2-2", "1-1-1", true},
	}

	for _, cs := range cases {
		originGTID, err := ParseMariadbGTID(cs.originGTIDStr)
		c.Assert(err, check.IsNil)
		otherGTID, err := ParseMariadbGTID(cs.otherGTIDStr)
		c.Assert(err, check.IsNil)

		c.Assert(originGTID.Contain(otherGTID), check.Equals, cs.contain)
	}
}

func (t *mariaDBTestSuite) TestMariaDBGTIDClone(c *check.C) {
	gtid, err := ParseMariadbGTID("1-1-1")
	c.Assert(err, check.IsNil)

	clone := gtid.Clone()
	c.Assert(gtid, check.DeepEquals, clone)
}

func (t *mariaDBTestSuite) TestMariaDBForward(c *check.C) {
	cases := []struct {
		currentGTIDStr, newerGTIDStr string
		hashError                    bool
	}{
		{"0-1-1", "0-1-2", false},
		{"0-1-1", "", false},
		{"2-1-1", "1-1-1", true},
		{"1-2-1", "1-1-1", false},
		{"1-2-2", "1-1-1", false},
	}

	for _, cs := range cases {
		currentGTID, err := ParseMariadbGTID(cs.currentGTIDStr)
		c.Assert(err, check.IsNil)
		newerGTID, err := ParseMariadbGTID(cs.newerGTIDStr)
		c.Assert(err, check.IsNil)

		err = currentGTID.forward(newerGTID)
		if cs.hashError {
			c.Assert(err, check.NotNil)
			c.Assert(currentGTID.String(), check.Equals, cs.currentGTIDStr)
		} else {
			c.Assert(err, check.IsNil)
			c.Assert(currentGTID.String(), check.Equals, cs.newerGTIDStr)
		}
	}
}

func (t *mariaDBTestSuite) TestParseMariaDBGTIDSet(c *check.C) {
	cases := []struct {
		gtidStr     string
		subGTIDs    map[uint32]string //domain ID => gtid string
		expectedStr []string          // test String()
		hasError    bool
	}{
		{"0-1-1", map[uint32]string{0: "0-1-1"}, []string{"0-1-1"}, false},
		{"", nil, []string{""}, false},
		{"0-1-1,1-2-3", map[uint32]string{0: "0-1-1", 1: "1-2-3"}, []string{"0-1-1,1-2-3", "1-2-3,0-1-1"}, false},
		{"0-1--1", nil, nil, true},
	}

	for _, cs := range cases {
		gtidSet, err := ParseMariadbGTIDSet(cs.gtidStr)
		if cs.hasError {
			c.Assert(err, check.NotNil)
		} else {
			c.Assert(err, check.IsNil)
			mariadbGTIDSet, ok := gtidSet.(*MariadbGTIDSet)
			c.Assert(ok, check.IsTrue)

			// check sub gtid
			c.Assert(mariadbGTIDSet.Sets, check.HasLen, len(cs.subGTIDs))
			for domainID, gtid := range mariadbGTIDSet.Sets {
				c.Assert(mariadbGTIDSet.Sets, check.HasKey, domainID)
				c.Assert(gtid.String(), check.Equals, cs.subGTIDs[domainID])
			}

			// check String() function
			inExpectedResult := false
			actualStr := mariadbGTIDSet.String()
			for _, str := range cs.expectedStr {
				if str == actualStr {
					inExpectedResult = true
					break
				}
			}
			c.Assert(inExpectedResult, check.IsTrue)
		}
	}
}

func (t *mariaDBTestSuite) TestMariaDBGTIDSetUpdate(c *check.C) {
	cases := []struct {
		isNilGTID bool
		gtidStr   string
		subGTIDs  map[uint32]string
	}{
		{true, "", map[uint32]string{1: "1-1-1", 2: "2-2-2"}},
		{false, "1-2-2", map[uint32]string{1: "1-2-2", 2: "2-2-2"}},
		{false, "1-2-1", map[uint32]string{1: "1-2-1", 2: "2-2-2"}},
		{false, "3-2-1", map[uint32]string{1: "1-1-1", 2: "2-2-2", 3: "3-2-1"}},
	}

	for _, cs := range cases {
		gtidSet, err := ParseMariadbGTIDSet("1-1-1,2-2-2")
		c.Assert(err, check.IsNil)
		mariadbGTIDSet, ok := gtidSet.(*MariadbGTIDSet)
		c.Assert(ok, check.IsTrue)

		if cs.isNilGTID {
			c.Assert(mariadbGTIDSet.AddSet(nil), check.IsNil)
		} else {
			err := gtidSet.Update(cs.gtidStr)
			c.Assert(err, check.IsNil)
		}
		// check sub gtid
		c.Assert(mariadbGTIDSet.Sets, check.HasLen, len(cs.subGTIDs))
		for domainID, gtid := range mariadbGTIDSet.Sets {
			c.Assert(mariadbGTIDSet.Sets, check.HasKey, domainID)
			c.Assert(gtid.String(), check.Equals, cs.subGTIDs[domainID])
		}
	}
}

func (t *mariaDBTestSuite) TestMariaDBGTIDSetEqual(c *check.C) {
	cases := []struct {
		originGTIDStr, otherGTIDStr string
		equals                      bool
	}{
		{"", "", true},
		{"1-1-1", "1-1-1,2-2-2", false},
		{"1-1-1,2-2-2", "1-1-1", false},
		{"1-1-1,2-2-2", "1-1-1,2-2-2", true},
		{"1-1-1,2-2-2", "1-1-1,2-2-3", false},
	}

	for _, cs := range cases {
		originGTID, err := ParseMariadbGTIDSet(cs.originGTIDStr)
		c.Assert(err, check.IsNil)

		otherGTID, err := ParseMariadbGTIDSet(cs.otherGTIDStr)
		c.Assert(err, check.IsNil)

		c.Assert(originGTID.Equal(otherGTID), check.Equals, cs.equals)
	}
}

func (t *mariaDBTestSuite) TestMariaDBGTIDSetContain(c *check.C) {
	cases := []struct {
		originGTIDStr, otherGTIDStr string
		contain                     bool
	}{
		{"", "", true},
		{"1-1-1", "1-1-1,2-2-2", false},
		{"1-1-1,2-2-2", "1-1-1", true},
		{"1-1-1,2-2-2", "1-1-1,2-2-2", true},
		{"1-1-1,2-2-2", "1-1-1,2-2-1", true},
		{"1-1-1,2-2-2", "1-1-1,2-2-3", false},
	}

	for _, cs := range cases {
		originGTIDSet, err := ParseMariadbGTIDSet(cs.originGTIDStr)
		c.Assert(err, check.IsNil)

		otherGTIDSet, err := ParseMariadbGTIDSet(cs.otherGTIDStr)
		c.Assert(err, check.IsNil)

		c.Assert(originGTIDSet.Contain(otherGTIDSet), check.Equals, cs.contain)
	}
}

func (t *mariaDBTestSuite) TestMariaDBGTIDSetClone(c *check.C) {
	cases := []string{"", "1-1-1", "1-1-1,2-2-2"}

	for _, str := range cases {
		gtidSet, err := ParseMariadbGTIDSet(str)
		c.Assert(err, check.IsNil)

		c.Assert(gtidSet.Clone(), check.DeepEquals, gtidSet)
	}
}
46 vendor/github.com/siddontang/go-mysql/mysql/mysql_gtid.go (generated, vendored)
@@ -97,7 +97,11 @@ func (s IntervalSlice) Normalize() IntervalSlice {
 			n = append(n, s[i])
 			continue
 		} else {
-			n[len(n)-1] = Interval{last.Start, s[i].Stop}
+			stop := s[i].Stop
+			if last.Stop > stop {
+				stop = last.Stop
+			}
+			n[len(n)-1] = Interval{last.Start, stop}
 		}
 	}
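The hunk above fixes a merge bug in interval normalization: when an interval is fully contained in the previous one (e.g. `{2,3}` inside `{1,4}`), the old code shrank the merged stop to the contained interval's stop. A standalone sketch of the corrected logic (hypothetical `interval`/`normalize` names, not the vendored `IntervalSlice` type):

```go
package main

import (
	"fmt"
	"sort"
)

// interval is a [start, stop) span, mirroring the patched IntervalSlice logic.
type interval struct{ start, stop int64 }

// normalize sorts and merges overlapping intervals, keeping the larger stop
// when one interval is fully contained in the previous one.
func normalize(s []interval) []interval {
	if len(s) == 0 {
		return s
	}
	sort.Slice(s, func(i, j int) bool { return s[i].start < s[j].start })
	n := []interval{s[0]}
	for _, iv := range s[1:] {
		last := n[len(n)-1]
		if iv.start > last.stop {
			n = append(n, iv) // disjoint: start a new merged interval
			continue
		}
		stop := iv.stop
		if last.stop > stop {
			stop = last.stop // keep the wider span instead of shrinking it
		}
		n[len(n)-1] = interval{last.start, stop}
	}
	return n
}

func main() {
	// {1,4} contains {2,3}; the old code would have produced {1,3}.
	fmt.Println(normalize([]interval{{1, 4}, {2, 3}})) // [{1 4}]
}
```

This matches the new test case `IntervalSlice{Interval{1, 4}, Interval{2, 3}}` normalizing to `IntervalSlice{Interval{1, 4}}` in `mysql_test.go`.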
@@ -285,17 +289,28 @@ func (s *UUIDSet) Decode(data []byte) error {
 	return err
 }

+func (s *UUIDSet) Clone() *UUIDSet {
+	clone := new(UUIDSet)
+
+	clone.SID, _ = uuid.FromString(s.SID.String())
+	clone.Intervals = s.Intervals.Normalize()
+
+	return clone
+}
+
 type MysqlGTIDSet struct {
 	Sets map[string]*UUIDSet
 }

 func ParseMysqlGTIDSet(str string) (GTIDSet, error) {
 	s := new(MysqlGTIDSet)
+	s.Sets = make(map[string]*UUIDSet)
+	if str == "" {
+		return s, nil
+	}

 	sp := strings.Split(str, ",")

-	s.Sets = make(map[string]*UUIDSet, len(sp))
-
 	//todo, handle redundant same uuid
 	for i := 0; i < len(sp); i++ {
 		if set, err := ParseUUIDSet(sp[i]); err != nil {
@@ -334,6 +349,9 @@ func DecodeMysqlGTIDSet(data []byte) (*MysqlGTIDSet, error) {
 }

 func (s *MysqlGTIDSet) AddSet(set *UUIDSet) {
+	if set == nil {
+		return
+	}
 	sid := set.SID.String()
 	o, ok := s.Sets[sid]
 	if ok {
@@ -343,6 +361,17 @@ func (s *MysqlGTIDSet) AddSet(set *UUIDSet) {
 	}
 }

+func (s *MysqlGTIDSet) Update(GTIDStr string) error {
+	uuidSet, err := ParseUUIDSet(GTIDStr)
+	if err != nil {
+		return err
+	}
+
+	s.AddSet(uuidSet)
+
+	return nil
+}
+
 func (s *MysqlGTIDSet) Contain(o GTIDSet) bool {
 	sub, ok := o.(*MysqlGTIDSet)
 	if !ok {
@@ -407,3 +436,14 @@ func (s *MysqlGTIDSet) Encode() []byte {

 	return buf.Bytes()
 }
+
+func (gtid *MysqlGTIDSet) Clone() GTIDSet {
+	clone := &MysqlGTIDSet{
+		Sets: make(map[string]*UUIDSet),
+	}
+	for sid, uuidSet := range gtid.Sets {
+		clone.Sets[sid] = uuidSet.Clone()
+	}
+
+	return clone
+}
43 vendor/github.com/siddontang/go-mysql/mysql/mysql_test.go (generated, vendored)
@@ -1,6 +1,7 @@
 package mysql

 import (
+	"strings"
 	"testing"

 	"github.com/pingcap/check"
@@ -15,11 +16,11 @@ type mysqlTestSuite struct {

 var _ = check.Suite(&mysqlTestSuite{})

-func (s *mysqlTestSuite) SetUpSuite(c *check.C) {
+func (t *mysqlTestSuite) SetUpSuite(c *check.C) {

 }

-func (s *mysqlTestSuite) TearDownSuite(c *check.C) {
+func (t *mysqlTestSuite) TearDownSuite(c *check.C) {

 }

@@ -59,6 +60,12 @@ func (t *mysqlTestSuite) TestMysqlGTIDIntervalSlice(c *check.C) {
 	n = i.Normalize()
 	c.Assert(n, check.DeepEquals, IntervalSlice{Interval{1, 3}, Interval{4, 5}})

+	i = IntervalSlice{Interval{1, 4}, Interval{2, 3}}
+	i.Sort()
+	c.Assert(i, check.DeepEquals, IntervalSlice{Interval{1, 4}, Interval{2, 3}})
+	n = i.Normalize()
+	c.Assert(n, check.DeepEquals, IntervalSlice{Interval{1, 4}})
+
 	n1 := IntervalSlice{Interval{1, 3}, Interval{4, 5}}
 	n2 := IntervalSlice{Interval{1, 2}}

@@ -91,6 +98,15 @@ func (t *mysqlTestSuite) TestMysqlGTIDCodec(c *check.C) {
 	c.Assert(gs, check.DeepEquals, o)
 }

+func (t *mysqlTestSuite) TestMysqlUpdate(c *check.C) {
+	g1, err := ParseMysqlGTIDSet("3E11FA47-71CA-11E1-9E33-C80AA9429562:21-57")
+	c.Assert(err, check.IsNil)
+
+	g1.Update("3E11FA47-71CA-11E1-9E33-C80AA9429562:21-58")
+
+	c.Assert(strings.ToUpper(g1.String()), check.Equals, "3E11FA47-71CA-11E1-9E33-C80AA9429562:21-58")
+}
+
 func (t *mysqlTestSuite) TestMysqlGTIDContain(c *check.C) {
 	g1, err := ParseMysqlGTIDSet("3E11FA47-71CA-11E1-9E33-C80AA9429562:23")
 	c.Assert(err, check.IsNil)
@@ -151,3 +167,26 @@ func (t *mysqlTestSuite) TestMysqlParseBinaryUint64(c *check.C) {
 	u64 := ParseBinaryUint64([]byte{1, 2, 3, 4, 5, 6, 7, 128})
 	c.Assert(u64, check.Equals, 128*uint64(72057594037927936)+7*uint64(281474976710656)+6*uint64(1099511627776)+5*uint64(4294967296)+4*16777216+3*65536+2*256+1)
 }
+
+func (t *mysqlTestSuite) TestErrorCode(c *check.C) {
+	tbls := []struct {
+		msg  string
+		code int
+	}{
+		{"ERROR 1094 (HY000): Unknown thread id: 1094", 1094},
+		{"error string", 0},
+		{"abcdefg", 0},
+		{"123455 ks094", 0},
+		{"ERROR 1046 (3D000): Unknown error 1046", 1046},
+	}
+	for _, v := range tbls {
+		c.Assert(ErrorCode(v.msg), check.Equals, v.code)
+	}
+}
+
+func (t *mysqlTestSuite) TestMysqlNullDecode(c *check.C) {
+	_, isNull, n := LengthEncodedInt([]byte{0xfb})
+
+	c.Assert(isNull, check.IsTrue)
+	c.Assert(n, check.Equals, 1)
+}
7 vendor/github.com/siddontang/go-mysql/mysql/resultset.go (generated, vendored)
@@ -28,7 +28,7 @@ func (p RowData) ParseText(f []*Field) ([]interface{}, error) {
 	var n int = 0

 	for i := range f {
-		v, isNull, n, err = LengthEnodedString(p[pos:])
+		v, isNull, n, err = LengthEncodedString(p[pos:])
 		if err != nil {
 			return nil, errors.Trace(err)
 		}
@@ -115,7 +115,8 @@ func (p RowData) ParseBinary(f []*Field) ([]interface{}, error) {
 		} else {
 			data[i] = ParseBinaryInt24(p[pos : pos+3])
 		}
-		pos += 4
+		//3 byte
+		pos += 3
 		continue

 	case MYSQL_TYPE_LONG:
@@ -150,7 +151,7 @@ func (p RowData) ParseBinary(f []*Field) ([]interface{}, error) {
 		MYSQL_TYPE_BIT, MYSQL_TYPE_ENUM, MYSQL_TYPE_SET, MYSQL_TYPE_TINY_BLOB,
 		MYSQL_TYPE_MEDIUM_BLOB, MYSQL_TYPE_LONG_BLOB, MYSQL_TYPE_BLOB,
 		MYSQL_TYPE_VAR_STRING, MYSQL_TYPE_STRING, MYSQL_TYPE_GEOMETRY:
-		v, isNull, n, err = LengthEnodedString(p[pos:])
+		v, isNull, n, err = LengthEncodedString(p[pos:])
 		pos += n
 		if err != nil {
 			return nil, errors.Trace(err)
73 vendor/github.com/siddontang/go-mysql/mysql/resultset_helper.go (generated, vendored)
@@ -38,6 +38,8 @@ func formatTextValue(value interface{}) ([]byte, error) {
 		return v, nil
 	case string:
 		return hack.Slice(v), nil
+	case nil:
+		return nil, nil
 	default:
 		return nil, errors.Errorf("invalid type %T", value)
 	}
@@ -77,23 +79,40 @@ func formatBinaryValue(value interface{}) ([]byte, error) {
 		return nil, errors.Errorf("invalid type %T", value)
 	}
 }

+func fieldType(value interface{}) (typ uint8, err error) {
+	switch value.(type) {
+	case int8, int16, int32, int64, int:
+		typ = MYSQL_TYPE_LONGLONG
+	case uint8, uint16, uint32, uint64, uint:
+		typ = MYSQL_TYPE_LONGLONG
+	case float32, float64:
+		typ = MYSQL_TYPE_DOUBLE
+	case string, []byte:
+		typ = MYSQL_TYPE_VAR_STRING
+	case nil:
+		typ = MYSQL_TYPE_NULL
+	default:
+		err = errors.Errorf("unsupport type %T for resultset", value)
+	}
+	return
+}
+
 func formatField(field *Field, value interface{}) error {
 	switch value.(type) {
 	case int8, int16, int32, int64, int:
 		field.Charset = 63
 		field.Type = MYSQL_TYPE_LONGLONG
 		field.Flag = BINARY_FLAG | NOT_NULL_FLAG
 	case uint8, uint16, uint32, uint64, uint:
 		field.Charset = 63
 		field.Type = MYSQL_TYPE_LONGLONG
 		field.Flag = BINARY_FLAG | NOT_NULL_FLAG | UNSIGNED_FLAG
 	case float32, float64:
 		field.Charset = 63
 		field.Type = MYSQL_TYPE_DOUBLE
 		field.Flag = BINARY_FLAG | NOT_NULL_FLAG
 	case string, []byte:
 		field.Charset = 33
 		field.Type = MYSQL_TYPE_VAR_STRING
+	case nil:
+		field.Charset = 33
 	default:
 		return errors.Errorf("unsupport type %T for resultset", value)
 	}
@@ -106,7 +125,13 @@ func BuildSimpleTextResultset(names []string, values [][]interface{}) (*Resultse
 	r.Fields = make([]*Field, len(names))

 	var b []byte
 	var err error

+	if len(values) == 0 {
+		for i, name := range names {
+			r.Fields[i] = &Field{Name: hack.Slice(name), Charset: 33, Type: MYSQL_TYPE_NULL}
+		}
+		return r, nil
+	}

 	for i, vs := range values {
 		if len(vs) != len(r.Fields) {
@@ -115,13 +140,23 @@ func BuildSimpleTextResultset(names []string, values [][]interface{}) (*Resultse

 		var row []byte
 		for j, value := range vs {
-			if i == 0 {
-				field := &Field{}
-				r.Fields[j] = field
-				field.Name = hack.Slice(names[j])
-
-				if err = formatField(field, value); err != nil {
-					return nil, errors.Trace(err)
-				}
+			typ, err := fieldType(value)
+			if err != nil {
+				return nil, errors.Trace(err)
+			}
+			if r.Fields[j] == nil {
+				r.Fields[j] = &Field{Name: hack.Slice(names[j]), Type: typ}
+				formatField(r.Fields[j], value)
+			} else if typ != r.Fields[j].Type {
+				// we got another type in the same column. in general, we treat it as an error, except
+				// the case, when old value was null, and the new one isn't null, so we can update
+				// type info for fields.
+				oldIsNull, newIsNull := r.Fields[j].Type == MYSQL_TYPE_NULL, typ == MYSQL_TYPE_NULL
+				if oldIsNull && !newIsNull { // old is null, new isn't, update type info.
+					r.Fields[j].Type = typ
+					formatField(r.Fields[j], value)
+				} else if !oldIsNull && !newIsNull { // different non-null types, that's an error.
+					return nil, errors.Errorf("row types aren't consistent")
+				}
 			}
 			b, err = formatTextValue(value)
@@ -130,7 +165,12 @@ func BuildSimpleTextResultset(names []string, values [][]interface{}) (*Resultse
 				return nil, errors.Trace(err)
 			}

-			row = append(row, PutLengthEncodedString(b)...)
+			if b == nil {
+				// NULL value is encoded as 0xfb here (without additional info about length)
+				row = append(row, 0xfb)
+			} else {
+				row = append(row, PutLengthEncodedString(b)...)
+			}
 		}

 		r.RowDatas = append(r.RowDatas, row)
@@ -145,7 +185,6 @@ func BuildSimpleBinaryResultset(names []string, values [][]interface{}) (*Result
 	r.Fields = make([]*Field, len(names))

 	var b []byte
-	var err error

 	bitmapLen := ((len(names) + 7 + 2) >> 3)

@@ -161,8 +200,12 @@ func BuildSimpleBinaryResultset(names []string, values [][]interface{}) (*Result
 		row = append(row, nullBitmap...)

 		for j, value := range vs {
+			typ, err := fieldType(value)
+			if err != nil {
+				return nil, errors.Trace(err)
+			}
 			if i == 0 {
-				field := &Field{}
+				field := &Field{Type: typ}
 				r.Fields[j] = field
 				field.Name = hack.Slice(names[j])
103 vendor/github.com/siddontang/go-mysql/mysql/util.go (generated, vendored)
@@ -11,6 +11,8 @@ import (
 	"github.com/juju/errors"
 	"github.com/siddontang/go/hack"
+	"crypto/sha256"
+	"crypto/rsa"
 )

 func Pstack() string {
@@ -48,6 +50,62 @@ func CalcPassword(scramble, password []byte) []byte {
 	return scramble
 }

+// Hash password using MySQL 8+ method (SHA256)
+func CalcCachingSha2Password(scramble []byte, password string) []byte {
+	if len(password) == 0 {
+		return nil
+	}
+
+	// XOR(SHA256(password), SHA256(SHA256(SHA256(password)), scramble))
+
+	crypt := sha256.New()
+	crypt.Write([]byte(password))
+	message1 := crypt.Sum(nil)
+
+	crypt.Reset()
+	crypt.Write(message1)
+	message1Hash := crypt.Sum(nil)
+
+	crypt.Reset()
+	crypt.Write(message1Hash)
+	crypt.Write(scramble)
+	message2 := crypt.Sum(nil)
+
+	for i := range message1 {
+		message1[i] ^= message2[i]
+	}
+
+	return message1
+}
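The scramble added in `CalcCachingSha2Password` above can be demonstrated end-to-end. This is an illustrative sketch (hypothetical `scrambleSha2`/`storedFor`/`serverCheck` names, not the library API) showing why the XOR construction lets a server that caches only the double hash `SHA256(SHA256(password))` verify the client reply: XORing the reply with `SHA256(stored || nonce)` recovers `SHA256(password)`, which re-hashes to the stored value.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// scrambleSha2 mirrors CalcCachingSha2Password above:
// XOR(SHA256(password), SHA256(SHA256(SHA256(password)) || nonce)).
func scrambleSha2(nonce []byte, password string) []byte {
	h1 := sha256.Sum256([]byte(password))        // SHA256(password)
	h2 := sha256.Sum256(h1[:])                   // double hash, what the server stores
	h3 := sha256.Sum256(append(h2[:], nonce...)) // SHA256(stored || nonce)

	out := make([]byte, len(h1))
	for i := range h1 {
		out[i] = h1[i] ^ h3[i]
	}
	return out
}

// storedFor returns SHA256(SHA256(password)), the value the server caches.
func storedFor(password string) []byte {
	h1 := sha256.Sum256([]byte(password))
	h2 := sha256.Sum256(h1[:])
	return h2[:]
}

// serverCheck shows the verification side: XOR the reply with
// SHA256(stored || nonce) to recover SHA256(password), then re-hash it.
func serverCheck(stored, nonce, reply []byte) bool {
	h3 := sha256.Sum256(append(append([]byte{}, stored...), nonce...))
	recovered := make([]byte, len(reply))
	for i := range reply {
		recovered[i] = reply[i] ^ h3[i]
	}
	check := sha256.Sum256(recovered)
	return bytes.Equal(check[:], stored)
}

func main() {
	nonce := []byte("12345678901234567890") // 20-byte server nonce, as in the handshake
	reply := scrambleSha2(nonce, "secret")
	fmt.Println(serverCheck(storedFor("secret"), nonce, reply)) // true
}
```

The password itself never crosses the wire in this exchange; only the nonce-dependent XOR does, which is why an empty password is sent as a 0-length payload (see the `ReadPacketTo` change in `packet/conn.go` below).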
+
+func EncryptPassword(password string, seed []byte, pub *rsa.PublicKey) ([]byte, error) {
+	plain := make([]byte, len(password)+1)
+	copy(plain, password)
+	for i := range plain {
+		j := i % len(seed)
+		plain[i] ^= seed[j]
+	}
+	sha1v := sha1.New()
+	return rsa.EncryptOAEP(sha1v, rand.Reader, pub, plain, nil)
+}
+
+// encodes a uint64 value and appends it to the given bytes slice
+func AppendLengthEncodedInteger(b []byte, n uint64) []byte {
+	switch {
+	case n <= 250:
+		return append(b, byte(n))
+
+	case n <= 0xffff:
+		return append(b, 0xfc, byte(n), byte(n>>8))
+
+	case n <= 0xffffff:
+		return append(b, 0xfd, byte(n), byte(n>>8), byte(n>>16))
+	}
+	return append(b, 0xfe, byte(n), byte(n>>8), byte(n>>16), byte(n>>24),
+		byte(n>>32), byte(n>>40), byte(n>>48), byte(n>>56))
+}
+
 func RandomBuf(size int) ([]byte, error) {
 	buf := make([]byte, size)

@@ -84,39 +142,33 @@ func BFixedLengthInt(buf []byte) uint64 {
 }

 func LengthEncodedInt(b []byte) (num uint64, isNull bool, n int) {
-	switch b[0] {
+	if len(b) == 0 {
+		return 0, true, 1
+	}
+
+	switch b[0] {
 	// 251: NULL
 	case 0xfb:
-		n = 1
-		isNull = true
-		return
+		return 0, true, 1

-		// 252: value of following 2
+	// 252: value of following 2
 	case 0xfc:
-		num = uint64(b[1]) | uint64(b[2])<<8
-		n = 3
-		return
+		return uint64(b[1]) | uint64(b[2])<<8, false, 3

-		// 253: value of following 3
+	// 253: value of following 3
 	case 0xfd:
-		num = uint64(b[1]) | uint64(b[2])<<8 | uint64(b[3])<<16
-		n = 4
-		return
+		return uint64(b[1]) | uint64(b[2])<<8 | uint64(b[3])<<16, false, 4

-		// 254: value of following 8
+	// 254: value of following 8
 	case 0xfe:
-		num = uint64(b[1]) | uint64(b[2])<<8 | uint64(b[3])<<16 |
+		return uint64(b[1]) | uint64(b[2])<<8 | uint64(b[3])<<16 |
 			uint64(b[4])<<24 | uint64(b[5])<<32 | uint64(b[6])<<40 |
-			uint64(b[7])<<48 | uint64(b[8])<<56
-		n = 9
-		return
+			uint64(b[7])<<48 | uint64(b[8])<<56,
+			false, 9
 	}

 	// 0-250: value of first byte
-	num = uint64(b[0])
-	n = 1
-	return
+	return uint64(b[0]), false, 1
 }
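The rewritten `LengthEncodedInt` above implements the MySQL wire format for length-encoded integers: the first byte selects the width. A standalone sketch of the same decoding rules (hypothetical `decodeLenencInt` name, not the vendored function):

```go
package main

import "fmt"

// decodeLenencInt decodes a MySQL length-encoded integer: 0x00-0xfa is the
// value itself, 0xfb marks NULL, and 0xfc/0xfd/0xfe prefix 2/3/8 following
// little-endian bytes. It returns the value, a NULL flag, and bytes consumed.
func decodeLenencInt(b []byte) (num uint64, isNull bool, n int) {
	if len(b) == 0 {
		return 0, true, 1
	}
	switch b[0] {
	case 0xfb: // NULL
		return 0, true, 1
	case 0xfc: // 2-byte integer
		return uint64(b[1]) | uint64(b[2])<<8, false, 3
	case 0xfd: // 3-byte integer
		return uint64(b[1]) | uint64(b[2])<<8 | uint64(b[3])<<16, false, 4
	case 0xfe: // 8-byte integer
		var v uint64
		for i := 0; i < 8; i++ {
			v |= uint64(b[1+i]) << (8 * uint(i))
		}
		return v, false, 9
	}
	return uint64(b[0]), false, 1 // 0x00-0xfa: the byte is the value
}

func main() {
	v, _, n := decodeLenencInt([]byte{0xfc, 0x10, 0x27})
	fmt.Println(v, n) // 10000 3
}
```

The `len(b) == 0` guard mirrors the bug fix in the diff: a zero-length payload (sent by `caching_sha2_password` for empty passwords) is treated as NULL rather than panicking on `b[0]`, which is exactly what the new `TestMysqlNullDecode` exercises with `[]byte{0xfb}`.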

 func PutLengthEncodedInt(n uint64) []byte {
@@ -137,23 +189,26 @@ func PutLengthEncodedInt(n uint64) []byte {
 	return nil
 }

-func LengthEnodedString(b []byte) ([]byte, bool, int, error) {
+// returns the string read as a bytes slice, whether the value is NULL,
+// the number of bytes read and an error, in case the string is longer than
+// the input slice
+func LengthEncodedString(b []byte) ([]byte, bool, int, error) {
 	// Get length
 	num, isNull, n := LengthEncodedInt(b)
 	if num < 1 {
-		return nil, isNull, n, nil
+		return b[n:n], isNull, n, nil
 	}

 	n += int(num)

 	// Check data length
 	if len(b) >= n {
-		return b[n-int(num) : n], false, n, nil
+		return b[n-int(num) : n : n], false, n, nil
 	}
 	return nil, false, n, io.EOF
 }

-func SkipLengthEnodedString(b []byte) (int, error) {
+func SkipLengthEncodedString(b []byte) (int, error) {
 	// Get length
 	num, _, n := LengthEncodedInt(b)
 	if num < 1 {
131 vendor/github.com/siddontang/go-mysql/packet/conn.go (generated, vendored)
@@ -1,11 +1,17 @@
 package packet

+import "C"
 import (
-	"bufio"
 	"bytes"
 	"io"
 	"net"

+	"crypto/rand"
+	"crypto/rsa"
+	"crypto/sha1"
+	"crypto/x509"
+	"encoding/pem"
+
 	"github.com/juju/errors"
 	. "github.com/siddontang/go-mysql/mysql"
 )
@@ -15,7 +21,9 @@ import (
 */
 type Conn struct {
 	net.Conn
-	br *bufio.Reader
+
+	// we removed the buffer reader because it will cause the SSLRequest to block (tls connection handshake won't be
+	// able to read the "Client Hello" data since it has been buffered into the buffer reader)

 	Sequence uint8
 }
@@ -23,7 +31,6 @@ type Conn struct {
 func NewConn(conn net.Conn) *Conn {
 	c := new(Conn)

-	c.br = bufio.NewReaderSize(conn, 4096)
 	c.Conn = conn

 	return c
@@ -37,55 +44,20 @@ func (c *Conn) ReadPacket() ([]byte, error) {
 	} else {
 		return buf.Bytes(), nil
 	}
-
-	// header := []byte{0, 0, 0, 0}
-
-	// if _, err := io.ReadFull(c.br, header); err != nil {
-	// 	return nil, ErrBadConn
-	// }
-
-	// length := int(uint32(header[0]) | uint32(header[1])<<8 | uint32(header[2])<<16)
-	// if length < 1 {
-	// 	return nil, fmt.Errorf("invalid payload length %d", length)
-	// }
-
-	// sequence := uint8(header[3])
-
-	// if sequence != c.Sequence {
-	// 	return nil, fmt.Errorf("invalid sequence %d != %d", sequence, c.Sequence)
-	// }
-
-	// c.Sequence++
-
-	// data := make([]byte, length)
-	// if _, err := io.ReadFull(c.br, data); err != nil {
-	// 	return nil, ErrBadConn
-	// } else {
-	// 	if length < MaxPayloadLen {
-	// 		return data, nil
-	// 	}
-
-	// 	var buf []byte
-	// 	buf, err = c.ReadPacket()
-	// 	if err != nil {
-	// 		return nil, ErrBadConn
-	// 	} else {
-	// 		return append(data, buf...), nil
-	// 	}
-	// }
 }

 func (c *Conn) ReadPacketTo(w io.Writer) error {
 	header := []byte{0, 0, 0, 0}

-	if _, err := io.ReadFull(c.br, header); err != nil {
+	if _, err := io.ReadFull(c.Conn, header); err != nil {
 		return ErrBadConn
 	}

 	length := int(uint32(header[0]) | uint32(header[1])<<8 | uint32(header[2])<<16)
-	if length < 1 {
-		return errors.Errorf("invalid payload length %d", length)
-	}
+	// bug fixed: caching_sha2_password will send 0-length payload (the unscrambled password) when the password is empty
+	//if length < 1 {
+	//	return errors.Errorf("invalid payload length %d", length)
+	//}

 	sequence := uint8(header[3])

@@ -95,7 +67,7 @@ func (c *Conn) ReadPacketTo(w io.Writer) error {

 	c.Sequence++

-	if n, err := io.CopyN(w, c.br, int64(length)); err != nil {
+	if n, err := io.CopyN(w, c.Conn, int64(length)); err != nil {
 		return ErrBadConn
 	} else if n != int64(length) {
 		return ErrBadConn
@@ -150,6 +122,77 @@ func (c *Conn) WritePacket(data []byte) error {
 	}
 }

+// Client clear text authentication packet
+// http://dev.mysql.com/doc/internals/en/connection-phase-packets.html#packet-Protocol::AuthSwitchResponse
+func (c *Conn) WriteClearAuthPacket(password string) error {
+	// Calculate the packet length and add a tailing 0
+	pktLen := len(password) + 1
+	data := make([]byte, 4+pktLen)
+
+	// Add the clear password [null terminated string]
+	copy(data[4:], password)
+	data[4+pktLen-1] = 0x00
+
+	return c.WritePacket(data)
+}
+
+// Caching sha2 authentication. Public key request and send encrypted password
+// http://dev.mysql.com/doc/internals/en/connection-phase-packets.html#packet-Protocol::AuthSwitchResponse
+func (c *Conn) WritePublicKeyAuthPacket(password string, cipher []byte) error {
+	// request public key
+	data := make([]byte, 4+1)
+	data[4] = 2 // cachingSha2PasswordRequestPublicKey
+	c.WritePacket(data)
+
+	data, err := c.ReadPacket()
+	if err != nil {
+		return err
+	}
+
+	block, _ := pem.Decode(data[1:])
+	pub, err := x509.ParsePKIXPublicKey(block.Bytes)
+	if err != nil {
+		return err
+	}
+
+	plain := make([]byte, len(password)+1)
+	copy(plain, password)
+	for i := range plain {
+		j := i % len(cipher)
+		plain[i] ^= cipher[j]
+	}
+	sha1v := sha1.New()
+	enc, _ := rsa.EncryptOAEP(sha1v, rand.Reader, pub.(*rsa.PublicKey), plain, nil)
+	data = make([]byte, 4+len(enc))
+	copy(data[4:], enc)
+	return c.WritePacket(data)
+}
+
+func (c *Conn) WriteEncryptedPassword(password string, seed []byte, pub *rsa.PublicKey) error {
+	enc, err := EncryptPassword(password, seed, pub)
+	if err != nil {
+		return err
+	}
+	return c.WriteAuthSwitchPacket(enc, false)
+}
+
+// http://dev.mysql.com/doc/internals/en/connection-phase-packets.html#packet-Protocol::AuthSwitchResponse
+func (c *Conn) WriteAuthSwitchPacket(authData []byte, addNUL bool) error {
+	pktLen := 4 + len(authData)
+	if addNUL {
+		pktLen++
+	}
+	data := make([]byte, pktLen)
+
+	// Add the auth data [EOF]
+	copy(data[4:], authData)
+	if addNUL {
+		data[pktLen-1] = 0x00
+	}
||||
return c.WritePacket(data)
|
||||
}
|
||||
|
||||
func (c *Conn) ResetSequence() {
|
||||
c.Sequence = 0
|
||||
}
|
||||
|
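The `ReadPacketTo` change above keeps the same header decoding: every MySQL protocol packet starts with a 4-byte header, a 3-byte little-endian payload length followed by a 1-byte sequence number. A minimal standalone sketch of that decoding (the `decodeHeader` helper is hypothetical, not part of the library):

```go
package main

import "fmt"

// decodeHeader splits the 4-byte MySQL packet header into the
// 3-byte little-endian payload length and the sequence number.
func decodeHeader(header [4]byte) (length int, sequence uint8) {
	length = int(uint32(header[0]) | uint32(header[1])<<8 | uint32(header[2])<<16)
	sequence = header[3]
	return length, sequence
}

func main() {
	// bytes 0x34 0x02 0x00 little-endian => payload length 0x000234 = 564
	l, s := decodeHeader([4]byte{0x34, 0x02, 0x00, 0x03})
	fmt.Println(l, s)
}
```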
5
vendor/github.com/siddontang/go-mysql/replication/backup.go
generated
vendored
@ -1,13 +1,12 @@
package replication

import (
	"context"
	"io"
	"os"
	"path"
	"time"

	"golang.org/x/net/context"

	"github.com/juju/errors"
	. "github.com/siddontang/go-mysql/mysql"
)
@ -41,7 +40,7 @@ func (b *BinlogSyncer) StartBackup(backupDir string, p Position, timeout time.Du
	}()

	for {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		ctx, cancel := context.WithTimeout(context.Background(), timeout)
		e, err := s.GetEvent(ctx)
		cancel()

50
vendor/github.com/siddontang/go-mysql/replication/backup_test.go
generated
vendored
Normal file
@ -0,0 +1,50 @@
package replication

import (
	"context"
	"github.com/juju/errors"
	. "github.com/pingcap/check"
	"github.com/siddontang/go-mysql/mysql"
	"os"
	"sync"
	"time"
)

func (t *testSyncerSuite) TestStartBackupEndInGivenTime(c *C) {
	t.setupTest(c, mysql.MySQLFlavor)

	t.testExecute(c, "RESET MASTER")

	var wg sync.WaitGroup
	wg.Add(1)
	defer wg.Wait()

	go func() {
		defer wg.Done()

		t.testSync(c, nil)

		t.testExecute(c, "FLUSH LOGS")

		t.testSync(c, nil)
	}()

	os.RemoveAll("./var")
	timeout := 2 * time.Second

	done := make(chan bool)

	go func() {
		err := t.b.StartBackup("./var", mysql.Position{Name: "", Pos: uint32(0)}, timeout)
		c.Assert(err, IsNil)
		done <- true
	}()
	failTimeout := 5 * timeout
	ctx, _ := context.WithTimeout(context.Background(), failTimeout)
	select {
	case <-done:
		return
	case <-ctx.Done():
		c.Assert(errors.New("time out error"), IsNil)
	}
}
36
vendor/github.com/siddontang/go-mysql/replication/binlogstreamer.go
generated
vendored
@ -1,10 +1,10 @@
package replication

import (
	"golang.org/x/net/context"

	"context"
	"time"
	"github.com/juju/errors"
	"github.com/ngaut/log"
	"github.com/siddontang/go-log/log"
)

var (
@ -36,6 +36,36 @@ func (s *BinlogStreamer) GetEvent(ctx context.Context) (*BinlogEvent, error) {
	}
}

// Get the binlog event with starttime, if current binlog event timestamp smaller than specify starttime
// return nil event
func (s *BinlogStreamer) GetEventWithStartTime(ctx context.Context, startTime time.Time) (*BinlogEvent, error) {
	if s.err != nil {
		return nil, ErrNeedSyncAgain
	}
	startUnix := startTime.Unix()
	select {
	case c := <-s.ch:
		if int64(c.Header.Timestamp) >= startUnix {
			return c, nil
		}
		return nil, nil
	case s.err = <-s.ech:
		return nil, s.err
	case <-ctx.Done():
		return nil, ctx.Err()
	}
}

// DumpEvents dumps all left events
func (s *BinlogStreamer) DumpEvents() []*BinlogEvent {
	count := len(s.ch)
	events := make([]*BinlogEvent, 0, count)
	for i := 0; i < count; i++ {
		events = append(events, <-s.ch)
	}
	return events
}

func (s *BinlogStreamer) close() {
	s.closeWithError(ErrSyncClosed)
}
250
vendor/github.com/siddontang/go-mysql/replication/binlogsyncer.go
generated
vendored
250
vendor/github.com/siddontang/go-mysql/replication/binlogsyncer.go
generated
vendored
@ -1,17 +1,19 @@
package replication

import (
	"context"
	"crypto/tls"
	"encoding/binary"
	"fmt"
	"net"
	"os"
	"strings"
	"sync"
	"time"

	"golang.org/x/net/context"

	"github.com/juju/errors"
	"github.com/ngaut/log"
	"github.com/satori/go.uuid"
	"github.com/siddontang/go-log/log"
	"github.com/siddontang/go-mysql/client"
	. "github.com/siddontang/go-mysql/mysql"
)
@ -40,6 +42,9 @@ type BinlogSyncerConfig struct {
	// If not set, use os.Hostname() instead.
	Localhost string

	// Charset is for MySQL client character set
	Charset string

	// SemiSyncEnabled enables semi-sync or not.
	SemiSyncEnabled bool

@ -48,6 +53,46 @@ type BinlogSyncerConfig struct {

	// If not nil, use the provided tls.Config to connect to the database using TLS/SSL.
	TLSConfig *tls.Config

	// Use replication.Time structure for timestamp and datetime.
	// We will use Local location for timestamp and UTC location for datatime.
	ParseTime bool

	// If ParseTime is false, convert TIMESTAMP into this specified timezone. If
	// ParseTime is true, this option will have no effect and TIMESTAMP data will
	// be parsed into the local timezone and a full time.Time struct will be
	// returned.
	//
	// Note that MySQL TIMESTAMP columns are offset from the machine local
	// timezone while DATETIME columns are offset from UTC. This is consistent
	// with documented MySQL behaviour as it return TIMESTAMP in local timezone
	// and DATETIME in UTC.
	//
	// Setting this to UTC effectively equalizes the TIMESTAMP and DATETIME time
	// strings obtained from MySQL.
	TimestampStringLocation *time.Location

	// Use decimal.Decimal structure for decimals.
	UseDecimal bool

	// RecvBufferSize sets the size in bytes of the operating system's receive buffer associated with the connection.
	RecvBufferSize int

	// master heartbeat period
	HeartbeatPeriod time.Duration

	// read timeout
	ReadTimeout time.Duration

	// maximum number of attempts to re-establish a broken connection
	MaxReconnectAttempts int

	// Only works when MySQL/MariaDB variable binlog_checksum=CRC32.
	// For MySQL, binlog_checksum was introduced since 5.6.2, but CRC32 was set as default value since 5.6.6 .
	// https://dev.mysql.com/doc/refman/5.6/en/replication-options-binary-log.html#option_mysqld_binlog-checksum
	// For MariaDB, binlog_checksum was introduced since MariaDB 5.3, but CRC32 was set as default value since MariaDB 10.2.1 .
	// https://mariadb.com/kb/en/library/replication-and-binary-log-server-system-variables/#binlog_checksum
	VerifyChecksum bool
}

// BinlogSyncer syncs binlog event from server.
@ -64,14 +109,23 @@ type BinlogSyncer struct {

	nextPos Position

	gset GTIDSet

	running bool

	ctx    context.Context
	cancel context.CancelFunc

	lastConnectionID uint32

	retryCount int
}

// NewBinlogSyncer creates the BinlogSyncer with cfg.
func NewBinlogSyncer(cfg BinlogSyncerConfig) *BinlogSyncer {
	if cfg.ServerID == 0 {
		log.Fatal("can't use 0 as the server ID")
	}

	// Clear the Password to avoid outputing it in log.
	pass := cfg.Password
@ -84,7 +138,10 @@ func NewBinlogSyncer(cfg BinlogSyncerConfig) *BinlogSyncer {
	b.cfg = cfg
	b.parser = NewBinlogParser()
	b.parser.SetRawMode(b.cfg.RawModeEnabled)

	b.parser.SetParseTime(b.cfg.ParseTime)
	b.parser.SetTimestampStringLocation(b.cfg.TimestampStringLocation)
	b.parser.SetUseDecimal(b.cfg.UseDecimal)
	b.parser.SetVerifyChecksum(b.cfg.VerifyChecksum)
	b.running = false
	b.ctx, b.cancel = context.WithCancel(context.Background())

@ -136,15 +193,53 @@ func (b *BinlogSyncer) registerSlave() error {
		b.c.Close()
	}

	log.Infof("register slave for master server %s:%d", b.cfg.Host, b.cfg.Port)
	addr := ""
	if strings.Contains(b.cfg.Host, "/") {
		addr = b.cfg.Host
	} else {
		addr = fmt.Sprintf("%s:%d", b.cfg.Host, b.cfg.Port)
	}

	log.Infof("register slave for master server %s", addr)
	var err error
	b.c, err = client.Connect(fmt.Sprintf("%s:%d", b.cfg.Host, b.cfg.Port), b.cfg.User, b.cfg.Password, "", func(c *client.Conn) {
		c.TLSConfig = b.cfg.TLSConfig
	b.c, err = client.Connect(addr, b.cfg.User, b.cfg.Password, "", func(c *client.Conn) {
		c.SetTLSConfig(b.cfg.TLSConfig)
	})
	if err != nil {
		return errors.Trace(err)
	}

	if len(b.cfg.Charset) != 0 {
		b.c.SetCharset(b.cfg.Charset)
	}

	//set read timeout
	if b.cfg.ReadTimeout > 0 {
		b.c.SetReadDeadline(time.Now().Add(b.cfg.ReadTimeout))
	}

	if b.cfg.RecvBufferSize > 0 {
		if tcp, ok := b.c.Conn.Conn.(*net.TCPConn); ok {
			tcp.SetReadBuffer(b.cfg.RecvBufferSize)
		}
	}

	// kill last connection id
	if b.lastConnectionID > 0 {
		cmd := fmt.Sprintf("KILL %d", b.lastConnectionID)
		if _, err := b.c.Execute(cmd); err != nil {
			log.Errorf("kill connection %d error %v", b.lastConnectionID, err)
			// Unknown thread id
			if code := ErrorCode(err.Error()); code != ER_NO_SUCH_THREAD {
				return errors.Trace(err)
			}
		}
		log.Infof("kill last connection id %d", b.lastConnectionID)
	}

	// save last last connection id for kill
	b.lastConnectionID = b.c.GetConnectionID()

	//for mysql 5.6+, binlog has a crc32 checksum
	//before mysql 5.6, this will not work, don't matter.:-)
	if r, err := b.c.Execute("SHOW GLOBAL VARIABLES LIKE 'BINLOG_CHECKSUM'"); err != nil {
@ -180,6 +275,14 @@ func (b *BinlogSyncer) registerSlave() error {
		}
	}

	if b.cfg.HeartbeatPeriod > 0 {
		_, err = b.c.Execute(fmt.Sprintf("SET @master_heartbeat_period=%d;", b.cfg.HeartbeatPeriod))
		if err != nil {
			log.Errorf("failed to set @master_heartbeat_period=%d, err: %v", b.cfg.HeartbeatPeriod, err)
			return errors.Trace(err)
		}
	}

	if err = b.writeRegisterSlaveCommand(); err != nil {
		return errors.Trace(err)
	}
@ -191,7 +294,7 @@ func (b *BinlogSyncer) registerSlave() error {
	return nil
}

func (b *BinlogSyncer) enalbeSemiSync() error {
func (b *BinlogSyncer) enableSemiSync() error {
	if !b.cfg.SemiSyncEnabled {
		return nil
	}
@ -224,7 +327,7 @@ func (b *BinlogSyncer) prepare() error {
		return errors.Trace(err)
	}

	if err := b.enalbeSemiSync(); err != nil {
	if err := b.enableSemiSync(); err != nil {
		return errors.Trace(err)
	}

@ -241,6 +344,11 @@ func (b *BinlogSyncer) startDumpStream() *BinlogStreamer {
	return s
}

// GetNextPosition returns the next position of the syncer
func (b *BinlogSyncer) GetNextPosition() Position {
	return b.nextPos
}

// StartSync starts syncing from the `pos` position.
func (b *BinlogSyncer) StartSync(pos Position) (*BinlogStreamer, error) {
	log.Infof("begin to sync binlog from position %s", pos)
@ -261,7 +369,9 @@ func (b *BinlogSyncer) StartSync(pos Position) (*BinlogStreamer, error) {

// StartSyncGTID starts syncing from the `gset` GTIDSet.
func (b *BinlogSyncer) StartSyncGTID(gset GTIDSet) (*BinlogStreamer, error) {
	log.Infof("begin to sync binlog from GTID %s", gset)
	log.Infof("begin to sync binlog from GTID set %s", gset)

	b.gset = gset

	b.m.Lock()
	defer b.m.Unlock()
@ -275,11 +385,12 @@ func (b *BinlogSyncer) StartSyncGTID(gset GTIDSet) (*BinlogStreamer, error) {
	}

	var err error
	if b.cfg.Flavor != MariaDBFlavor {
	switch b.cfg.Flavor {
	case MariaDBFlavor:
		err = b.writeBinlogDumpMariadbGTIDCommand(gset)
	default:
		// default use MySQL
		err = b.writeBinlogDumpMysqlGTIDCommand(gset)
	} else {
		err = b.writeBinlogDumpMariadbGTIDCommand(gset)
	}

	if err != nil {
@ -289,7 +400,7 @@ func (b *BinlogSyncer) StartSyncGTID(gset GTIDSet) (*BinlogStreamer, error) {
	return b.startDumpStream(), nil
}

func (b *BinlogSyncer) writeBinglogDumpCommand(p Position) error {
func (b *BinlogSyncer) writeBinlogDumpCommand(p Position) error {
	b.c.ResetSequence()

	data := make([]byte, 4+1+4+2+4+len(p.Name))
@ -313,7 +424,7 @@ func (b *BinlogSyncer) writeBinglogDumpCommand(p Position) error {
}

func (b *BinlogSyncer) writeBinlogDumpMysqlGTIDCommand(gset GTIDSet) error {
	p := Position{"", 4}
	p := Position{Name: "", Pos: 4}
	gtidData := gset.Encode()

	b.c.ResetSequence()
@ -369,7 +480,7 @@ func (b *BinlogSyncer) writeBinlogDumpMariadbGTIDCommand(gset GTIDSet) error {
	}

	// Since we use @slave_connect_state, the file and position here are ignored.
	return b.writeBinglogDumpCommand(Position{"", 0})
	return b.writeBinlogDumpCommand(Position{Name: "", Pos: 0})
}

// localHostname returns the hostname that register slave would register as.
@ -444,21 +555,25 @@ func (b *BinlogSyncer) replySemiSyncACK(p Position) error {
		return errors.Trace(err)
	}

	_, err = b.c.ReadOKPacket()
	if err != nil {
	}
	return errors.Trace(err)
	return nil
}

func (b *BinlogSyncer) retrySync() error {
	b.m.Lock()
	defer b.m.Unlock()

	log.Infof("begin to re-sync from %s", b.nextPos)

	b.parser.Reset()
	if err := b.prepareSyncPos(b.nextPos); err != nil {
		return errors.Trace(err)

	if b.gset != nil {
		log.Infof("begin to re-sync from %s", b.gset.String())
		if err := b.prepareSyncGTID(b.gset); err != nil {
			return errors.Trace(err)
		}
	} else {
		log.Infof("begin to re-sync from %s", b.nextPos)
		if err := b.prepareSyncPos(b.nextPos); err != nil {
			return errors.Trace(err)
		}
	}

	return nil
@ -474,13 +589,34 @@ func (b *BinlogSyncer) prepareSyncPos(pos Position) error {
		return errors.Trace(err)
	}

	if err := b.writeBinglogDumpCommand(pos); err != nil {
	if err := b.writeBinlogDumpCommand(pos); err != nil {
		return errors.Trace(err)
	}

	return nil
}

func (b *BinlogSyncer) prepareSyncGTID(gset GTIDSet) error {
	var err error

	if err = b.prepare(); err != nil {
		return errors.Trace(err)
	}

	switch b.cfg.Flavor {
	case MariaDBFlavor:
		err = b.writeBinlogDumpMariadbGTIDCommand(gset)
	default:
		// default use MySQL
		err = b.writeBinlogDumpMysqlGTIDCommand(gset)
	}

	if err != nil {
		return err
	}
	return nil
}

func (b *BinlogSyncer) onStream(s *BinlogStreamer) {
	defer func() {
		if e := recover(); e != nil {
@ -495,21 +631,27 @@ func (b *BinlogSyncer) onStream(s *BinlogStreamer) {
			log.Error(err)

			// we meet connection error, should re-connect again with
			// last nextPos we got.
			if len(b.nextPos.Name) == 0 {
			// last nextPos or nextGTID we got.
			if len(b.nextPos.Name) == 0 && b.gset == nil {
				// we can't get the correct position, close.
				s.closeWithError(err)
				return
			}

			// TODO: add a max retry count.
			for {
				select {
				case <-b.ctx.Done():
					s.close()
					return
				case <-time.After(time.Second):
					b.retryCount++
					if err = b.retrySync(); err != nil {
						if b.cfg.MaxReconnectAttempts > 0 && b.retryCount >= b.cfg.MaxReconnectAttempts {
							log.Errorf("retry sync err: %v, exceeded max retries (%d)", err, b.cfg.MaxReconnectAttempts)
							s.closeWithError(err)
							return
						}

						log.Errorf("retry sync err: %v, wait 1s and retry again", err)
						continue
					}
@ -522,6 +664,14 @@ func (b *BinlogSyncer) onStream(s *BinlogStreamer) {
			continue
		}

		//set read timeout
		if b.cfg.ReadTimeout > 0 {
			b.c.SetReadDeadline(time.Now().Add(b.cfg.ReadTimeout))
		}

		// Reset retry count on successful packet receieve
		b.retryCount = 0

		switch data[0] {
		case OK_HEADER:
			if err = b.parseEvent(s, data); err != nil {
@ -557,7 +707,7 @@ func (b *BinlogSyncer) parseEvent(s *BinlogStreamer, data []byte) error {
		data = data[2:]
	}

	e, err := b.parser.parse(data)
	e, err := b.parser.Parse(data)
	if err != nil {
		return errors.Trace(err)
	}
@ -566,11 +716,33 @@ func (b *BinlogSyncer) parseEvent(s *BinlogStreamer, data []byte) error {
		// Some events like FormatDescriptionEvent return 0, ignore.
		b.nextPos.Pos = e.Header.LogPos
	}

	if re, ok := e.Event.(*RotateEvent); ok {
		b.nextPos.Name = string(re.NextLogName)
		b.nextPos.Pos = uint32(re.Position)
	switch event := e.Event.(type) {
	case *RotateEvent:
		b.nextPos.Name = string(event.NextLogName)
		b.nextPos.Pos = uint32(event.Position)
		log.Infof("rotate to %s", b.nextPos)
	case *GTIDEvent:
		if b.gset == nil {
			break
		}
		u, _ := uuid.FromBytes(event.SID)
		err := b.gset.Update(fmt.Sprintf("%s:%d", u.String(), event.GNO))
		if err != nil {
			return errors.Trace(err)
		}
	case *MariadbGTIDEvent:
		if b.gset == nil {
			break
		}
		GTID := event.GTID
		err := b.gset.Update(fmt.Sprintf("%d-%d-%d", GTID.DomainID, GTID.ServerID, GTID.SequenceNumber))
		if err != nil {
			return errors.Trace(err)
		}
	case *XIDEvent:
		event.GSet = b.getGtidSet()
	case *QueryEvent:
		event.GSet = b.getGtidSet()
	}

	needStop := false
@ -593,3 +765,15 @@ func (b *BinlogSyncer) parseEvent(s *BinlogStreamer, data []byte) error {

	return nil
}

func (b *BinlogSyncer) getGtidSet() GTIDSet {
	if b.gset == nil {
		return nil
	}
	return b.gset.Clone()
}

// LastConnectionID returns last connectionID.
func (b *BinlogSyncer) LastConnectionID() uint32 {
	return b.lastConnectionID
}
61
vendor/github.com/siddontang/go-mysql/replication/event.go
generated
vendored
@ -2,7 +2,6 @@ package replication

import (
	"encoding/binary"
	//"encoding/hex"
	"fmt"
	"io"
	"strconv"
@ -16,11 +15,15 @@ import (
)

const (
	EventHeaderSize = 19
	EventHeaderSize            = 19
	SidLength                  = 16
	LogicalTimestampTypeCode   = 2
	PartLogicalTimestampLength = 8
	BinlogChecksumLength       = 4
)

type BinlogEvent struct {
	// raw binlog data, including crc32 checksum if exists
	// raw binlog data which contains all data, including binlog header and event body, and including crc32 checksum if exists
	RawData []byte

	Header *EventHeader
@ -50,7 +53,7 @@ type EventError struct {
}

func (e *EventError) Error() string {
	return e.Err
	return fmt.Sprintf("Header %#v, Data %q, Err: %v", e.Header, e.Data, e.Err)
}

type EventHeader struct {
@ -216,6 +219,9 @@ func (e *RotateEvent) Dump(w io.Writer) {

type XIDEvent struct {
	XID uint64

	// in fact XIDEvent dosen't have the GTIDSet information, just for beneficial to use
	GSet GTIDSet
}

func (e *XIDEvent) Decode(data []byte) error {
@ -225,6 +231,9 @@ func (e *XIDEvent) Decode(data []byte) error {

func (e *XIDEvent) Dump(w io.Writer) {
	fmt.Fprintf(w, "XID: %d\n", e.XID)
	if e.GSet != nil {
		fmt.Fprintf(w, "GTIDSet: %s\n", e.GSet.String())
	}
	fmt.Fprintln(w)
}

@ -235,6 +244,9 @@ type QueryEvent struct {
	StatusVars []byte
	Schema     []byte
	Query      []byte

	// in fact QueryEvent dosen't have the GTIDSet information, just for beneficial to use
	GSet GTIDSet
}

func (e *QueryEvent) Decode(data []byte) error {
@ -275,21 +287,36 @@ func (e *QueryEvent) Dump(w io.Writer) {
	//fmt.Fprintf(w, "Status vars: \n%s", hex.Dump(e.StatusVars))
	fmt.Fprintf(w, "Schema: %s\n", e.Schema)
	fmt.Fprintf(w, "Query: %s\n", e.Query)
	if e.GSet != nil {
		fmt.Fprintf(w, "GTIDSet: %s\n", e.GSet.String())
	}
	fmt.Fprintln(w)
}

type GTIDEvent struct {
	CommitFlag uint8
	SID        []byte
	GNO        int64
	CommitFlag     uint8
	SID            []byte
	GNO            int64
	LastCommitted  int64
	SequenceNumber int64
}

func (e *GTIDEvent) Decode(data []byte) error {
	e.CommitFlag = uint8(data[0])

	e.SID = data[1:17]

	e.GNO = int64(binary.LittleEndian.Uint64(data[17:]))
	pos := 0
	e.CommitFlag = uint8(data[pos])
	pos++
	e.SID = data[pos : pos+SidLength]
	pos += SidLength
	e.GNO = int64(binary.LittleEndian.Uint64(data[pos:]))
	pos += 8
	if len(data) >= 42 {
		if uint8(data[pos]) == LogicalTimestampTypeCode {
			pos++
			e.LastCommitted = int64(binary.LittleEndian.Uint64(data[pos:]))
			pos += PartLogicalTimestampLength
			e.SequenceNumber = int64(binary.LittleEndian.Uint64(data[pos:]))
		}
	}
	return nil
}

@ -297,6 +324,8 @@ func (e *GTIDEvent) Dump(w io.Writer) {
	fmt.Fprintf(w, "Commit flag: %d\n", e.CommitFlag)
	u, _ := uuid.FromBytes(e.SID)
	fmt.Fprintf(w, "GTID_NEXT: %s:%d\n", u.String(), e.GNO)
	fmt.Fprintf(w, "LAST_COMMITTED: %d\n", e.LastCommitted)
	fmt.Fprintf(w, "SEQUENCE_NUMBER: %d\n", e.SequenceNumber)
	fmt.Fprintln(w)
}

@ -382,16 +411,16 @@ func (e *ExecuteLoadQueryEvent) Dump(w io.Writer) {
// case MARIADB_ANNOTATE_ROWS_EVENT:
//	return "MariadbAnnotateRowsEvent"

type MariadbAnnotaeRowsEvent struct {
type MariadbAnnotateRowsEvent struct {
	Query []byte
}

func (e *MariadbAnnotaeRowsEvent) Decode(data []byte) error {
func (e *MariadbAnnotateRowsEvent) Decode(data []byte) error {
	e.Query = data
	return nil
}

func (e *MariadbAnnotaeRowsEvent) Dump(w io.Writer) {
func (e *MariadbAnnotateRowsEvent) Dump(w io.Writer) {
	fmt.Fprintf(w, "Query: %s\n", e.Query)
	fmt.Fprintln(w)
}
@ -424,7 +453,7 @@ func (e *MariadbGTIDEvent) Decode(data []byte) error {
}

func (e *MariadbGTIDEvent) Dump(w io.Writer) {
	fmt.Fprintf(w, "GTID: %s\n", e.GTID)
	fmt.Fprintf(w, "GTID: %v\n", e.GTID)
	fmt.Fprintln(w)
}
16
vendor/github.com/siddontang/go-mysql/replication/json_binary.go
generated
vendored
@ -70,8 +70,13 @@ func jsonbGetValueEntrySize(isSmall bool) int {

// decodeJsonBinary decodes the JSON binary encoding data and returns
// the common JSON encoding data.
func decodeJsonBinary(data []byte) ([]byte, error) {
	d := new(jsonBinaryDecoder)
func (e *RowsEvent) decodeJsonBinary(data []byte) ([]byte, error) {
	// Sometimes, we can insert a NULL JSON even we set the JSON field as NOT NULL.
	// If we meet this case, we can return an empty slice.
	if len(data) == 0 {
		return []byte{}, nil
	}
	d := jsonBinaryDecoder{useDecimal: e.useDecimal}

	if d.isDataShort(data, 1) {
		return nil, d.err
@ -86,7 +91,8 @@ func decodeJsonBinary(data []byte) ([]byte, error) {
}

type jsonBinaryDecoder struct {
	err error
	useDecimal bool
	err        error
}

func (d *jsonBinaryDecoder) decodeValue(tp byte, data []byte) interface{} {
@ -382,7 +388,7 @@ func (d *jsonBinaryDecoder) decodeDecimal(data []byte) interface{} {
	precision := int(data[0])
	scale := int(data[1])

	v, _, err := decodeDecimal(data[2:], precision, scale)
	v, _, err := decodeDecimal(data[2:], precision, scale, d.useDecimal)
	d.err = err

	return v
@ -463,7 +469,7 @@ func (d *jsonBinaryDecoder) decodeVariableLength(data []byte) (int, int) {

	if v&0x80 == 0 {
		if length > math.MaxUint32 {
			d.err = errors.Errorf("variable length %d must <= %d", length, math.MaxUint32)
			d.err = errors.Errorf("variable length %d must <= %d", length, int64(math.MaxUint32))
			return 0, 0
		}

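The `decodeVariableLength` hunk above checks `v&0x80 == 0` because JSONB variable-length integers pack 7 data bits per byte and use the high bit as a continuation flag. A sketch of that scheme under stated assumptions (this is a from-scratch illustration of the encoding, not the vendored implementation verbatim):

```go
package main

import "fmt"

// decodeVariableLength reads a JSONB variable-length integer:
// 7 data bits per byte, high bit set means "more bytes follow".
// It returns the decoded length and the number of bytes consumed.
func decodeVariableLength(data []byte) (length int, n int) {
	var shift uint
	for i, b := range data {
		length |= int(b&0x7f) << shift
		if b&0x80 == 0 {
			return length, i + 1
		}
		shift += 7
	}
	return 0, 0 // ran out of bytes before the terminating byte
}

func main() {
	// 0xAC 0x02 => (0xAC & 0x7f) | (0x02 << 7) = 44 + 256 = 300
	l, n := decodeVariableLength([]byte{0xAC, 0x02})
	fmt.Println(l, n)
}
```

The overflow guard in the diff exists because an adversarial byte stream can keep the continuation bit set long enough to exceed `math.MaxUint32`.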
207
vendor/github.com/siddontang/go-mysql/replication/parser.go
generated
vendored
@ -2,13 +2,22 @@ package replication

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"hash/crc32"
	"io"
	"os"
	"sync/atomic"
	"time"

	"github.com/juju/errors"
)

var (
	// ErrChecksumMismatch indicates binlog checksum mismatch.
	ErrChecksumMismatch = errors.New("binlog checksum mismatch, data may be corrupted")
)

type BinlogParser struct {
	format *FormatDescriptionEvent

@ -16,6 +25,15 @@ type BinlogParser struct {

	// for rawMode, we only parse FormatDescriptionEvent and RotateEvent
	rawMode bool

	parseTime               bool
	timestampStringLocation *time.Location

	// used to start/stop processing
	stopProcessing uint32

	useDecimal     bool
	verifyChecksum bool
}

func NewBinlogParser() *BinlogParser {
@ -26,6 +44,14 @@ func NewBinlogParser() *BinlogParser {
	return p
}

func (p *BinlogParser) Stop() {
	atomic.StoreUint32(&p.stopProcessing, 1)
}

func (p *BinlogParser) Resume() {
	atomic.StoreUint32(&p.stopProcessing, 0)
}

func (p *BinlogParser) Reset() {
	p.format = nil
}
@ -48,64 +74,102 @@ func (p *BinlogParser) ParseFile(name string, offset int64, onEvent OnEventFunc)

	if offset < 4 {
		offset = 4
	} else if offset > 4 {
		// FORMAT_DESCRIPTION event should be read by default always (despite that fact passed offset may be higher than 4)
		if _, err = f.Seek(4, os.SEEK_SET); err != nil {
			return errors.Errorf("seek %s to %d error %v", name, offset, err)
		}

		if err = p.parseFormatDescriptionEvent(f, onEvent); err != nil {
			return errors.Annotatef(err, "parse FormatDescriptionEvent")
		}
	}

	if _, err = f.Seek(offset, os.SEEK_SET); err != nil {
		return errors.Errorf("seek %s to %d error %v", name, offset, err)
	}

	return p.parseReader(f, onEvent)
	return p.ParseReader(f, onEvent)
}

func (p *BinlogParser) parseReader(r io.Reader, onEvent OnEventFunc) error {
	p.Reset()
func (p *BinlogParser) parseFormatDescriptionEvent(r io.Reader, onEvent OnEventFunc) error {
	_, err := p.parseSingleEvent(r, onEvent)
	return err
}

// ParseSingleEvent parses single binlog event and passes the event to onEvent function.
func (p *BinlogParser) ParseSingleEvent(r io.Reader, onEvent OnEventFunc) (bool, error) {
	return p.parseSingleEvent(r, onEvent)
}

func (p *BinlogParser) parseSingleEvent(r io.Reader, onEvent OnEventFunc) (bool, error) {
	var err error
	var n int64

	var buf bytes.Buffer
	if n, err = io.CopyN(&buf, r, EventHeaderSize); err == io.EOF {
		return true, nil
	} else if err != nil {
		return false, errors.Errorf("get event header err %v, need %d but got %d", err, EventHeaderSize, n)
	}

	var h *EventHeader
	h, err = p.parseHeader(buf.Bytes())
	if err != nil {
		return false, errors.Trace(err)
	}

	if h.EventSize <= uint32(EventHeaderSize) {
		return false, errors.Errorf("invalid event header, event size is %d, too small", h.EventSize)
	}
	if n, err = io.CopyN(&buf, r, int64(h.EventSize-EventHeaderSize)); err != nil {
		return false, errors.Errorf("get event err %v, need %d but got %d", err, h.EventSize, n)
	}
	if buf.Len() != int(h.EventSize) {
		return false, errors.Errorf("invalid raw data size in event %s, need %d but got %d", h.EventType, h.EventSize, buf.Len())
	}

	rawData := buf.Bytes()
	bodyLen := int(h.EventSize) - EventHeaderSize
	body := rawData[EventHeaderSize:]
	if len(body) != bodyLen {
		return false, errors.Errorf("invalid body data size in event %s, need %d but got %d", h.EventType, bodyLen, len(body))
	}

	var e Event
	e, err = p.parseEvent(h, body, rawData)
	if err != nil {
		if err == errMissingTableMapEvent {
			return false, nil
		}
		return false, errors.Trace(err)
	}

	if err = onEvent(&BinlogEvent{RawData: rawData, Header: h, Event: e}); err != nil {
		return false, errors.Trace(err)
	}

	return false, nil
}

func (p *BinlogParser) ParseReader(r io.Reader, onEvent OnEventFunc) error {

	for {
		headBuf := make([]byte, EventHeaderSize)

		if _, err = io.ReadFull(r, headBuf); err == io.EOF {
			return nil
		} else if err != nil {
			return errors.Trace(err)
		}

		var h *EventHeader
		h, err = p.parseHeader(headBuf)
		if err != nil {
			return errors.Trace(err)
		}

		if h.EventSize <= uint32(EventHeaderSize) {
			return errors.Errorf("invalid event header, event size is %d, too small", h.EventSize)

		}

		var buf bytes.Buffer
		if n, err = io.CopyN(&buf, r, int64(h.EventSize)-int64(EventHeaderSize)); err != nil {
			return errors.Errorf("get event body err %v, need %d - %d, but got %d", err, h.EventSize, EventHeaderSize, n)
		}

		data := buf.Bytes()
		rawData := data

		eventLen := int(h.EventSize) - EventHeaderSize

		if len(data) != eventLen {
			return errors.Errorf("invalid data size %d in event %s, less event length %d", len(data), h.EventType, eventLen)
		}

		var e Event
		e, err = p.parseEvent(h, data)
		if err != nil {
			if atomic.LoadUint32(&p.stopProcessing) == 1 {
				break
			}

			if err = onEvent(&BinlogEvent{rawData, h, e}); err != nil {
|
||||
done, err := p.parseSingleEvent(r, onEvent)
|
||||
if err != nil {
|
||||
if err == errMissingTableMapEvent {
|
||||
continue
|
||||
}
|
||||
return errors.Trace(err)
|
||||
}
|
||||
|
||||
if done {
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
@ -115,6 +179,22 @@ func (p *BinlogParser) SetRawMode(mode bool) {
|
||||
p.rawMode = mode
|
||||
}
|
||||
|
||||
func (p *BinlogParser) SetParseTime(parseTime bool) {
|
||||
p.parseTime = parseTime
|
||||
}
|
||||
|
||||
func (p *BinlogParser) SetTimestampStringLocation(timestampStringLocation *time.Location) {
|
||||
p.timestampStringLocation = timestampStringLocation
|
||||
}
|
||||
|
||||
func (p *BinlogParser) SetUseDecimal(useDecimal bool) {
|
||||
p.useDecimal = useDecimal
|
||||
}
|
||||
|
||||
func (p *BinlogParser) SetVerifyChecksum(verify bool) {
|
||||
p.verifyChecksum = verify
|
||||
}
|
||||
|
||||
func (p *BinlogParser) parseHeader(data []byte) (*EventHeader, error) {
|
||||
h := new(EventHeader)
|
||||
err := h.Decode(data)
|
||||
@ -125,7 +205,7 @@ func (p *BinlogParser) parseHeader(data []byte) (*EventHeader, error) {
|
||||
return h, nil
|
||||
}
|
||||
|
||||
func (p *BinlogParser) parseEvent(h *EventHeader, data []byte) (Event, error) {
|
||||
func (p *BinlogParser) parseEvent(h *EventHeader, data []byte, rawData []byte) (Event, error) {
|
||||
var e Event
|
||||
|
||||
if h.EventType == FORMAT_DESCRIPTION_EVENT {
|
||||
@ -133,7 +213,11 @@ func (p *BinlogParser) parseEvent(h *EventHeader, data []byte) (Event, error) {
|
||||
e = p.format
|
||||
} else {
|
||||
if p.format != nil && p.format.ChecksumAlgorithm == BINLOG_CHECKSUM_ALG_CRC32 {
|
||||
data = data[0 : len(data)-4]
|
||||
err := p.verifyCrc32Checksum(rawData)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
data = data[0 : len(data)-BinlogChecksumLength]
|
||||
}
|
||||
|
||||
if h.EventType == ROTATE_EVENT {
|
||||
@ -166,12 +250,14 @@ func (p *BinlogParser) parseEvent(h *EventHeader, data []byte) (Event, error) {
|
||||
e = &RowsQueryEvent{}
|
||||
case GTID_EVENT:
|
||||
e = >IDEvent{}
|
||||
case ANONYMOUS_GTID_EVENT:
|
||||
e = >IDEvent{}
|
||||
case BEGIN_LOAD_QUERY_EVENT:
|
||||
e = &BeginLoadQueryEvent{}
|
||||
case EXECUTE_LOAD_QUERY_EVENT:
|
||||
e = &ExecuteLoadQueryEvent{}
|
||||
case MARIADB_ANNOTATE_ROWS_EVENT:
|
||||
e = &MariadbAnnotaeRowsEvent{}
|
||||
e = &MariadbAnnotateRowsEvent{}
|
||||
case MARIADB_BINLOG_CHECKPOINT_EVENT:
|
||||
e = &MariadbBinlogCheckPointEvent{}
|
||||
case MARIADB_GTID_LIST_EVENT:
|
||||
@ -206,7 +292,13 @@ func (p *BinlogParser) parseEvent(h *EventHeader, data []byte) (Event, error) {
|
||||
return e, nil
|
||||
}
|
||||
|
||||
func (p *BinlogParser) parse(data []byte) (*BinlogEvent, error) {
|
||||
// Given the bytes for a a binary log event: return the decoded event.
|
||||
// With the exception of the FORMAT_DESCRIPTION_EVENT event type
|
||||
// there must have previously been passed a FORMAT_DESCRIPTION_EVENT
|
||||
// into the parser for this to work properly on any given event.
|
||||
// Passing a new FORMAT_DESCRIPTION_EVENT into the parser will replace
|
||||
// an existing one.
|
||||
func (p *BinlogParser) Parse(data []byte) (*BinlogEvent, error) {
|
||||
rawData := data
|
||||
|
||||
h, err := p.parseHeader(data)
|
||||
@ -222,12 +314,32 @@ func (p *BinlogParser) parse(data []byte) (*BinlogEvent, error) {
|
||||
return nil, fmt.Errorf("invalid data size %d in event %s, less event length %d", len(data), h.EventType, eventLen)
|
||||
}
|
||||
|
||||
e, err := p.parseEvent(h, data)
|
||||
e, err := p.parseEvent(h, data, rawData)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return &BinlogEvent{rawData, h, e}, nil
|
||||
return &BinlogEvent{RawData: rawData, Header: h, Event: e}, nil
|
||||
}
|
||||
|
||||
func (p *BinlogParser) verifyCrc32Checksum(rawData []byte) error {
|
||||
if !p.verifyChecksum {
|
||||
return nil
|
||||
}
|
||||
|
||||
calculatedPart := rawData[0 : len(rawData)-BinlogChecksumLength]
|
||||
expectedChecksum := rawData[len(rawData)-BinlogChecksumLength:]
|
||||
|
||||
// mysql use zlib's CRC32 implementation, which uses polynomial 0xedb88320UL.
|
||||
// reference: https://github.com/madler/zlib/blob/master/crc32.c
|
||||
// https://github.com/madler/zlib/blob/master/doc/rfc1952.txt#L419
|
||||
checksum := crc32.ChecksumIEEE(calculatedPart)
|
||||
computed := make([]byte, BinlogChecksumLength)
|
||||
binary.LittleEndian.PutUint32(computed, checksum)
|
||||
if !bytes.Equal(expectedChecksum, computed) {
|
||||
return ErrChecksumMismatch
|
||||
}
|
||||
return nil
|
||||
}

func (p *BinlogParser) newRowsEvent(h *EventHeader) *RowsEvent {
@ -240,6 +352,9 @@ func (p *BinlogParser) newRowsEvent(h *EventHeader) *RowsEvent {

	e.needBitmap2 = false
	e.tables = p.tables
	e.parseTime = p.parseTime
	e.timestampStringLocation = p.timestampStringLocation
	e.useDecimal = p.useDecimal

	switch h.EventType {
	case WRITE_ROWS_EVENTv0:

2 vendor/github.com/siddontang/go-mysql/replication/parser_test.go generated vendored
@ -29,7 +29,7 @@ func (t *testSyncerSuite) TestIndexOutOfRange(c *C) {
		0x3065f: &TableMapEvent{tableIDSize: 6, TableID: 0x3065f, Flags: 0x1, Schema: []uint8{0x73, 0x65, 0x69, 0x75, 0x6d, 0x61, 0x73, 0x74, 0x65, 0x72}, Table: []uint8{0x63, 0x6f, 0x6e, 0x73, 0x5f, 0x61, 0x63, 0x74, 0x69, 0x6f, 0x6e, 0x5f, 0x73, 0x70, 0x65, 0x61, 0x6b, 0x6f, 0x75, 0x74, 0x5f, 0x6c, 0x65, 0x74, 0x74, 0x65, 0x72}, ColumnCount: 0xd, ColumnType: []uint8{0x3, 0x3, 0x3, 0x3, 0x1, 0x12, 0xf, 0xf, 0x12, 0xf, 0xf, 0x3, 0xf}, ColumnMeta: []uint16{0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x180, 0x180, 0x0, 0x180, 0x180, 0x0, 0x2fd}, NullBitmap: []uint8{0xe0, 0x17}},
	}

	_, err := parser.parse([]byte{
	_, err := parser.Parse([]byte{
		/* 0x00, */ 0xc1, 0x86, 0x8e, 0x55, 0x1e, 0xa5, 0x14, 0x80, 0xa, 0x55, 0x0, 0x0, 0x0, 0x7, 0xc,
		0xbf, 0xe, 0x0, 0x0, 0x5f, 0x6, 0x3, 0x0, 0x0, 0x0, 0x1, 0x0, 0x2, 0x0, 0xd, 0xff,
		0x0, 0x0, 0x19, 0x63, 0x7, 0x0, 0xca, 0x61, 0x5, 0x0, 0x5e, 0xf7, 0xc, 0x0, 0xf5, 0x7,

64 vendor/github.com/siddontang/go-mysql/replication/replication_test.go generated vendored
@ -1,15 +1,15 @@
package replication

import (
	"context"
	"flag"
	"fmt"
	"os"
	"path"
	"sync"
	"testing"
	"time"

	"golang.org/x/net/context"

	. "github.com/pingcap/check"
	uuid "github.com/satori/go.uuid"
	"github.com/siddontang/go-mysql/client"
@ -158,8 +158,8 @@ func (t *testSyncerSuite) testSync(c *C, s *BinlogStreamer) {
	t.testExecute(c, "DROP TABLE IF EXISTS test_json_v2")

	str = `CREATE TABLE test_json_v2 (
			id INT,
			c JSON,
			id INT,
			c JSON,
			PRIMARY KEY (id)
			) ENGINE=InnoDB`

@ -230,6 +230,28 @@ func (t *testSyncerSuite) testSync(c *C, s *BinlogStreamer) {
		}
	}

	str = `DROP TABLE IF EXISTS test_parse_time`
	t.testExecute(c, str)

	// Must allow zero time.
	t.testExecute(c, `SET sql_mode=''`)
	str = `CREATE TABLE test_parse_time (
			a1 DATETIME,
			a2 DATETIME(3),
			a3 DATETIME(6),
			b1 TIMESTAMP,
			b2 TIMESTAMP(3) ,
			b3 TIMESTAMP(6))`
	t.testExecute(c, str)

	t.testExecute(c, `INSERT INTO test_parse_time VALUES
		("2014-09-08 17:51:04.123456", "2014-09-08 17:51:04.123456", "2014-09-08 17:51:04.123456",
		"2014-09-08 17:51:04.123456","2014-09-08 17:51:04.123456","2014-09-08 17:51:04.123456"),
		("0000-00-00 00:00:00.000000", "0000-00-00 00:00:00.000000", "0000-00-00 00:00:00.000000",
		"0000-00-00 00:00:00.000000", "0000-00-00 00:00:00.000000", "0000-00-00 00:00:00.000000"),
		("2014-09-08 17:51:04.000456", "2014-09-08 17:51:04.000456", "2014-09-08 17:51:04.000456",
		"2014-09-08 17:51:04.000456","2014-09-08 17:51:04.000456","2014-09-08 17:51:04.000456")`)

	t.wg.Wait()
}

@ -263,12 +285,13 @@ func (t *testSyncerSuite) setupTest(c *C, flavor string) {
	}

	cfg := BinlogSyncerConfig{
		ServerID: 100,
		Flavor:   flavor,
		Host:     *testHost,
		Port:     port,
		User:     "root",
		Password: "",
		ServerID:   100,
		Flavor:     flavor,
		Host:       *testHost,
		Port:       port,
		User:       "root",
		Password:   "",
		UseDecimal: true,
	}

	t.b = NewBinlogSyncer(cfg)
@ -281,7 +304,7 @@ func (t *testSyncerSuite) testPositionSync(c *C) {
	binFile, _ := r.GetString(0, 0)
	binPos, _ := r.GetInt(0, 1)

	s, err := t.b.StartSync(mysql.Position{binFile, uint32(binPos)})
	s, err := t.b.StartSync(mysql.Position{Name: binFile, Pos: uint32(binPos)})
	c.Assert(err, IsNil)

	// Test re-sync.
@ -373,12 +396,15 @@ func (t *testSyncerSuite) TestMysqlBinlogCodec(c *C) {
		t.testSync(c, nil)
	}()

	os.RemoveAll("./var")
	binlogDir := "./var"

	err := t.b.StartBackup("./var", mysql.Position{"", uint32(0)}, 2*time.Second)
	os.RemoveAll(binlogDir)

	err := t.b.StartBackup(binlogDir, mysql.Position{Name: "", Pos: uint32(0)}, 2*time.Second)
	c.Assert(err, IsNil)

	p := NewBinlogParser()
	p.SetVerifyChecksum(true)

	f := func(e *BinlogEvent) error {
		if *testOutputLogs {
@ -388,9 +414,15 @@ func (t *testSyncerSuite) TestMysqlBinlogCodec(c *C) {
		return nil
	}

	err = p.ParseFile("./var/mysql.000001", 0, f)
	dir, err := os.Open(binlogDir)
	c.Assert(err, IsNil)
	defer dir.Close()

	files, err := dir.Readdirnames(-1)
	c.Assert(err, IsNil)

	err = p.ParseFile("./var/mysql.000002", 0, f)
	c.Assert(err, IsNil)
	for _, file := range files {
		err = p.ParseFile(path.Join(binlogDir, file), 0, f)
		c.Assert(err, IsNil)
	}
}

110 vendor/github.com/siddontang/go-mysql/replication/row_event.go generated vendored
@ -10,11 +10,14 @@ import (
	"time"

	"github.com/juju/errors"
	"github.com/ngaut/log"
	"github.com/shopspring/decimal"
	"github.com/siddontang/go-log/log"
	. "github.com/siddontang/go-mysql/mysql"
	"github.com/siddontang/go/hack"
)

var errMissingTableMapEvent = errors.New("invalid table id, no corresponding table map event")

type TableMapEvent struct {
	tableIDSize int

@ -68,7 +71,7 @@ func (e *TableMapEvent) Decode(data []byte) error {

	var err error
	var metaData []byte
	if metaData, _, n, err = LengthEnodedString(data[pos:]); err != nil {
	if metaData, _, n, err = LengthEncodedString(data[pos:]); err != nil {
		return errors.Trace(err)
	}

@ -78,11 +81,14 @@ func (e *TableMapEvent) Decode(data []byte) error {

	pos += n

	if len(data[pos:]) != bitmapByteSize(int(e.ColumnCount)) {
	nullBitmapSize := bitmapByteSize(int(e.ColumnCount))
	if len(data[pos:]) < nullBitmapSize {
		return io.EOF
	}

	e.NullBitmap = data[pos:]
	e.NullBitmap = data[pos : pos+nullBitmapSize]

	// TODO: handle optional field meta

	return nil
}
@ -223,6 +229,10 @@ type RowsEvent struct {

	//rows: invalid: int64, float64, bool, []byte, string
	Rows [][]interface{}

	parseTime               bool
	timestampStringLocation *time.Location
	useDecimal              bool
}

func (e *RowsEvent) Decode(data []byte) error {
@ -257,7 +267,11 @@ func (e *RowsEvent) Decode(data []byte) error {
	var ok bool
	e.Table, ok = e.tables[e.TableID]
	if !ok {
		return errors.Errorf("invalid table id %d, no correspond table map event", e.TableID)
		if len(e.tables) > 0 {
			return errors.Errorf("invalid table id %d, no corresponding table map event", e.TableID)
		} else {
			return errors.Annotatef(errMissingTableMapEvent, "table id %d", e.TableID)
		}
	}

	var err error
@ -336,6 +350,21 @@ func (e *RowsEvent) decodeRows(data []byte, table *TableMapEvent, bitmap []byte)
	return pos, nil
}

func (e *RowsEvent) parseFracTime(t interface{}) interface{} {
	v, ok := t.(fracTime)
	if !ok {
		return t
	}

	if !e.parseTime {
		// Don't parse time, return string directly
		return v.String()
	}

	// return Golang time directly
	return v.Time
}

// see mysql sql/log_event.cc log_event_print_value
func (e *RowsEvent) decodeValue(data []byte, tp byte, meta uint16) (v interface{}, n int, err error) {
	var length int = 0
@ -378,7 +407,7 @@ func (e *RowsEvent) decodeValue(data []byte, tp byte, meta uint16) (v interface{
	case MYSQL_TYPE_NEWDECIMAL:
		prec := uint8(meta >> 8)
		scale := uint8(meta & 0xFF)
		v, n, err = decodeDecimal(data, int(prec), int(scale))
		v, n, err = decodeDecimal(data, int(prec), int(scale), e.useDecimal)
	case MYSQL_TYPE_FLOAT:
		n = 4
		v = ParseBinaryFloat32(data)
@ -396,7 +425,8 @@ func (e *RowsEvent) decodeValue(data []byte, tp byte, meta uint16) (v interface{
		t := binary.LittleEndian.Uint32(data)
		v = time.Unix(int64(t), 0)
	case MYSQL_TYPE_TIMESTAMP2:
		v, n, err = decodeTimestamp2(data, meta)
		v, n, err = decodeTimestamp2(data, meta, e.timestampStringLocation)
		//v = e.parseFracTime(v)
	case MYSQL_TYPE_DATETIME:
		n = 8
		i64 := binary.LittleEndian.Uint64(data)
@ -416,6 +446,7 @@ func (e *RowsEvent) decodeValue(data []byte, tp byte, meta uint16) (v interface{
			time.UTC).Format(TimeFormat)
	case MYSQL_TYPE_DATETIME2:
		v, n, err = decodeDatetime2(data, meta)
		v = e.parseFracTime(v)
	case MYSQL_TYPE_TIME:
		n = 3
		i32 := uint32(FixedLengthInt(data[0:3]))
@ -449,7 +480,7 @@ func (e *RowsEvent) decodeValue(data []byte, tp byte, meta uint16) (v interface{
			v = int64(data[0])
			n = 1
		case 2:
			v = int64(binary.BigEndian.Uint16(data))
			v = int64(binary.LittleEndian.Uint16(data))
			n = 2
		default:
			err = fmt.Errorf("Unknown ENUM packlen=%d", l)
@ -458,7 +489,7 @@ func (e *RowsEvent) decodeValue(data []byte, tp byte, meta uint16) (v interface{
		n = int(meta & 0xFF)
		nbits := n * 8

		v, err = decodeBit(data, nbits, n)
		v, err = littleDecodeBit(data, nbits, n)
	case MYSQL_TYPE_BLOB:
		v, n, err = decodeBlob(data, meta)
	case MYSQL_TYPE_VARCHAR,
@ -468,10 +499,10 @@ func (e *RowsEvent) decodeValue(data []byte, tp byte, meta uint16) (v interface{
	case MYSQL_TYPE_STRING:
		v, n = decodeString(data, length)
	case MYSQL_TYPE_JSON:
		// Refer https://github.com/shyiko/mysql-binlog-connector-java/blob/8f9132ee773317e00313204beeae8ddcaa43c1b4/src/main/java/com/github/shyiko/mysql/binlog/event/deserialization/AbstractRowsEventDataDeserializer.java#L344
		length = int(binary.LittleEndian.Uint32(data[0:]))
		// Refer: https://github.com/shyiko/mysql-binlog-connector-java/blob/master/src/main/java/com/github/shyiko/mysql/binlog/event/deserialization/AbstractRowsEventDataDeserializer.java#L404
		length = int(FixedLengthInt(data[0:meta]))
		n = length + int(meta)
		v, err = decodeJsonBinary(data[meta:n])
		v, err = e.decodeJsonBinary(data[meta:n])
	case MYSQL_TYPE_GEOMETRY:
		// MySQL saves Geometry as Blob in binlog
		// Seem that the binary format is SRID (4 bytes) + WKB, outer can use
@ -515,7 +546,7 @@ func decodeDecimalDecompressValue(compIndx int, data []byte, mask uint8) (size i
	return
}

func decodeDecimal(data []byte, precision int, decimals int) (float64, int, error) {
func decodeDecimal(data []byte, precision int, decimals int, useDecimal bool) (interface{}, int, error) {
	//see python mysql replication and https://github.com/jeremycole/mysql_binlog
	integral := (precision - decimals)
	uncompIntegral := int(integral / digitsPerInteger)
@ -568,6 +599,11 @@ func decodeDecimal(data []byte, precision int, decimals int) (float64, int, erro
		pos += size
	}

	if useDecimal {
		f, err := decimal.NewFromString(hack.String(res.Bytes()))
		return f, pos, err
	}

	f, err := strconv.ParseFloat(hack.String(res.Bytes()), 64)
	return f, pos, err
}
@ -604,7 +640,39 @@ func decodeBit(data []byte, nbits int, length int) (value int64, err error) {
	return
}

func decodeTimestamp2(data []byte, dec uint16) (interface{}, int, error) {
func littleDecodeBit(data []byte, nbits int, length int) (value int64, err error) {
	if nbits > 1 {
		switch length {
		case 1:
			value = int64(data[0])
		case 2:
			value = int64(binary.LittleEndian.Uint16(data))
		case 3:
			value = int64(FixedLengthInt(data[0:3]))
		case 4:
			value = int64(binary.LittleEndian.Uint32(data))
		case 5:
			value = int64(FixedLengthInt(data[0:5]))
		case 6:
			value = int64(FixedLengthInt(data[0:6]))
		case 7:
			value = int64(FixedLengthInt(data[0:7]))
		case 8:
			value = int64(binary.LittleEndian.Uint64(data))
		default:
			err = fmt.Errorf("invalid bit length %d", length)
		}
	} else {
		if length != 1 {
			err = fmt.Errorf("invalid bit length %d", length)
		} else {
			value = int64(data[0])
		}
	}
	return
}
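`littleDecodeBit` above reads a BIT column as a little-endian integer of 1 to 8 bytes, replacing the big-endian `decodeBit`. The per-length cases collapse into a single loop; a hedged sketch (`leFixedInt` is my own helper, not the vendored code):

```go
package main

import "fmt"

// leFixedInt interprets 1..8 bytes as a little-endian unsigned integer,
// the same byte order littleDecodeBit assumes for each fixed length.
func leFixedInt(data []byte) (uint64, error) {
	if len(data) == 0 || len(data) > 8 {
		return 0, fmt.Errorf("invalid length %d", len(data))
	}
	var v uint64
	for i, b := range data {
		v |= uint64(b) << (8 * uint(i)) // byte i contributes bits 8i..8i+7
	}
	return v, nil
}

func main() {
	v, _ := leFixedInt([]byte{0x39, 0x30}) // 0x3039 little-endian
	fmt.Println(v)                         // 12345
}
```

The switch in the vendored code exists because `binary.LittleEndian` only offers 2-, 4-, and 8-byte readers; a loop like this trades those specialized paths for generality.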

func decodeTimestamp2(data []byte, dec uint16, timestampStringLocation *time.Location) (interface{}, int, error) {
	//get timestamp binary length
	n := int(4 + (dec+1)/2)
	sec := int64(binary.BigEndian.Uint32(data[0:4]))
@ -619,7 +687,7 @@ func decodeTimestamp2(data []byte, dec uint16) (interface{}, int, error) {
	}

	if sec == 0 {
		return "0000-00-00 00:00:00", n, nil
		return formatZeroTime(int(usec), int(dec)), n, nil
	}

	t := time.Unix(sec, usec*1000)
@ -645,7 +713,7 @@ func decodeDatetime2(data []byte, dec uint16) (interface{}, int, error) {
	}

	if intPart == 0 {
		return "0000-00-00 00:00:00", n, nil
		return formatZeroTime(int(frac), int(dec)), n, nil
	}

	tmp := intPart<<24 + frac
@ -654,7 +722,7 @@ func decodeDatetime2(data []byte, dec uint16) (interface{}, int, error) {
		tmp = -tmp
	}

	var secPart int64 = tmp % (1 << 24)
	// var secPart int64 = tmp % (1 << 24)
	ymdhms := tmp >> 24

	ymd := ymdhms >> 17
@ -669,10 +737,14 @@ func decodeDatetime2(data []byte, dec uint16) (interface{}, int, error) {
	minute := int((hms >> 6) % (1 << 6))
	hour := int((hms >> 12))

	if secPart != 0 {
		return fmt.Sprintf("%04d-%02d-%02d %02d:%02d:%02d.%06d", year, month, day, hour, minute, second, secPart), n, nil // commented by Shlomi Noach. Yes I know about `git blame`
	if frac != 0 {
		return fmt.Sprintf("%04d-%02d-%02d %02d:%02d:%02d.%06d", year, month, day, hour, minute, second, frac), n, nil // commented by Shlomi Noach. Yes I know about `git blame`
	}
	return fmt.Sprintf("%04d-%02d-%02d %02d:%02d:%02d", year, month, day, hour, minute, second), n, nil // commented by Shlomi Noach. Yes I know about `git blame`
	// return fracTime{
	// 	Time: time.Date(year, time.Month(month), day, hour, minute, second, int(frac*1000), time.UTC),
	// 	Dec:  int(dec),
	// }, n, nil
}

const TIMEF_OFS int64 = 0x800000000000
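`decodeDatetime2` above unpacks MySQL's DATETIME2 format: after the sign offset is removed, the high bits carry a packed year/month/day/hour/minute/second field (`ymdhms`), split with the shifts visible in the hunk (`ymd = ymdhms >> 17`, `hms` in the low 17 bits). A sketch of that bit-field split; the `year*13 + month` packing of `ymd >> 5` follows the MySQL storage documentation and is an assumption here, since those lines fall outside the hunk:

```go
package main

import "fmt"

// splitDatetime2 unpacks the ymdhms portion of a DATETIME2 value using
// the shifts shown in decodeDatetime2: 17 low bits of h/m/s, 5 bits of
// day, and year*13+month above that (assumed field layout).
func splitDatetime2(ymdhms int64) (year, month, day, hour, minute, second int) {
	ymd := ymdhms >> 17
	ym := ymd >> 5
	hms := ymdhms % (1 << 17)

	day = int(ymd % (1 << 5))
	month = int(ym % 13)
	year = int(ym / 13)

	second = int(hms % (1 << 6))
	minute = int((hms >> 6) % (1 << 6))
	hour = int(hms >> 12)
	return
}

func main() {
	// Pack 2014-09-08 17:51:04 with the inverse of the shifts above.
	ym := int64(2014*13 + 9)
	packed := (ym<<5|8)<<17 | int64(17)<<12 | int64(51)<<6 | 4
	fmt.Println(splitDatetime2(packed)) // 2014 9 8 17 51 4
}
```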
610 vendor/github.com/siddontang/go-mysql/replication/row_event_test.go generated vendored
@ -2,8 +2,10 @@ package replication
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strconv"
|
||||
|
||||
. "github.com/pingcap/check"
|
||||
"github.com/shopspring/decimal"
|
||||
)
|
||||
|
||||
type testDecodeSuite struct{}
|
||||
@ -17,10 +19,10 @@ type decodeDecimalChecker struct {
|
||||
func (_ *decodeDecimalChecker) Check(params []interface{}, names []string) (bool, string) {
|
||||
var test int
|
||||
val := struct {
|
||||
Value float64
|
||||
Value decimal.Decimal
|
||||
Pos int
|
||||
Err error
|
||||
EValue float64
|
||||
EValue decimal.Decimal
|
||||
EPos int
|
||||
EErr error
|
||||
}{}
|
||||
@ -28,13 +30,13 @@ func (_ *decodeDecimalChecker) Check(params []interface{}, names []string) (bool
|
||||
for i, name := range names {
|
||||
switch name {
|
||||
case "obtainedValue":
|
||||
val.Value, _ = params[i].(float64)
|
||||
val.Value, _ = params[i].(decimal.Decimal)
|
||||
case "obtainedPos":
|
||||
val.Pos, _ = params[i].(int)
|
||||
case "obtainedErr":
|
||||
val.Err, _ = params[i].(error)
|
||||
case "expectedValue":
|
||||
val.EValue, _ = params[i].(float64)
|
||||
val.EValue, _ = params[i].(decimal.Decimal)
|
||||
case "expectedPos":
|
||||
val.EPos, _ = params[i].(int)
|
||||
case "expectedErr":
|
||||
@ -50,7 +52,7 @@ func (_ *decodeDecimalChecker) Check(params []interface{}, names []string) (bool
|
||||
if val.Pos != val.EPos {
|
||||
return false, fmt.Sprintf(errorMsgFmt, "position", val.EPos, val.Pos)
|
||||
}
|
||||
if val.Value != val.EValue {
|
||||
if !val.Value.Equal(val.EValue) {
|
||||
return false, fmt.Sprintf(errorMsgFmt, "value", val.EValue, val.Value)
|
||||
}
|
||||
return true, ""
|
||||
@ -66,7 +68,7 @@ func (_ *testDecodeSuite) TestDecodeDecimal(c *C) {
|
||||
Data []byte
|
||||
Precision int
|
||||
Decimals int
|
||||
Expected float64
|
||||
Expected string
|
||||
ExpectedPos int
|
||||
ExpectedErr error
|
||||
}{
|
||||
@ -133,197 +135,202 @@ func (_ *testDecodeSuite) TestDecodeDecimal(c *C) {
|
||||
| 17 | -99.99 | -1948 | -1948.140 | -1948.14 | -1948.140 | -1948.14 | -9.99999999999999 | -1948.1400000000 | -1948.14000 | -1948.14000000000000000000 | -1948.1400000000000000000000000 | 13 | 2 |
|
||||
+----+--------+-------+-----------+-------------+-------------+----------------+-------------------+-----------------------+---------------------+---------------------------------+---------------------------------+------+-------+
|
||||
*/
|
||||
{[]byte{117, 200, 127, 255}, 4, 2, float64(-10.55), 2, nil},
|
||||
{[]byte{127, 255, 244, 127, 245}, 5, 0, float64(-11), 3, nil},
|
||||
{[]byte{127, 245, 253, 217, 127, 255}, 7, 3, float64(-10.550), 4, nil},
|
||||
{[]byte{127, 255, 255, 245, 200, 127, 255}, 10, 2, float64(-10.55), 5, nil},
|
||||
{[]byte{127, 255, 255, 245, 253, 217, 127, 255}, 10, 3, float64(-10.550), 6, nil},
|
||||
{[]byte{127, 255, 255, 255, 245, 200, 118, 196}, 13, 2, float64(-10.55), 6, nil},
|
||||
{[]byte{118, 196, 101, 54, 0, 254, 121, 96, 127, 255}, 15, 14, float64(-9.99999999999999), 8, nil},
|
||||
{[]byte{127, 255, 255, 255, 245, 223, 55, 170, 127, 255, 127, 255}, 20, 10, float64(-10.5500000000), 10, nil},
|
||||
{[]byte{127, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 245, 255, 41, 39, 127, 255}, 30, 5, float64(-10.55000), 15, nil},
|
||||
{[]byte{127, 255, 255, 255, 245, 223, 55, 170, 127, 255, 255, 255, 255, 255, 127, 255}, 30, 20, float64(-10.55000000000000000000), 14, nil},
|
||||
{[]byte{127, 255, 245, 223, 55, 170, 127, 255, 255, 255, 255, 255, 255, 255, 255, 4, 0}, 30, 25, float64(-10.5500000000000000000000000), 15, nil},
|
||||
{[]byte{128, 1, 128, 0}, 4, 2, float64(0.01), 2, nil},
|
||||
{[]byte{128, 0, 0, 128, 0}, 5, 0, float64(0), 3, nil},
|
||||
{[]byte{128, 0, 0, 12, 128, 0}, 7, 3, float64(0.012), 4, nil},
|
||||
{[]byte{128, 0, 0, 0, 1, 128, 0}, 10, 2, float64(0.01), 5, nil},
|
||||
{[]byte{128, 0, 0, 0, 0, 12, 128, 0}, 10, 3, float64(0.012), 6, nil},
|
||||
{[]byte{128, 0, 0, 0, 0, 1, 128, 0}, 13, 2, float64(0.01), 6, nil},
|
||||
{[]byte{128, 0, 188, 97, 78, 1, 96, 11, 128, 0}, 15, 14, float64(0.01234567890123), 8, nil},
|
||||
{[]byte{128, 0, 0, 0, 0, 0, 188, 97, 78, 9, 128, 0}, 20, 10, float64(0.0123456789), 10, nil},
|
||||
{[]byte{128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 211, 128, 0}, 30, 5, float64(0.01235), 15, nil},
|
||||
{[]byte{128, 0, 0, 0, 0, 0, 188, 97, 78, 53, 183, 191, 135, 89, 128, 0}, 30, 20, float64(0.01234567890123456789), 14, nil},
|
||||
{[]byte{128, 0, 0, 0, 188, 97, 78, 53, 183, 191, 135, 0, 135, 253, 217, 30, 0}, 30, 25, float64(0.0123456789012345678912345), 15, nil},
|
||||
{[]byte{227, 99, 128, 48}, 4, 2, float64(99.99), 2, nil},
|
||||
{[]byte{128, 48, 57, 167, 15}, 5, 0, float64(12345), 3, nil},
|
||||
{[]byte{167, 15, 3, 231, 128, 0}, 7, 3, float64(9999.999), 4, nil},
|
||||
{[]byte{128, 0, 48, 57, 0, 128, 0}, 10, 2, float64(12345.00), 5, nil},
|
||||
{[]byte{128, 0, 48, 57, 0, 0, 128, 0}, 10, 3, float64(12345.000), 6, nil},
|
||||
{[]byte{128, 0, 0, 48, 57, 0, 137, 59}, 13, 2, float64(12345.00), 6, nil},
|
||||
{[]byte{137, 59, 154, 201, 255, 1, 134, 159, 128, 0}, 15, 14, float64(9.99999999999999), 8, nil},
|
||||
{[]byte{128, 0, 0, 48, 57, 0, 0, 0, 0, 0, 128, 0}, 20, 10, float64(12345.0000000000), 10, nil},
|
||||
{[]byte{128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 48, 57, 0, 0, 0, 128, 0}, 30, 5, float64(12345.00000), 15, nil},
|
||||
{[]byte{128, 0, 0, 48, 57, 0, 0, 0, 0, 0, 0, 0, 0, 0, 128, 48}, 30, 20, float64(12345.00000000000000000000), 14, nil},
|
||||
{[]byte{128, 48, 57, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0}, 30, 25, float64(12345.0000000000000000000000000), 15, nil},
|
||||
{[]byte{227, 99, 128, 48}, 4, 2, float64(99.99), 2, nil},
|
||||
{[]byte{128, 48, 57, 167, 15}, 5, 0, float64(12345), 3, nil},
|
||||
{[]byte{167, 15, 3, 231, 128, 0}, 7, 3, float64(9999.999), 4, nil},
|
||||
{[]byte{128, 0, 48, 57, 0, 128, 0}, 10, 2, float64(12345.00), 5, nil},
|
||||
{[]byte{128, 0, 48, 57, 0, 0, 128, 0}, 10, 3, float64(12345.000), 6, nil},
|
||||
{[]byte{128, 0, 0, 48, 57, 0, 137, 59}, 13, 2, float64(12345.00), 6, nil},
|
||||
{[]byte{137, 59, 154, 201, 255, 1, 134, 159, 128, 0}, 15, 14, float64(9.99999999999999), 8, nil},
|
||||
{[]byte{128, 0, 0, 48, 57, 0, 0, 0, 0, 0, 128, 0}, 20, 10, float64(12345.0000000000), 10, nil},
|
||||
{[]byte{128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 48, 57, 0, 0, 0, 128, 0}, 30, 5, float64(12345.00000), 15, nil},
|
||||
{[]byte{128, 0, 0, 48, 57, 0, 0, 0, 0, 0, 0, 0, 0, 0, 128, 48}, 30, 20, float64(12345.00000000000000000000), 14, nil},
|
||||
{[]byte{128, 48, 57, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10, 0}, 30, 25, float64(12345.0000000000000000000000000), 15, nil},
|
||||
{[]byte{227, 99, 128, 0}, 4, 2, float64(99.99), 2, nil},
|
||||
{[]byte{128, 0, 123, 128, 123}, 5, 0, float64(123), 3, nil},
|
||||
{[]byte{128, 123, 1, 194, 128, 0}, 7, 3, float64(123.450), 4, nil},
|
||||
{[]byte{128, 0, 0, 123, 45, 128, 0}, 10, 2, float64(123.45), 5, nil},
|
||||
{[]byte{128, 0, 0, 123, 1, 194, 128, 0}, 10, 3, float64(123.450), 6, nil},
|
||||
{[]byte{128, 0, 0, 0, 123, 45, 137, 59}, 13, 2, float64(123.45), 6, nil},
|
||||
{[]byte{137, 59, 154, 201, 255, 1, 134, 159, 128, 0}, 15, 14, float64(9.99999999999999), 8, nil},
|
||||
{[]byte{128, 0, 0, 0, 123, 26, 210, 116, 128, 0, 128, 0}, 20, 10, float64(123.4500000000), 10, nil},
|
||||
{[]byte{128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 123, 0, 175, 200, 128, 0}, 30, 5, float64(123.45000), 15, nil},
|
||||
{[]byte{128, 0, 0, 0, 123, 26, 210, 116, 128, 0, 0, 0, 0, 0, 128, 0}, 30, 20, float64(123.45000000000000000000), 14, nil},
|
||||
{[]byte{128, 0, 123, 26, 210, 116, 128, 0, 0, 0, 0, 0, 0, 0, 0, 10, 0}, 30, 25, float64(123.4500000000000000000000000), 15, nil},
|
||||
{[]byte{28, 156, 127, 255}, 4, 2, float64(-99.99), 2, nil},
|
||||
{[]byte{127, 255, 132, 127, 132}, 5, 0, float64(-123), 3, nil},
|
||||
{[]byte{127, 132, 254, 61, 127, 255}, 7, 3, float64(-123.450), 4, nil},
|
||||
{[]byte{127, 255, 255, 132, 210, 127, 255}, 10, 2, float64(-123.45), 5, nil},
|
||||
{[]byte{127, 255, 255, 132, 254, 61, 127, 255}, 10, 3, float64(-123.450), 6, nil},
|
||||
{[]byte{127, 255, 255, 255, 132, 210, 118, 196}, 13, 2, float64(-123.45), 6, nil},
|
||||
{[]byte{118, 196, 101, 54, 0, 254, 121, 96, 127, 255}, 15, 14, float64(-9.99999999999999), 8, nil},
|
||||
{[]byte{127, 255, 255, 255, 132, 229, 45, 139, 127, 255, 127, 255}, 20, 10, float64(-123.4500000000), 10, nil},
|
||||
{[]byte{127, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 132, 255, 80, 55, 127, 255}, 30, 5, float64(-123.45000), 15, nil},
|
||||
{[]byte{127, 255, 255, 255, 132, 229, 45, 139, 127, 255, 255, 255, 255, 255, 127, 255}, 30, 20, float64(-123.45000000000000000000), 14, nil},
|
||||
{[]byte{127, 255, 132, 229, 45, 139, 127, 255, 255, 255, 255, 255, 255, 255, 255, 20, 0}, 30, 25, float64(-123.4500000000000000000000000), 15, nil},
|
||||
{[]byte{128, 0, 128, 0}, 4, 2, float64(0.00), 2, nil},
|
||||
{[]byte{128, 0, 0, 128, 0}, 5, 0, float64(0), 3, nil},
|
||||
{[]byte{128, 0, 0, 0, 128, 0}, 7, 3, float64(0.000), 4, nil},
|
||||
{[]byte{128, 0, 0, 0, 0, 128, 0}, 10, 2, float64(0.00), 5, nil},
|
||||
{[]byte{128, 0, 0, 0, 0, 0, 128, 0}, 10, 3, float64(0.000), 6, nil},
|
||||
{[]byte{128, 0, 0, 0, 0, 0, 128, 0}, 13, 2, float64(0.00), 6, nil},
|
||||
{[]byte{128, 0, 1, 226, 58, 0, 0, 99, 128, 0}, 15, 14, float64(0.00012345000099), 8, nil},
|
||||
{[]byte{128, 0, 0, 0, 0, 0, 1, 226, 58, 0, 128, 0}, 20, 10, float64(0.0001234500), 10, nil},
{[]byte{128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 12, 128, 0}, 30, 5, float64(0.00012), 15, nil},
{[]byte{128, 0, 0, 0, 0, 0, 1, 226, 58, 0, 15, 18, 2, 0, 128, 0}, 30, 20, float64(0.00012345000098765000), 14, nil},
{[]byte{128, 0, 0, 0, 1, 226, 58, 0, 15, 18, 2, 0, 0, 0, 0, 15, 0}, 30, 25, float64(0.0001234500009876500000000), 15, nil},
{[]byte{128, 0, 128, 0}, 4, 2, float64(0.00), 2, nil},
{[]byte{128, 0, 0, 128, 0}, 5, 0, float64(0), 3, nil},
{[]byte{128, 0, 0, 0, 128, 0}, 7, 3, float64(0.000), 4, nil},
{[]byte{128, 0, 0, 0, 0, 128, 0}, 10, 2, float64(0.00), 5, nil},
{[]byte{128, 0, 0, 0, 0, 0, 128, 0}, 10, 3, float64(0.000), 6, nil},
{[]byte{128, 0, 0, 0, 0, 0, 128, 0}, 13, 2, float64(0.00), 6, nil},
{[]byte{128, 0, 1, 226, 58, 0, 0, 99, 128, 0}, 15, 14, float64(0.00012345000099), 8, nil},
{[]byte{128, 0, 0, 0, 0, 0, 1, 226, 58, 0, 128, 0}, 20, 10, float64(0.0001234500), 10, nil},
{[]byte{128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 12, 128, 0}, 30, 5, float64(0.00012), 15, nil},
{[]byte{128, 0, 0, 0, 0, 0, 1, 226, 58, 0, 15, 18, 2, 0, 128, 0}, 30, 20, float64(0.00012345000098765000), 14, nil},
{[]byte{128, 0, 0, 0, 1, 226, 58, 0, 15, 18, 2, 0, 0, 0, 0, 22, 0}, 30, 25, float64(0.0001234500009876500000000), 15, nil},
{[]byte{128, 12, 128, 0}, 4, 2, float64(0.12), 2, nil},
{[]byte{128, 0, 0, 128, 0}, 5, 0, float64(0), 3, nil},
{[]byte{128, 0, 0, 123, 128, 0}, 7, 3, float64(0.123), 4, nil},
{[]byte{128, 0, 0, 0, 12, 128, 0}, 10, 2, float64(0.12), 5, nil},
{[]byte{128, 0, 0, 0, 0, 123, 128, 0}, 10, 3, float64(0.123), 6, nil},
{[]byte{128, 0, 0, 0, 0, 12, 128, 7}, 13, 2, float64(0.12), 6, nil},
{[]byte{128, 7, 91, 178, 144, 1, 129, 205, 128, 0}, 15, 14, float64(0.12345000098765), 8, nil},
{[]byte{128, 0, 0, 0, 0, 7, 91, 178, 145, 0, 128, 0}, 20, 10, float64(0.1234500010), 10, nil},
{[]byte{128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 48, 57, 128, 0}, 30, 5, float64(0.12345), 15, nil},
{[]byte{128, 0, 0, 0, 0, 7, 91, 178, 144, 58, 222, 87, 208, 0, 128, 0}, 30, 20, float64(0.12345000098765000000), 14, nil},
{[]byte{128, 0, 0, 7, 91, 178, 144, 58, 222, 87, 208, 0, 0, 0, 0, 30, 0}, 30, 25, float64(0.1234500009876500000000000), 15, nil},
{[]byte{128, 0, 128, 0}, 4, 2, float64(0.00), 2, nil},
{[]byte{128, 0, 0, 128, 0}, 5, 0, float64(0), 3, nil},
{[]byte{128, 0, 0, 0, 128, 0}, 7, 3, float64(0.000), 4, nil},
{[]byte{128, 0, 0, 0, 0, 128, 0}, 10, 2, float64(0.00), 5, nil},
{[]byte{128, 0, 0, 0, 0, 0, 128, 0}, 10, 3, float64(0.000), 6, nil},
{[]byte{128, 0, 0, 0, 0, 0, 127, 255}, 13, 2, float64(0.00), 6, nil},
{[]byte{127, 255, 255, 255, 243, 255, 121, 59, 127, 255}, 15, 14, float64(-0.00000001234500), 8, nil},
{[]byte{127, 255, 255, 255, 255, 255, 255, 255, 243, 252, 128, 0}, 20, 10, float64(-0.0000000123), 10, nil},
{[]byte{128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 127, 255}, 30, 5, float64(0.00000), 15, nil},
{[]byte{127, 255, 255, 255, 255, 255, 255, 255, 243, 235, 111, 183, 93, 178, 127, 255}, 30, 20, float64(-0.00000001234500009877), 14, nil},
{[]byte{127, 255, 255, 255, 255, 255, 243, 235, 111, 183, 93, 255, 139, 69, 47, 30, 0}, 30, 25, float64(-0.0000000123450000987650000), 15, nil},
{[]byte{227, 99, 129, 134}, 4, 2, float64(99.99), 2, nil},
{[]byte{129, 134, 159, 167, 15}, 5, 0, float64(99999), 3, nil},
{[]byte{167, 15, 3, 231, 133, 245}, 7, 3, float64(9999.999), 4, nil},
{[]byte{133, 245, 224, 255, 99, 128, 152}, 10, 2, float64(99999999.99), 5, nil},
{[]byte{128, 152, 150, 127, 3, 231, 227, 59}, 10, 3, float64(9999999.999), 6, nil},
{[]byte{227, 59, 154, 201, 255, 99, 137, 59}, 13, 2, float64(99999999999.99), 6, nil},
{[]byte{137, 59, 154, 201, 255, 1, 134, 159, 137, 59}, 15, 14, float64(9.99999999999999), 8, nil},
{[]byte{137, 59, 154, 201, 255, 59, 154, 201, 255, 9, 128, 0}, 20, 10, float64(9999999999.9999999999), 10, nil},
{[]byte{128, 0, 0, 0, 0, 0, 4, 210, 29, 205, 139, 148, 0, 195, 80, 137, 59}, 30, 5, float64(1234500009876.50000), 15, nil},
{[]byte{137, 59, 154, 201, 255, 59, 154, 201, 255, 59, 154, 201, 255, 99, 129, 134}, 30, 20, float64(9999999999.99999999999999999999), 14, nil},
{[]byte{129, 134, 159, 59, 154, 201, 255, 59, 154, 201, 255, 0, 152, 150, 127, 30, 0}, 30, 25, float64(99999.9999999999999999999999999), 15, nil},
{[]byte{227, 99, 129, 134}, 4, 2, float64(99.99), 2, nil},
{[]byte{129, 134, 159, 167, 15}, 5, 0, float64(99999), 3, nil},
{[]byte{167, 15, 3, 231, 133, 245}, 7, 3, float64(9999.999), 4, nil},
{[]byte{133, 245, 224, 255, 99, 128, 152}, 10, 2, float64(99999999.99), 5, nil},
{[]byte{128, 152, 150, 127, 3, 231, 128, 6}, 10, 3, float64(9999999.999), 6, nil},
{[]byte{128, 6, 159, 107, 199, 11, 137, 59}, 13, 2, float64(111111111.11), 6, nil},
{[]byte{137, 59, 154, 201, 255, 1, 134, 159, 128, 6}, 15, 14, float64(9.99999999999999), 8, nil},
{[]byte{128, 6, 159, 107, 199, 6, 142, 119, 128, 0, 128, 0}, 20, 10, float64(111111111.1100000000), 10, nil},
{[]byte{128, 0, 0, 0, 0, 0, 0, 0, 6, 159, 107, 199, 0, 42, 248, 128, 6}, 30, 5, float64(111111111.11000), 15, nil},
{[]byte{128, 6, 159, 107, 199, 6, 142, 119, 128, 0, 0, 0, 0, 0, 129, 134}, 30, 20, float64(111111111.11000000000000000000), 14, nil},
{[]byte{129, 134, 159, 59, 154, 201, 255, 59, 154, 201, 255, 0, 152, 150, 127, 10, 0}, 30, 25, float64(99999.9999999999999999999999999), 15, nil},
{[]byte{128, 1, 128, 0}, 4, 2, float64(0.01), 2, nil},
{[]byte{128, 0, 0, 128, 0}, 5, 0, float64(0), 3, nil},
{[]byte{128, 0, 0, 10, 128, 0}, 7, 3, float64(0.010), 4, nil},
{[]byte{128, 0, 0, 0, 1, 128, 0}, 10, 2, float64(0.01), 5, nil},
{[]byte{128, 0, 0, 0, 0, 10, 128, 0}, 10, 3, float64(0.010), 6, nil},
{[]byte{128, 0, 0, 0, 0, 1, 128, 0}, 13, 2, float64(0.01), 6, nil},
{[]byte{128, 0, 152, 150, 128, 0, 0, 0, 128, 0}, 15, 14, float64(0.01000000000000), 8, nil},
{[]byte{128, 0, 0, 0, 0, 0, 152, 150, 128, 0, 128, 0}, 20, 10, float64(0.0100000000), 10, nil},
{[]byte{128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 232, 128, 0}, 30, 5, float64(0.01000), 15, nil},
{[]byte{128, 0, 0, 0, 0, 0, 152, 150, 128, 0, 0, 0, 0, 0, 128, 0}, 30, 20, float64(0.01000000000000000000), 14, nil},
{[]byte{128, 0, 0, 0, 152, 150, 128, 0, 0, 0, 0, 0, 0, 0, 0, 7, 0}, 30, 25, float64(0.0100000000000000000000000), 15, nil},
{[]byte{227, 99, 128, 0}, 4, 2, float64(99.99), 2, nil},
{[]byte{128, 0, 123, 128, 123}, 5, 0, float64(123), 3, nil},
{[]byte{128, 123, 1, 144, 128, 0}, 7, 3, float64(123.400), 4, nil},
{[]byte{128, 0, 0, 123, 40, 128, 0}, 10, 2, float64(123.40), 5, nil},
{[]byte{128, 0, 0, 123, 1, 144, 128, 0}, 10, 3, float64(123.400), 6, nil},
{[]byte{128, 0, 0, 0, 123, 40, 137, 59}, 13, 2, float64(123.40), 6, nil},
{[]byte{137, 59, 154, 201, 255, 1, 134, 159, 128, 0}, 15, 14, float64(9.99999999999999), 8, nil},
{[]byte{128, 0, 0, 0, 123, 23, 215, 132, 0, 0, 128, 0}, 20, 10, float64(123.4000000000), 10, nil},
{[]byte{128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 123, 0, 156, 64, 128, 0}, 30, 5, float64(123.40000), 15, nil},
{[]byte{128, 0, 0, 0, 123, 23, 215, 132, 0, 0, 0, 0, 0, 0, 128, 0}, 30, 20, float64(123.40000000000000000000), 14, nil},
{[]byte{128, 0, 123, 23, 215, 132, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10, 0}, 30, 25, float64(123.4000000000000000000000000), 15, nil},
{[]byte{28, 156, 127, 253}, 4, 2, float64(-99.99), 2, nil},
{[]byte{127, 253, 204, 125, 205}, 5, 0, float64(-563), 3, nil},
{[]byte{125, 205, 253, 187, 127, 255}, 7, 3, float64(-562.580), 4, nil},
{[]byte{127, 255, 253, 205, 197, 127, 255}, 10, 2, float64(-562.58), 5, nil},
{[]byte{127, 255, 253, 205, 253, 187, 127, 255}, 10, 3, float64(-562.580), 6, nil},
{[]byte{127, 255, 255, 253, 205, 197, 118, 196}, 13, 2, float64(-562.58), 6, nil},
{[]byte{118, 196, 101, 54, 0, 254, 121, 96, 127, 255}, 15, 14, float64(-9.99999999999999), 8, nil},
{[]byte{127, 255, 255, 253, 205, 221, 109, 230, 255, 255, 127, 255}, 20, 10, float64(-562.5800000000), 10, nil},
{[]byte{127, 255, 255, 255, 255, 255, 255, 255, 255, 255, 253, 205, 255, 29, 111, 127, 255}, 30, 5, float64(-562.58000), 15, nil},
{[]byte{127, 255, 255, 253, 205, 221, 109, 230, 255, 255, 255, 255, 255, 255, 127, 253}, 30, 20, float64(-562.58000000000000000000), 14, nil},
{[]byte{127, 253, 205, 221, 109, 230, 255, 255, 255, 255, 255, 255, 255, 255, 255, 13, 0}, 30, 25, float64(-562.5800000000000000000000000), 15, nil},
{[]byte{28, 156, 127, 241}, 4, 2, float64(-99.99), 2, nil},
{[]byte{127, 241, 140, 113, 140}, 5, 0, float64(-3699), 3, nil},
{[]byte{113, 140, 255, 245, 127, 255}, 7, 3, float64(-3699.010), 4, nil},
{[]byte{127, 255, 241, 140, 254, 127, 255}, 10, 2, float64(-3699.01), 5, nil},
{[]byte{127, 255, 241, 140, 255, 245, 127, 255}, 10, 3, float64(-3699.010), 6, nil},
{[]byte{127, 255, 255, 241, 140, 254, 118, 196}, 13, 2, float64(-3699.01), 6, nil},
{[]byte{118, 196, 101, 54, 0, 254, 121, 96, 127, 255}, 15, 14, float64(-9.99999999999999), 8, nil},
{[]byte{127, 255, 255, 241, 140, 255, 103, 105, 127, 255, 127, 255}, 20, 10, float64(-3699.0100000000), 10, nil},
{[]byte{127, 255, 255, 255, 255, 255, 255, 255, 255, 255, 241, 140, 255, 252, 23, 127, 255}, 30, 5, float64(-3699.01000), 15, nil},
{[]byte{127, 255, 255, 241, 140, 255, 103, 105, 127, 255, 255, 255, 255, 255, 127, 241}, 30, 20, float64(-3699.01000000000000000000), 14, nil},
{[]byte{127, 241, 140, 255, 103, 105, 127, 255, 255, 255, 255, 255, 255, 255, 255, 13, 0}, 30, 25, float64(-3699.0100000000000000000000000), 15, nil},
{[]byte{28, 156, 127, 248}, 4, 2, float64(-99.99), 2, nil},
{[]byte{127, 248, 99, 120, 99}, 5, 0, float64(-1948), 3, nil},
{[]byte{120, 99, 255, 115, 127, 255}, 7, 3, float64(-1948.140), 4, nil},
{[]byte{127, 255, 248, 99, 241, 127, 255}, 10, 2, float64(-1948.14), 5, nil},
{[]byte{127, 255, 248, 99, 255, 115, 127, 255}, 10, 3, float64(-1948.140), 6, nil},
{[]byte{127, 255, 255, 248, 99, 241, 118, 196}, 13, 2, float64(-1948.14), 6, nil},
{[]byte{118, 196, 101, 54, 0, 254, 121, 96, 127, 255}, 15, 14, float64(-9.99999999999999), 8, nil},
{[]byte{127, 255, 255, 248, 99, 247, 167, 196, 255, 255, 127, 255}, 20, 10, float64(-1948.1400000000), 10, nil},
{[]byte{127, 255, 255, 255, 255, 255, 255, 255, 255, 255, 248, 99, 255, 201, 79, 127, 255}, 30, 5, float64(-1948.14000), 15, nil},
{[]byte{127, 255, 255, 248, 99, 247, 167, 196, 255, 255, 255, 255, 255, 255, 127, 248}, 30, 20, float64(-1948.14000000000000000000), 14, nil},
{[]byte{127, 248, 99, 247, 167, 196, 255, 255, 255, 255, 255, 255, 255, 255, 255, 13, 0}, 30, 25, float64(-1948.1400000000000000000000000), 15, nil},
{[]byte{117, 200, 127, 255}, 4, 2, "-10.55", 2, nil},
{[]byte{127, 255, 244, 127, 245}, 5, 0, "-11", 3, nil},
{[]byte{127, 245, 253, 217, 127, 255}, 7, 3, "-10.550", 4, nil},
{[]byte{127, 255, 255, 245, 200, 127, 255}, 10, 2, "-10.55", 5, nil},
{[]byte{127, 255, 255, 245, 253, 217, 127, 255}, 10, 3, "-10.550", 6, nil},
{[]byte{127, 255, 255, 255, 245, 200, 118, 196}, 13, 2, "-10.55", 6, nil},
{[]byte{118, 196, 101, 54, 0, 254, 121, 96, 127, 255}, 15, 14, "-9.99999999999999", 8, nil},
{[]byte{127, 255, 255, 255, 245, 223, 55, 170, 127, 255, 127, 255}, 20, 10, "-10.5500000000", 10, nil},
{[]byte{127, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 245, 255, 41, 39, 127, 255}, 30, 5, "-10.55000", 15, nil},
{[]byte{127, 255, 255, 255, 245, 223, 55, 170, 127, 255, 255, 255, 255, 255, 127, 255}, 30, 20, "-10.55000000000000000000", 14, nil},
{[]byte{127, 255, 245, 223, 55, 170, 127, 255, 255, 255, 255, 255, 255, 255, 255, 4, 0}, 30, 25, "-10.5500000000000000000000000", 15, nil},
{[]byte{128, 1, 128, 0}, 4, 2, "0.01", 2, nil},
{[]byte{128, 0, 0, 128, 0}, 5, 0, "0", 3, nil},
{[]byte{128, 0, 0, 12, 128, 0}, 7, 3, "0.012", 4, nil},
{[]byte{128, 0, 0, 0, 1, 128, 0}, 10, 2, "0.01", 5, nil},
{[]byte{128, 0, 0, 0, 0, 12, 128, 0}, 10, 3, "0.012", 6, nil},
{[]byte{128, 0, 0, 0, 0, 1, 128, 0}, 13, 2, "0.01", 6, nil},
{[]byte{128, 0, 188, 97, 78, 1, 96, 11, 128, 0}, 15, 14, "0.01234567890123", 8, nil},
{[]byte{128, 0, 0, 0, 0, 0, 188, 97, 78, 9, 128, 0}, 20, 10, "0.0123456789", 10, nil},
{[]byte{128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 211, 128, 0}, 30, 5, "0.01235", 15, nil},
{[]byte{128, 0, 0, 0, 0, 0, 188, 97, 78, 53, 183, 191, 135, 89, 128, 0}, 30, 20, "0.01234567890123456789", 14, nil},
{[]byte{128, 0, 0, 0, 188, 97, 78, 53, 183, 191, 135, 0, 135, 253, 217, 30, 0}, 30, 25, "0.0123456789012345678912345", 15, nil},
{[]byte{227, 99, 128, 48}, 4, 2, "99.99", 2, nil},
{[]byte{128, 48, 57, 167, 15}, 5, 0, "12345", 3, nil},
{[]byte{167, 15, 3, 231, 128, 0}, 7, 3, "9999.999", 4, nil},
{[]byte{128, 0, 48, 57, 0, 128, 0}, 10, 2, "12345.00", 5, nil},
{[]byte{128, 0, 48, 57, 0, 0, 128, 0}, 10, 3, "12345.000", 6, nil},
{[]byte{128, 0, 0, 48, 57, 0, 137, 59}, 13, 2, "12345.00", 6, nil},
{[]byte{137, 59, 154, 201, 255, 1, 134, 159, 128, 0}, 15, 14, "9.99999999999999", 8, nil},
{[]byte{128, 0, 0, 48, 57, 0, 0, 0, 0, 0, 128, 0}, 20, 10, "12345.0000000000", 10, nil},
{[]byte{128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 48, 57, 0, 0, 0, 128, 0}, 30, 5, "12345.00000", 15, nil},
{[]byte{128, 0, 0, 48, 57, 0, 0, 0, 0, 0, 0, 0, 0, 0, 128, 48}, 30, 20, "12345.00000000000000000000", 14, nil},
{[]byte{128, 48, 57, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0}, 30, 25, "12345.0000000000000000000000000", 15, nil},
{[]byte{227, 99, 128, 48}, 4, 2, "99.99", 2, nil},
{[]byte{128, 48, 57, 167, 15}, 5, 0, "12345", 3, nil},
{[]byte{167, 15, 3, 231, 128, 0}, 7, 3, "9999.999", 4, nil},
{[]byte{128, 0, 48, 57, 0, 128, 0}, 10, 2, "12345.00", 5, nil},
{[]byte{128, 0, 48, 57, 0, 0, 128, 0}, 10, 3, "12345.000", 6, nil},
{[]byte{128, 0, 0, 48, 57, 0, 137, 59}, 13, 2, "12345.00", 6, nil},
{[]byte{137, 59, 154, 201, 255, 1, 134, 159, 128, 0}, 15, 14, "9.99999999999999", 8, nil},
{[]byte{128, 0, 0, 48, 57, 0, 0, 0, 0, 0, 128, 0}, 20, 10, "12345.0000000000", 10, nil},
{[]byte{128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 48, 57, 0, 0, 0, 128, 0}, 30, 5, "12345.00000", 15, nil},
{[]byte{128, 0, 0, 48, 57, 0, 0, 0, 0, 0, 0, 0, 0, 0, 128, 48}, 30, 20, "12345.00000000000000000000", 14, nil},
{[]byte{128, 48, 57, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10, 0}, 30, 25, "12345.0000000000000000000000000", 15, nil},
{[]byte{227, 99, 128, 0}, 4, 2, "99.99", 2, nil},
{[]byte{128, 0, 123, 128, 123}, 5, 0, "123", 3, nil},
{[]byte{128, 123, 1, 194, 128, 0}, 7, 3, "123.450", 4, nil},
{[]byte{128, 0, 0, 123, 45, 128, 0}, 10, 2, "123.45", 5, nil},
{[]byte{128, 0, 0, 123, 1, 194, 128, 0}, 10, 3, "123.450", 6, nil},
{[]byte{128, 0, 0, 0, 123, 45, 137, 59}, 13, 2, "123.45", 6, nil},
{[]byte{137, 59, 154, 201, 255, 1, 134, 159, 128, 0}, 15, 14, "9.99999999999999", 8, nil},
{[]byte{128, 0, 0, 0, 123, 26, 210, 116, 128, 0, 128, 0}, 20, 10, "123.4500000000", 10, nil},
{[]byte{128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 123, 0, 175, 200, 128, 0}, 30, 5, "123.45000", 15, nil},
{[]byte{128, 0, 0, 0, 123, 26, 210, 116, 128, 0, 0, 0, 0, 0, 128, 0}, 30, 20, "123.45000000000000000000", 14, nil},
{[]byte{128, 0, 123, 26, 210, 116, 128, 0, 0, 0, 0, 0, 0, 0, 0, 10, 0}, 30, 25, "123.4500000000000000000000000", 15, nil},
{[]byte{28, 156, 127, 255}, 4, 2, "-99.99", 2, nil},
{[]byte{127, 255, 132, 127, 132}, 5, 0, "-123", 3, nil},
{[]byte{127, 132, 254, 61, 127, 255}, 7, 3, "-123.450", 4, nil},
{[]byte{127, 255, 255, 132, 210, 127, 255}, 10, 2, "-123.45", 5, nil},
{[]byte{127, 255, 255, 132, 254, 61, 127, 255}, 10, 3, "-123.450", 6, nil},
{[]byte{127, 255, 255, 255, 132, 210, 118, 196}, 13, 2, "-123.45", 6, nil},
{[]byte{118, 196, 101, 54, 0, 254, 121, 96, 127, 255}, 15, 14, "-9.99999999999999", 8, nil},
{[]byte{127, 255, 255, 255, 132, 229, 45, 139, 127, 255, 127, 255}, 20, 10, "-123.4500000000", 10, nil},
{[]byte{127, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 132, 255, 80, 55, 127, 255}, 30, 5, "-123.45000", 15, nil},
{[]byte{127, 255, 255, 255, 132, 229, 45, 139, 127, 255, 255, 255, 255, 255, 127, 255}, 30, 20, "-123.45000000000000000000", 14, nil},
{[]byte{127, 255, 132, 229, 45, 139, 127, 255, 255, 255, 255, 255, 255, 255, 255, 20, 0}, 30, 25, "-123.4500000000000000000000000", 15, nil},
{[]byte{128, 0, 128, 0}, 4, 2, "0.00", 2, nil},
{[]byte{128, 0, 0, 128, 0}, 5, 0, "0", 3, nil},
{[]byte{128, 0, 0, 0, 128, 0}, 7, 3, "0.000", 4, nil},
{[]byte{128, 0, 0, 0, 0, 128, 0}, 10, 2, "0.00", 5, nil},
{[]byte{128, 0, 0, 0, 0, 0, 128, 0}, 10, 3, "0.000", 6, nil},
{[]byte{128, 0, 0, 0, 0, 0, 128, 0}, 13, 2, "0.00", 6, nil},
{[]byte{128, 0, 1, 226, 58, 0, 0, 99, 128, 0}, 15, 14, "0.00012345000099", 8, nil},
{[]byte{128, 0, 0, 0, 0, 0, 1, 226, 58, 0, 128, 0}, 20, 10, "0.0001234500", 10, nil},
{[]byte{128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 12, 128, 0}, 30, 5, "0.00012", 15, nil},
{[]byte{128, 0, 0, 0, 0, 0, 1, 226, 58, 0, 15, 18, 2, 0, 128, 0}, 30, 20, "0.00012345000098765000", 14, nil},
{[]byte{128, 0, 0, 0, 1, 226, 58, 0, 15, 18, 2, 0, 0, 0, 0, 15, 0}, 30, 25, "0.0001234500009876500000000", 15, nil},
{[]byte{128, 0, 128, 0}, 4, 2, "0.00", 2, nil},
{[]byte{128, 0, 0, 128, 0}, 5, 0, "0", 3, nil},
{[]byte{128, 0, 0, 0, 128, 0}, 7, 3, "0.000", 4, nil},
{[]byte{128, 0, 0, 0, 0, 128, 0}, 10, 2, "0.00", 5, nil},
{[]byte{128, 0, 0, 0, 0, 0, 128, 0}, 10, 3, "0.000", 6, nil},
{[]byte{128, 0, 0, 0, 0, 0, 128, 0}, 13, 2, "0.00", 6, nil},
{[]byte{128, 0, 1, 226, 58, 0, 0, 99, 128, 0}, 15, 14, "0.00012345000099", 8, nil},
{[]byte{128, 0, 0, 0, 0, 0, 1, 226, 58, 0, 128, 0}, 20, 10, "0.0001234500", 10, nil},
{[]byte{128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 12, 128, 0}, 30, 5, "0.00012", 15, nil},
{[]byte{128, 0, 0, 0, 0, 0, 1, 226, 58, 0, 15, 18, 2, 0, 128, 0}, 30, 20, "0.00012345000098765000", 14, nil},
{[]byte{128, 0, 0, 0, 1, 226, 58, 0, 15, 18, 2, 0, 0, 0, 0, 22, 0}, 30, 25, "0.0001234500009876500000000", 15, nil},
{[]byte{128, 12, 128, 0}, 4, 2, "0.12", 2, nil},
{[]byte{128, 0, 0, 128, 0}, 5, 0, "0", 3, nil},
{[]byte{128, 0, 0, 123, 128, 0}, 7, 3, "0.123", 4, nil},
{[]byte{128, 0, 0, 0, 12, 128, 0}, 10, 2, "0.12", 5, nil},
{[]byte{128, 0, 0, 0, 0, 123, 128, 0}, 10, 3, "0.123", 6, nil},
{[]byte{128, 0, 0, 0, 0, 12, 128, 7}, 13, 2, "0.12", 6, nil},
{[]byte{128, 7, 91, 178, 144, 1, 129, 205, 128, 0}, 15, 14, "0.12345000098765", 8, nil},
{[]byte{128, 0, 0, 0, 0, 7, 91, 178, 145, 0, 128, 0}, 20, 10, "0.1234500010", 10, nil},
{[]byte{128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 48, 57, 128, 0}, 30, 5, "0.12345", 15, nil},
{[]byte{128, 0, 0, 0, 0, 7, 91, 178, 144, 58, 222, 87, 208, 0, 128, 0}, 30, 20, "0.12345000098765000000", 14, nil},
{[]byte{128, 0, 0, 7, 91, 178, 144, 58, 222, 87, 208, 0, 0, 0, 0, 30, 0}, 30, 25, "0.1234500009876500000000000", 15, nil},
{[]byte{128, 0, 128, 0}, 4, 2, "0.00", 2, nil},
{[]byte{128, 0, 0, 128, 0}, 5, 0, "0", 3, nil},
{[]byte{128, 0, 0, 0, 128, 0}, 7, 3, "0.000", 4, nil},
{[]byte{128, 0, 0, 0, 0, 128, 0}, 10, 2, "0.00", 5, nil},
{[]byte{128, 0, 0, 0, 0, 0, 128, 0}, 10, 3, "0.000", 6, nil},
{[]byte{128, 0, 0, 0, 0, 0, 127, 255}, 13, 2, "0.00", 6, nil},
{[]byte{127, 255, 255, 255, 243, 255, 121, 59, 127, 255}, 15, 14, "-0.00000001234500", 8, nil},
{[]byte{127, 255, 255, 255, 255, 255, 255, 255, 243, 252, 128, 0}, 20, 10, "-0.0000000123", 10, nil},
{[]byte{128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 127, 255}, 30, 5, "0.00000", 15, nil},
{[]byte{127, 255, 255, 255, 255, 255, 255, 255, 243, 235, 111, 183, 93, 178, 127, 255}, 30, 20, "-0.00000001234500009877", 14, nil},
{[]byte{127, 255, 255, 255, 255, 255, 243, 235, 111, 183, 93, 255, 139, 69, 47, 30, 0}, 30, 25, "-0.0000000123450000987650000", 15, nil},
{[]byte{227, 99, 129, 134}, 4, 2, "99.99", 2, nil},
{[]byte{129, 134, 159, 167, 15}, 5, 0, "99999", 3, nil},
{[]byte{167, 15, 3, 231, 133, 245}, 7, 3, "9999.999", 4, nil},
{[]byte{133, 245, 224, 255, 99, 128, 152}, 10, 2, "99999999.99", 5, nil},
{[]byte{128, 152, 150, 127, 3, 231, 227, 59}, 10, 3, "9999999.999", 6, nil},
{[]byte{227, 59, 154, 201, 255, 99, 137, 59}, 13, 2, "99999999999.99", 6, nil},
{[]byte{137, 59, 154, 201, 255, 1, 134, 159, 137, 59}, 15, 14, "9.99999999999999", 8, nil},
{[]byte{137, 59, 154, 201, 255, 59, 154, 201, 255, 9, 128, 0}, 20, 10, "9999999999.9999999999", 10, nil},
{[]byte{128, 0, 0, 0, 0, 0, 4, 210, 29, 205, 139, 148, 0, 195, 80, 137, 59}, 30, 5, "1234500009876.50000", 15, nil},
{[]byte{137, 59, 154, 201, 255, 59, 154, 201, 255, 59, 154, 201, 255, 99, 129, 134}, 30, 20, "9999999999.99999999999999999999", 14, nil},
{[]byte{129, 134, 159, 59, 154, 201, 255, 59, 154, 201, 255, 0, 152, 150, 127, 30, 0}, 30, 25, "99999.9999999999999999999999999", 15, nil},
{[]byte{227, 99, 129, 134}, 4, 2, "99.99", 2, nil},
{[]byte{129, 134, 159, 167, 15}, 5, 0, "99999", 3, nil},
{[]byte{167, 15, 3, 231, 133, 245}, 7, 3, "9999.999", 4, nil},
{[]byte{133, 245, 224, 255, 99, 128, 152}, 10, 2, "99999999.99", 5, nil},
{[]byte{128, 152, 150, 127, 3, 231, 128, 6}, 10, 3, "9999999.999", 6, nil},
{[]byte{128, 6, 159, 107, 199, 11, 137, 59}, 13, 2, "111111111.11", 6, nil},
{[]byte{137, 59, 154, 201, 255, 1, 134, 159, 128, 6}, 15, 14, "9.99999999999999", 8, nil},
{[]byte{128, 6, 159, 107, 199, 6, 142, 119, 128, 0, 128, 0}, 20, 10, "111111111.1100000000", 10, nil},
{[]byte{128, 0, 0, 0, 0, 0, 0, 0, 6, 159, 107, 199, 0, 42, 248, 128, 6}, 30, 5, "111111111.11000", 15, nil},
{[]byte{128, 6, 159, 107, 199, 6, 142, 119, 128, 0, 0, 0, 0, 0, 129, 134}, 30, 20, "111111111.11000000000000000000", 14, nil},
{[]byte{129, 134, 159, 59, 154, 201, 255, 59, 154, 201, 255, 0, 152, 150, 127, 10, 0}, 30, 25, "99999.9999999999999999999999999", 15, nil},
{[]byte{128, 1, 128, 0}, 4, 2, "0.01", 2, nil},
{[]byte{128, 0, 0, 128, 0}, 5, 0, "0", 3, nil},
{[]byte{128, 0, 0, 10, 128, 0}, 7, 3, "0.010", 4, nil},
{[]byte{128, 0, 0, 0, 1, 128, 0}, 10, 2, "0.01", 5, nil},
{[]byte{128, 0, 0, 0, 0, 10, 128, 0}, 10, 3, "0.010", 6, nil},
{[]byte{128, 0, 0, 0, 0, 1, 128, 0}, 13, 2, "0.01", 6, nil},
{[]byte{128, 0, 152, 150, 128, 0, 0, 0, 128, 0}, 15, 14, "0.01000000000000", 8, nil},
{[]byte{128, 0, 0, 0, 0, 0, 152, 150, 128, 0, 128, 0}, 20, 10, "0.0100000000", 10, nil},
{[]byte{128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3, 232, 128, 0}, 30, 5, "0.01000", 15, nil},
{[]byte{128, 0, 0, 0, 0, 0, 152, 150, 128, 0, 0, 0, 0, 0, 128, 0}, 30, 20, "0.01000000000000000000", 14, nil},
{[]byte{128, 0, 0, 0, 152, 150, 128, 0, 0, 0, 0, 0, 0, 0, 0, 7, 0}, 30, 25, "0.0100000000000000000000000", 15, nil},
{[]byte{227, 99, 128, 0}, 4, 2, "99.99", 2, nil},
{[]byte{128, 0, 123, 128, 123}, 5, 0, "123", 3, nil},
{[]byte{128, 123, 1, 144, 128, 0}, 7, 3, "123.400", 4, nil},
{[]byte{128, 0, 0, 123, 40, 128, 0}, 10, 2, "123.40", 5, nil},
{[]byte{128, 0, 0, 123, 1, 144, 128, 0}, 10, 3, "123.400", 6, nil},
{[]byte{128, 0, 0, 0, 123, 40, 137, 59}, 13, 2, "123.40", 6, nil},
{[]byte{137, 59, 154, 201, 255, 1, 134, 159, 128, 0}, 15, 14, "9.99999999999999", 8, nil},
{[]byte{128, 0, 0, 0, 123, 23, 215, 132, 0, 0, 128, 0}, 20, 10, "123.4000000000", 10, nil},
{[]byte{128, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 123, 0, 156, 64, 128, 0}, 30, 5, "123.40000", 15, nil},
{[]byte{128, 0, 0, 0, 123, 23, 215, 132, 0, 0, 0, 0, 0, 0, 128, 0}, 30, 20, "123.40000000000000000000", 14, nil},
{[]byte{128, 0, 123, 23, 215, 132, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10, 0}, 30, 25, "123.4000000000000000000000000", 15, nil},
{[]byte{28, 156, 127, 253}, 4, 2, "-99.99", 2, nil},
{[]byte{127, 253, 204, 125, 205}, 5, 0, "-563", 3, nil},
{[]byte{125, 205, 253, 187, 127, 255}, 7, 3, "-562.580", 4, nil},
{[]byte{127, 255, 253, 205, 197, 127, 255}, 10, 2, "-562.58", 5, nil},
{[]byte{127, 255, 253, 205, 253, 187, 127, 255}, 10, 3, "-562.580", 6, nil},
{[]byte{127, 255, 255, 253, 205, 197, 118, 196}, 13, 2, "-562.58", 6, nil},
{[]byte{118, 196, 101, 54, 0, 254, 121, 96, 127, 255}, 15, 14, "-9.99999999999999", 8, nil},
{[]byte{127, 255, 255, 253, 205, 221, 109, 230, 255, 255, 127, 255}, 20, 10, "-562.5800000000", 10, nil},
{[]byte{127, 255, 255, 255, 255, 255, 255, 255, 255, 255, 253, 205, 255, 29, 111, 127, 255}, 30, 5, "-562.58000", 15, nil},
{[]byte{127, 255, 255, 253, 205, 221, 109, 230, 255, 255, 255, 255, 255, 255, 127, 253}, 30, 20, "-562.58000000000000000000", 14, nil},
{[]byte{127, 253, 205, 221, 109, 230, 255, 255, 255, 255, 255, 255, 255, 255, 255, 13, 0}, 30, 25, "-562.5800000000000000000000000", 15, nil},
{[]byte{28, 156, 127, 241}, 4, 2, "-99.99", 2, nil},
{[]byte{127, 241, 140, 113, 140}, 5, 0, "-3699", 3, nil},
{[]byte{113, 140, 255, 245, 127, 255}, 7, 3, "-3699.010", 4, nil},
{[]byte{127, 255, 241, 140, 254, 127, 255}, 10, 2, "-3699.01", 5, nil},
{[]byte{127, 255, 241, 140, 255, 245, 127, 255}, 10, 3, "-3699.010", 6, nil},
{[]byte{127, 255, 255, 241, 140, 254, 118, 196}, 13, 2, "-3699.01", 6, nil},
{[]byte{118, 196, 101, 54, 0, 254, 121, 96, 127, 255}, 15, 14, "-9.99999999999999", 8, nil},
{[]byte{127, 255, 255, 241, 140, 255, 103, 105, 127, 255, 127, 255}, 20, 10, "-3699.0100000000", 10, nil},
{[]byte{127, 255, 255, 255, 255, 255, 255, 255, 255, 255, 241, 140, 255, 252, 23, 127, 255}, 30, 5, "-3699.01000", 15, nil},
{[]byte{127, 255, 255, 241, 140, 255, 103, 105, 127, 255, 255, 255, 255, 255, 127, 241}, 30, 20, "-3699.01000000000000000000", 14, nil},
{[]byte{127, 241, 140, 255, 103, 105, 127, 255, 255, 255, 255, 255, 255, 255, 255, 13, 0}, 30, 25, "-3699.0100000000000000000000000", 15, nil},
{[]byte{28, 156, 127, 248}, 4, 2, "-99.99", 2, nil},
{[]byte{127, 248, 99, 120, 99}, 5, 0, "-1948", 3, nil},
{[]byte{120, 99, 255, 115, 127, 255}, 7, 3, "-1948.140", 4, nil},
{[]byte{127, 255, 248, 99, 241, 127, 255}, 10, 2, "-1948.14", 5, nil},
{[]byte{127, 255, 248, 99, 255, 115, 127, 255}, 10, 3, "-1948.140", 6, nil},
{[]byte{127, 255, 255, 248, 99, 241, 118, 196}, 13, 2, "-1948.14", 6, nil},
{[]byte{118, 196, 101, 54, 0, 254, 121, 96, 127, 255}, 15, 14, "-9.99999999999999", 8, nil},
{[]byte{127, 255, 255, 248, 99, 247, 167, 196, 255, 255, 127, 255}, 20, 10, "-1948.1400000000", 10, nil},
{[]byte{127, 255, 255, 255, 255, 255, 255, 255, 255, 255, 248, 99, 255, 201, 79, 127, 255}, 30, 5, "-1948.14000", 15, nil},
{[]byte{127, 255, 255, 248, 99, 247, 167, 196, 255, 255, 255, 255, 255, 255, 127, 248}, 30, 20, "-1948.14000000000000000000", 14, nil},
{[]byte{127, 248, 99, 247, 167, 196, 255, 255, 255, 255, 255, 255, 255, 255, 255, 13, 0}, 30, 25, "-1948.1400000000000000000000000", 15, nil},
}
for i, tc := range testcases {
value, pos, err := decodeDecimal(tc.Data, tc.Precision, tc.Decimals)
c.Assert(value, DecodeDecimalsEquals, pos, err, tc.Expected, tc.ExpectedPos, tc.ExpectedErr, i)
value, pos, err := decodeDecimal(tc.Data, tc.Precision, tc.Decimals, false)
expectedFloat, _ := strconv.ParseFloat(tc.Expected, 64)
c.Assert(value.(float64), DecodeDecimalsEquals, pos, err, expectedFloat, tc.ExpectedPos, tc.ExpectedErr, i)
value, pos, err = decodeDecimal(tc.Data, tc.Precision, tc.Decimals, true)
expectedDecimal, _ := decimal.NewFromString(tc.Expected)
c.Assert(value.(decimal.Decimal), DecodeDecimalsEquals, pos, err, expectedDecimal, tc.ExpectedPos, tc.ExpectedErr, i)
}
}
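The byte arrays in the test table above follow MySQL's packed binary DECIMAL layout: digits are stored in base-10 groups big-endian, the most significant bit of the first byte is an (inverted) sign bit, and negative values have all bytes bit-flipped so raw byte comparison preserves ordering. A minimal sketch decoding the two-byte `DECIMAL(4,2)` rows from the table; the helper name `decodeDec4x2` is illustrative and is not part of the library, which handles arbitrary precision and scale:

```go
package main

import "fmt"

// decodeDec4x2 decodes MySQL's binary form of a DECIMAL(4,2) value:
// one byte of integer digits, one byte of fraction digits. The MSB of
// the first byte is the sign bit (set for non-negative), and negative
// values are stored with every byte bit-inverted.
func decodeDec4x2(data []byte) float64 {
	positive := data[0]&0x80 != 0
	b0, b1 := data[0], data[1]
	if !positive {
		b0, b1 = ^b0, ^b1 // undo the bit inversion used for negatives
	}
	intPart := int(b0 &^ 0x80) // clear the sign bit
	fracPart := int(b1)        // two fractional digits, so divide by 100
	v := float64(intPart) + float64(fracPart)/100
	if !positive {
		v = -v
	}
	return v
}

func main() {
	fmt.Println(decodeDec4x2([]byte{227, 99})) // → 99.99
	fmt.Println(decodeDec4x2([]byte{28, 156})) // → -99.99
	fmt.Println(decodeDec4x2([]byte{128, 12})) // → 0.12
}
```

The byte pairs here are the leading bytes of the precision-4, scale-2 rows in the table (e.g. `{227, 99}` for `99.99` and `{28, 156}` for `-99.99`).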
@ -386,6 +393,25 @@ func (_ *testDecodeSuite) TestParseRowPanic(c *C) {
c.Assert(rows.Rows[0][0], Equals, int32(16270))
}
type simpleDecimalEqualsChecker struct {
*CheckerInfo
}
var SimpleDecimalEqualsChecker Checker = &simpleDecimalEqualsChecker{
&CheckerInfo{Name: "Equals", Params: []string{"obtained", "expected"}},
}
func (checker *simpleDecimalEqualsChecker) Check(params []interface{}, names []string) (result bool, error string) {
defer func() {
if v := recover(); v != nil {
result = false
error = fmt.Sprint(v)
}
}()
return params[0].(decimal.Decimal).Equal(params[1].(decimal.Decimal)), ""
}
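The checker's `Check` method relies on a deferred `recover` so that a failed type assertion (e.g. comparing a `decimal.Decimal` against a plain float) reports a test failure instead of aborting the run. The same pattern can be sketched in isolation; `safeEqual` is an illustrative stand-in, not library code:

```go
package main

import "fmt"

// safeEqual mirrors the checker's Check method: it compares two values
// and converts any panic (such as a failed type assertion) into a
// (false, message) result via the named return values.
func safeEqual(a, b interface{}) (result bool, errMsg string) {
	defer func() {
		if v := recover(); v != nil {
			result = false
			errMsg = fmt.Sprint(v)
		}
	}()
	// A mismatched type here panics; the deferred recover catches it.
	return a.(int) == b.(int), ""
}

func main() {
	ok, _ := safeEqual(1, 1)
	fmt.Println(ok) // → true
	ok, msg := safeEqual(1, "x")
	fmt.Println(ok, msg != "") // → false true
}
```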
func (_ *testDecodeSuite) TestParseJson(c *C) {
// Table format:
// mysql> desc t10;
@ -405,7 +431,6 @@ func (_ *testDecodeSuite) TestParseJson(c *C) {
// INSERT INTO `t10` (`c1`, `c2`) VALUES ('{"key1": "value1", "key2": "value2"}', 1);
// test json deserialization
// INSERT INTO `t10`(`c1`,`c2`) VALUES ('{"text":"Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim. Donec pede justo, fringilla vel, aliquet nec, vulputate eget, arcu. In enim justo, rhoncus ut, imperdiet a, venenatis vitae, justo. Nullam dictum felis eu pede mollis pretium. Integer tincidunt. Cras dapibus. Vivamus elementum semper nisi. Aenean vulputate eleifend tellus. Aenean leo ligula, porttitor eu, consequat vitae, eleifend ac, enim. Aliquam lorem ante, dapibus in, viverra quis, feugiat a, tellus. Phasellus viverra nulla ut metus varius laoreet. Quisque rutrum. Aenean imperdiet. Etiam ultricies nisi vel augue. Curabitur ullamcorper ultricies nisi. Nam eget dui. Etiam rhoncus. Maecenas tempus, tellus eget condimentum rhoncus, sem quam semper libero, sit amet adipiscing sem neque sed ipsum. Nam quam nunc, blandit vel, luctus pulvinar, hendrerit id, lorem. Maecenas nec odio et ante tincidunt tempus. Donec vitae sapien ut libero venenatis faucibus. Nullam quis ante. Etiam sit amet orci eget eros faucibus tincidunt. Duis leo. Sed fringilla mauris sit amet nibh. Donec sodales sagittis magna. Sed consequat, leo eget bibendum sodales, augue velit cursus nunc, quis gravida magna mi a libero. Fusce vulputate eleifend sapien. Vestibulum purus quam, scelerisque ut, mollis sed, nonummy id, metus. Nullam accumsan lorem in dui. Cras ultricies mi eu turpis hendrerit fringilla. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; In ac dui quis mi consectetuer lacinia. Nam pretium turpis et arcu. Duis arcu tortor, suscipit eget, imperdiet nec, imperdiet iaculis, ipsum. Sed aliquam ultrices mauris. Integer ante arcu, accumsan a, consectetuer eget, posuere ut, mauris. Praesent adipiscing. 
Phasellus ullamcorper ipsum rutrum nunc. Nunc nonummy metus. Vestibulum volutpat pretium libero. Cras id dui. Aenean ut eros et nisl sagittis vestibulum. Nullam nulla eros, ultricies sit amet, nonummy id, imperdiet feugiat, pede. Sed lectus. Donec mollis hendrerit risus. Phasellus nec sem in justo pellentesque facilisis. Etiam imperdiet imperdiet orci. Nunc nec neque. Phasellus leo dolor, tempus non, auctor et, hendrerit quis, nisi. Curabitur ligula sapien, tincidunt non, euismod vitae, posuere imperdiet, leo. Maecenas malesuada. Praesent congue erat at massa. Sed cursus turpis vitae tortor. Donec posuere vulputate arcu. Phasellus accumsan cursus velit. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Sed aliquam, nisi quis porttitor congue, elit erat euismod orci, ac"}',101);
	tableMapEventData := []byte("m\x00\x00\x00\x00\x00\x01\x00\x04test\x00\x03t10\x00\x02\xf5\xf6\x03\x04\n\x00\x03")

	tableMapEvent := new(TableMapEvent)
@ -442,3 +467,196 @@ func (_ *testDecodeSuite) TestParseJson(c *C) {
		c.Assert(rows.Rows[0][1], Equals, float64(101))
	}
}

func (_ *testDecodeSuite) TestParseJsonDecimal(c *C) {
	// Table format:
	// mysql> desc t10;
	// +-------+---------------+------+-----+---------+-------+
	// | Field | Type          | Null | Key | Default | Extra |
	// +-------+---------------+------+-----+---------+-------+
	// | c1    | json          | YES  |     | NULL    |       |
	// | c2    | decimal(10,0) | YES  |     | NULL    |       |
	// +-------+---------------+------+-----+---------+-------+

	// CREATE TABLE `t10` (
	//   `c1` json DEFAULT NULL,
	//   `c2` decimal(10,0)
	// ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

	// INSERT INTO `t10` (`c2`) VALUES (1);
	// INSERT INTO `t10` (`c1`, `c2`) VALUES ('{"key1": "value1", "key2": "value2"}', 1);
	// test json deserialization
// INSERT INTO `t10`(`c1`,`c2`) VALUES ('{"text":"Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim. Donec pede justo, fringilla vel, aliquet nec, vulputate eget, arcu. In enim justo, rhoncus ut, imperdiet a, venenatis vitae, justo. Nullam dictum felis eu pede mollis pretium. Integer tincidunt. Cras dapibus. Vivamus elementum semper nisi. Aenean vulputate eleifend tellus. Aenean leo ligula, porttitor eu, consequat vitae, eleifend ac, enim. Aliquam lorem ante, dapibus in, viverra quis, feugiat a, tellus. Phasellus viverra nulla ut metus varius laoreet. Quisque rutrum. Aenean imperdiet. Etiam ultricies nisi vel augue. Curabitur ullamcorper ultricies nisi. Nam eget dui. Etiam rhoncus. Maecenas tempus, tellus eget condimentum rhoncus, sem quam semper libero, sit amet adipiscing sem neque sed ipsum. Nam quam nunc, blandit vel, luctus pulvinar, hendrerit id, lorem. Maecenas nec odio et ante tincidunt tempus. Donec vitae sapien ut libero venenatis faucibus. Nullam quis ante. Etiam sit amet orci eget eros faucibus tincidunt. Duis leo. Sed fringilla mauris sit amet nibh. Donec sodales sagittis magna. Sed consequat, leo eget bibendum sodales, augue velit cursus nunc, quis gravida magna mi a libero. Fusce vulputate eleifend sapien. Vestibulum purus quam, scelerisque ut, mollis sed, nonummy id, metus. Nullam accumsan lorem in dui. Cras ultricies mi eu turpis hendrerit fringilla. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; In ac dui quis mi consectetuer lacinia. Nam pretium turpis et arcu. Duis arcu tortor, suscipit eget, imperdiet nec, imperdiet iaculis, ipsum. Sed aliquam ultrices mauris. Integer ante arcu, accumsan a, consectetuer eget, posuere ut, mauris. Praesent adipiscing. 
Phasellus ullamcorper ipsum rutrum nunc. Nunc nonummy metus. Vestibulum volutpat pretium libero. Cras id dui. Aenean ut eros et nisl sagittis vestibulum. Nullam nulla eros, ultricies sit amet, nonummy id, imperdiet feugiat, pede. Sed lectus. Donec mollis hendrerit risus. Phasellus nec sem in justo pellentesque facilisis. Etiam imperdiet imperdiet orci. Nunc nec neque. Phasellus leo dolor, tempus non, auctor et, hendrerit quis, nisi. Curabitur ligula sapien, tincidunt non, euismod vitae, posuere imperdiet, leo. Maecenas malesuada. Praesent congue erat at massa. Sed cursus turpis vitae tortor. Donec posuere vulputate arcu. Phasellus accumsan cursus velit. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Sed aliquam, nisi quis porttitor congue, elit erat euismod orci, ac"}',101);
	tableMapEventData := []byte("m\x00\x00\x00\x00\x00\x01\x00\x04test\x00\x03t10\x00\x02\xf5\xf6\x03\x04\n\x00\x03")

	tableMapEvent := new(TableMapEvent)
	tableMapEvent.tableIDSize = 6
	err := tableMapEvent.Decode(tableMapEventData)
	c.Assert(err, IsNil)

	rows := RowsEvent{useDecimal: true}
	rows.tableIDSize = 6
	rows.tables = make(map[uint64]*TableMapEvent)
	rows.tables[tableMapEvent.TableID] = tableMapEvent
	rows.Version = 2

	tbls := [][]byte{
		[]byte("m\x00\x00\x00\x00\x00\x01\x00\x02\x00\x02\xff\xfd\x80\x00\x00\x00\x01"),
		[]byte("m\x00\x00\x00\x00\x00\x01\x00\x02\x00\x02\xff\xfc)\x00\x00\x00\x00\x02\x00(\x00\x12\x00\x04\x00\x16\x00\x04\x00\f\x1a\x00\f!\x00key1key2\x06value1\x06value2\x80\x00\x00\x00\x01"),
	}

	for _, tbl := range tbls {
		rows.Rows = nil
		err = rows.Decode(tbl)
		c.Assert(err, IsNil)
		c.Assert(rows.Rows[0][1], SimpleDecimalEqualsChecker, decimal.NewFromFloat(1))
	}

	longTbls := [][]byte{
[]byte("m\x00\x00\x00\x00\x00\x01\x00\x02\x00\x02\xff\xfc\xd0\n\x00\x00\x00\x01\x00\xcf\n\v\x00\x04\x00\f\x0f\x00text\xbe\x15Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Aenean commodo ligula eget dolor. Aenean massa. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Donec quam felis, ultricies nec, pellentesque eu, pretium quis, sem. Nulla consequat massa quis enim. Donec pede justo, fringilla vel, aliquet nec, vulputate eget, arcu. In enim justo, rhoncus ut, imperdiet a, venenatis vitae, justo. Nullam dictum felis eu pede mollis pretium. Integer tincidunt. Cras dapibus. Vivamus elementum semper nisi. Aenean vulputate eleifend tellus. Aenean leo ligula, porttitor eu, consequat vitae, eleifend ac, enim. Aliquam lorem ante, dapibus in, viverra quis, feugiat a, tellus. Phasellus viverra nulla ut metus varius laoreet. Quisque rutrum. Aenean imperdiet. Etiam ultricies nisi vel augue. Curabitur ullamcorper ultricies nisi. Nam eget dui. Etiam rhoncus. Maecenas tempus, tellus eget condimentum rhoncus, sem quam semper libero, sit amet adipiscing sem neque sed ipsum. Nam quam nunc, blandit vel, luctus pulvinar, hendrerit id, lorem. Maecenas nec odio et ante tincidunt tempus. Donec vitae sapien ut libero venenatis faucibus. Nullam quis ante. Etiam sit amet orci eget eros faucibus tincidunt. Duis leo. Sed fringilla mauris sit amet nibh. Donec sodales sagittis magna. Sed consequat, leo eget bibendum sodales, augue velit cursus nunc, quis gravida magna mi a libero. Fusce vulputate eleifend sapien. Vestibulum purus quam, scelerisque ut, mollis sed, nonummy id, metus. Nullam accumsan lorem in dui. Cras ultricies mi eu turpis hendrerit fringilla. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; In ac dui quis mi consectetuer lacinia. Nam pretium turpis et arcu. Duis arcu tortor, suscipit eget, imperdiet nec, imperdiet iaculis, ipsum. Sed aliquam ultrices mauris. 
Integer ante arcu, accumsan a, consectetuer eget, posuere ut, mauris. Praesent adipiscing. Phasellus ullamcorper ipsum rutrum nunc. Nunc nonummy metus. Vestibulum volutpat pretium libero. Cras id dui. Aenean ut eros et nisl sagittis vestibulum. Nullam nulla eros, ultricies sit amet, nonummy id, imperdiet feugiat, pede. Sed lectus. Donec mollis hendrerit risus. Phasellus nec sem in justo pellentesque facilisis. Etiam imperdiet imperdiet orci. Nunc nec neque. Phasellus leo dolor, tempus non, auctor et, hendrerit quis, nisi. Curabitur ligula sapien, tincidunt non, euismod vitae, posuere imperdiet, leo. Maecenas malesuada. Praesent congue erat at massa. Sed cursus turpis vitae tortor. Donec posuere vulputate arcu. Phasellus accumsan cursus velit. Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Sed aliquam, nisi quis porttitor congue, elit erat euismod orci, ac\x80\x00\x00\x00e"),
	}

	for _, ltbl := range longTbls {
		rows.Rows = nil
		err = rows.Decode(ltbl)
		c.Assert(err, IsNil)
		c.Assert(rows.Rows[0][1], SimpleDecimalEqualsChecker, decimal.NewFromFloat(101))
	}
}

func (_ *testDecodeSuite) TestEnum(c *C) {
	// mysql> desc aenum;
	// +-------+-------------------------------------------+------+-----+---------+-------+
	// | Field | Type                                      | Null | Key | Default | Extra |
	// +-------+-------------------------------------------+------+-----+---------+-------+
	// | id    | int(11)                                   | YES  |     | NULL    |       |
	// | aset  | enum('0','1','2','3','4','5','6','7','8') | YES  |     | NULL    |       |
	// +-------+-------------------------------------------+------+-----+---------+-------+
	// 2 rows in set (0.00 sec)
	//
	// insert into aenum(id, aset) values(1, '0');
	tableMapEventData := []byte("\x42\x0f\x00\x00\x00\x00\x01\x00\x05\x74\x74\x65\x73\x74\x00\x05")
	tableMapEventData = append(tableMapEventData, []byte("\x61\x65\x6e\x75\x6d\x00\x02\x03\xfe\x02\xf7\x01\x03")...)
	tableMapEvent := new(TableMapEvent)
	tableMapEvent.tableIDSize = 6
	err := tableMapEvent.Decode(tableMapEventData)
	c.Assert(err, IsNil)

	rows := new(RowsEvent)
	rows.tableIDSize = 6
	rows.tables = make(map[uint64]*TableMapEvent)
	rows.tables[tableMapEvent.TableID] = tableMapEvent
	rows.Version = 2

	data := []byte("\x42\x0f\x00\x00\x00\x00\x01\x00\x02\x00\x02\xff\xfc\x01\x00\x00\x00\x01")

	rows.Rows = nil
	err = rows.Decode(data)
	c.Assert(err, IsNil)
	c.Assert(rows.Rows[0][1], Equals, int64(1))
}

func (_ *testDecodeSuite) TestMultiBytesEnum(c *C) {
	// CREATE TABLE numbers (
	// id int auto_increment,
// num ENUM( '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41', '42', '43', '44', '45', '46', '47', '48', '49', '50', '51', '52', '53', '54', '55', '56', '57', '58', '59', '60', '61', '62', '63', '64', '65', '66', '67', '68', '69', '70', '71', '72', '73', '74', '75', '76', '77', '78', '79', '80', '81', '82', '83', '84', '85', '86', '87', '88', '89', '90', '91', '92', '93', '94', '95', '96', '97', '98', '99', '100', '101', '102', '103', '104', '105', '106', '107', '108', '109', '110', '111', '112', '113', '114', '115', '116', '117', '118', '119', '120', '121', '122', '123', '124', '125', '126', '127', '128', '129', '130', '131', '132', '133', '134', '135', '136', '137', '138', '139', '140', '141', '142', '143', '144', '145', '146', '147', '148', '149', '150', '151', '152', '153', '154', '155', '156', '157', '158', '159', '160', '161', '162', '163', '164', '165', '166', '167', '168', '169', '170', '171', '172', '173', '174', '175', '176', '177', '178', '179', '180', '181', '182', '183', '184', '185', '186', '187', '188', '189', '190', '191', '192', '193', '194', '195', '196', '197', '198', '199', '200', '201', '202', '203', '204', '205', '206', '207', '208', '209', '210', '211', '212', '213', '214', '215', '216', '217', '218', '219', '220', '221', '222', '223', '224', '225', '226', '227', '228', '229', '230', '231', '232', '233', '234', '235', '236', '237', '238', '239', '240', '241', '242', '243', '244', '245', '246', '247', '248', '249', '250', '251', '252', '253', '254', '255','256','257'
	// ),
	// primary key(id)
	// );

	//
	// insert into numbers(num) values ('0'), ('256');
	tableMapEventData := []byte("\x84\x0f\x00\x00\x00\x00\x01\x00\x05\x74\x74\x65\x73\x74\x00\x07")
	tableMapEventData = append(tableMapEventData, []byte("\x6e\x75\x6d\x62\x65\x72\x73\x00\x02\x03\xfe\x02\xf7\x02\x02")...)
	tableMapEvent := new(TableMapEvent)
	tableMapEvent.tableIDSize = 6
	err := tableMapEvent.Decode(tableMapEventData)
	c.Assert(err, IsNil)

	rows := new(RowsEvent)
	rows.tableIDSize = 6
	rows.tables = make(map[uint64]*TableMapEvent)
	rows.tables[tableMapEvent.TableID] = tableMapEvent
	rows.Version = 2

	data := []byte("\x84\x0f\x00\x00\x00\x00\x01\x00\x02\x00\x02\xff\xfc\x01\x00\x00\x00\x01\x00\xfc\x02\x00\x00\x00\x01\x01")

	rows.Rows = nil
	err = rows.Decode(data)
	c.Assert(err, IsNil)
	c.Assert(rows.Rows[0][1], Equals, int64(1))
	c.Assert(rows.Rows[1][1], Equals, int64(257))
}

func (_ *testDecodeSuite) TestSet(c *C) {
	// mysql> desc aset;
	// +--------+---------------------------------------------------------------------------------------+------+-----+---------+-------+
	// | Field  | Type                                                                                    | Null | Key | Default | Extra |
	// +--------+---------------------------------------------------------------------------------------+------+-----+---------+-------+
	// | id     | int(11)                                                                                 | YES  |     | NULL    |       |
	// | region | set('1','2','3','4','5','6','7','8','9','10','11','12','13','14','15','16','17','18')  | YES  |     | NULL    |       |
	// +--------+---------------------------------------------------------------------------------------+------+-----+---------+-------+
	// 2 rows in set (0.00 sec)
	//
	// insert into aset(id, region) values(1, '1,3');

	tableMapEventData := []byte("\xe7\x0e\x00\x00\x00\x00\x01\x00\x05\x74\x74\x65\x73\x74\x00\x04")
	tableMapEventData = append(tableMapEventData, []byte("\x61\x73\x65\x74\x00\x02\x03\xfe\x02\xf8\x03\x03")...)
	tableMapEvent := new(TableMapEvent)
	tableMapEvent.tableIDSize = 6
	err := tableMapEvent.Decode(tableMapEventData)
	c.Assert(err, IsNil)

	rows := new(RowsEvent)
	rows.tableIDSize = 6
	rows.tables = make(map[uint64]*TableMapEvent)
	rows.tables[tableMapEvent.TableID] = tableMapEvent
	rows.Version = 2

	data := []byte("\xe7\x0e\x00\x00\x00\x00\x01\x00\x02\x00\x02\xff\xfc\x01\x00\x00\x00\x05\x00\x00")

	rows.Rows = nil
	err = rows.Decode(data)
	c.Assert(err, IsNil)
	c.Assert(rows.Rows[0][1], Equals, int64(5))
}

func (_ *testDecodeSuite) TestJsonNull(c *C) {
	// Table:
	// desc hj_order_preview
	// +------------------+------------+------+-----+-------------------+----------------+
	// | Field            | Type       | Null | Key | Default           | Extra          |
	// +------------------+------------+------+-----+-------------------+----------------+
	// | id               | int(13)    | NO   | PRI | <null>            | auto_increment |
	// | buyer_id         | bigint(13) | NO   |     | <null>            |                |
	// | order_sn         | bigint(13) | NO   |     | <null>            |                |
	// | order_detail     | json       | NO   |     | <null>            |                |
	// | is_del           | tinyint(1) | NO   |     | 0                 |                |
	// | add_time         | int(13)    | NO   |     | <null>            |                |
	// | last_update_time | timestamp  | NO   |     | CURRENT_TIMESTAMP |                |
	// +------------------+------------+------+-----+-------------------+----------------+
	// insert into hj_order_preview
	// (id, buyer_id, order_sn, is_del, add_time, last_update_time)
	// values (1, 95891865464386, 13376222192996417, 0, 1479983995, 1479983995)

	tableMapEventData := []byte("r\x00\x00\x00\x00\x00\x01\x00\x04test\x00\x10hj_order_preview\x00\a\x03\b\b\xf5\x01\x03\x11\x02\x04\x00\x00")

	tableMapEvent := new(TableMapEvent)
	tableMapEvent.tableIDSize = 6
	err := tableMapEvent.Decode(tableMapEventData)
	c.Assert(err, IsNil)

	rows := new(RowsEvent)
	rows.tableIDSize = 6
	rows.tables = make(map[uint64]*TableMapEvent)
	rows.tables[tableMapEvent.TableID] = tableMapEvent
	rows.Version = 2

	data := []byte("r\x00\x00\x00\x00\x00\x01\x00\x02\x00\a\xff\x80\x01\x00\x00\x00B\ue4d06W\x00\x00A\x10@l\x9a\x85/\x00\x00\x00\x00\x00\x00{\xc36X\x00\x00\x00\x00")

	rows.Rows = nil
	err = rows.Decode(data)
	c.Assert(err, IsNil)
	c.Assert(rows.Rows[0][3], HasLen, 0)
}
49 vendor/github.com/siddontang/go-mysql/replication/time.go generated vendored Normal file
@ -0,0 +1,49 @@
package replication

import (
	"fmt"
	"strings"
	"time"
)

var (
	fracTimeFormat []string
)

// fracTime is a helper structure wrapping Golang Time.
type fracTime struct {
	time.Time

	// Dec must be in [0, 6]
	Dec int

	timestampStringLocation *time.Location
}

func (t fracTime) String() string {
	tt := t.Time
	if t.timestampStringLocation != nil {
		tt = tt.In(t.timestampStringLocation)
	}
	return tt.Format(fracTimeFormat[t.Dec])
}

func formatZeroTime(frac int, dec int) string {
	if dec == 0 {
		return "0000-00-00 00:00:00"
	}

	s := fmt.Sprintf("0000-00-00 00:00:00.%06d", frac)

	// dec must be < 6: if frac is 924000 but dec is 3, we must output 924 here.
	return s[0 : len(s)-(6-dec)]
}

func init() {
	fracTimeFormat = make([]string, 7)
	fracTimeFormat[0] = "2006-01-02 15:04:05"

	for i := 1; i <= 6; i++ {
		fracTimeFormat[i] = fmt.Sprintf("2006-01-02 15:04:05.%s", strings.Repeat("0", i))
	}
}
70 vendor/github.com/siddontang/go-mysql/replication/time_test.go generated vendored Normal file
@ -0,0 +1,70 @@
package replication

import (
	"time"

	. "github.com/pingcap/check"
)

type testTimeSuite struct{}

var _ = Suite(&testTimeSuite{})

func (s *testTimeSuite) TestTime(c *C) {
	tbls := []struct {
		year     int
		month    int
		day      int
		hour     int
		min      int
		sec      int
		microSec int
		frac     int
		expected string
	}{
		{2000, 1, 1, 1, 1, 1, 1, 0, "2000-01-01 01:01:01"},
		{2000, 1, 1, 1, 1, 1, 1, 1, "2000-01-01 01:01:01.0"},
		{2000, 1, 1, 1, 1, 1, 1, 6, "2000-01-01 01:01:01.000001"},
	}

	for _, t := range tbls {
		t1 := fracTime{time.Date(t.year, time.Month(t.month), t.day, t.hour, t.min, t.sec, t.microSec*1000, time.UTC), t.frac, nil}
		c.Assert(t1.String(), Equals, t.expected)
	}

	zeroTbls := []struct {
		frac     int
		dec      int
		expected string
	}{
		{0, 1, "0000-00-00 00:00:00.0"},
		{1, 1, "0000-00-00 00:00:00.0"},
		{123, 3, "0000-00-00 00:00:00.000"},
		{123000, 3, "0000-00-00 00:00:00.123"},
		{123, 6, "0000-00-00 00:00:00.000123"},
		{123000, 6, "0000-00-00 00:00:00.123000"},
	}

	for _, t := range zeroTbls {
		c.Assert(formatZeroTime(t.frac, t.dec), Equals, t.expected)
	}
}

func (s *testTimeSuite) TestTimeStringLocation(c *C) {
	t := fracTime{
		time.Date(2018, time.Month(7), 30, 10, 0, 0, 0, time.FixedZone("EST", -5*3600)),
		0,
		nil,
	}

	c.Assert(t.String(), Equals, "2018-07-30 10:00:00")

	t = fracTime{
		time.Date(2018, time.Month(7), 30, 10, 0, 0, 0, time.FixedZone("EST", -5*3600)),
		0,
		time.UTC,
	}
	c.Assert(t.String(), Equals, "2018-07-30 15:00:00")
}

var _ = Suite(&testTimeSuite{})
207 vendor/github.com/siddontang/go-mysql/schema/schema.go generated vendored
@ -5,6 +5,7 @@
package schema

import (
	"database/sql"
	"fmt"
	"strings"

@ -12,6 +13,11 @@ import (
	"github.com/siddontang/go-mysql/mysql"
)

var ErrTableNotExist = errors.New("table is not exist")
var ErrMissingTableMeta = errors.New("missing table meta")
var HAHealthCheckSchema = "mysql.ha_health_check"

// Different column type
const (
	TYPE_NUMBER = iota + 1 // tinyint, smallint, mediumint, int, bigint, year
	TYPE_FLOAT             // float, double
@ -24,12 +30,16 @@ const (
	TYPE_TIME    // time
	TYPE_BIT     // bit
	TYPE_JSON    // json
	TYPE_DECIMAL // decimal
)

type TableColumn struct {
	Name       string
	Type       int
	Collation  string
	RawType    string
	IsAuto     bool
	IsUnsigned bool
	EnumValues []string
	SetValues  []string
}
@ -47,22 +57,24 @@ type Table struct {
	Columns   []TableColumn
	Indexes   []*Index
	PKColumns []int

	UnsignedColumns []int
}

func (ta *Table) String() string {
	return fmt.Sprintf("%s.%s", ta.Schema, ta.Name)
}

func (ta *Table) AddColumn(name string, columnType string, extra string) {
func (ta *Table) AddColumn(name string, columnType string, collation string, extra string) {
	index := len(ta.Columns)
	ta.Columns = append(ta.Columns, TableColumn{Name: name})
	ta.Columns = append(ta.Columns, TableColumn{Name: name, Collation: collation})
	ta.Columns[index].RawType = columnType

	if strings.Contains(columnType, "int") || strings.HasPrefix(columnType, "year") {
		ta.Columns[index].Type = TYPE_NUMBER
	} else if strings.HasPrefix(columnType, "float") ||
		strings.HasPrefix(columnType, "double") ||
		strings.HasPrefix(columnType, "decimal") {
	if strings.HasPrefix(columnType, "float") ||
		strings.HasPrefix(columnType, "double") {
		ta.Columns[index].Type = TYPE_FLOAT
	} else if strings.HasPrefix(columnType, "decimal") {
		ta.Columns[index].Type = TYPE_DECIMAL
	} else if strings.HasPrefix(columnType, "enum") {
		ta.Columns[index].Type = TYPE_ENUM
		ta.Columns[index].EnumValues = strings.Split(strings.Replace(
@ -93,10 +105,17 @@ func (ta *Table) AddColumn(name string, columnType string, extra string) {
		ta.Columns[index].Type = TYPE_BIT
	} else if strings.HasPrefix(columnType, "json") {
		ta.Columns[index].Type = TYPE_JSON
	} else if strings.Contains(columnType, "int") || strings.HasPrefix(columnType, "year") {
		ta.Columns[index].Type = TYPE_NUMBER
	} else {
		ta.Columns[index].Type = TYPE_STRING
	}

	if strings.Contains(columnType, "unsigned") || strings.Contains(columnType, "zerofill") {
		ta.Columns[index].IsUnsigned = true
		ta.UnsignedColumns = append(ta.UnsignedColumns, index)
	}

	if extra == "auto_increment" {
		ta.Columns[index].IsAuto = true
	}
@ -142,6 +161,35 @@ func (idx *Index) FindColumn(name string) int {
	return -1
}

func IsTableExist(conn mysql.Executer, schema string, name string) (bool, error) {
	query := fmt.Sprintf("SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = '%s' and TABLE_NAME = '%s' LIMIT 1", schema, name)
	r, err := conn.Execute(query)
	if err != nil {
		return false, errors.Trace(err)
	}

	return r.RowNumber() == 1, nil
}

func NewTableFromSqlDB(conn *sql.DB, schema string, name string) (*Table, error) {
	ta := &Table{
		Schema:  schema,
		Name:    name,
		Columns: make([]TableColumn, 0, 16),
		Indexes: make([]*Index, 0, 8),
	}

	if err := ta.fetchColumnsViaSqlDB(conn); err != nil {
		return nil, errors.Trace(err)
	}

	if err := ta.fetchIndexesViaSqlDB(conn); err != nil {
		return nil, errors.Trace(err)
	}

	return ta, nil
}

func NewTable(conn mysql.Executer, schema string, name string) (*Table, error) {
	ta := &Table{
		Schema: schema,
@ -151,18 +199,18 @@ func NewTable(conn mysql.Executer, schema string, name string) (*Table, error) {
	}

	if err := ta.fetchColumns(conn); err != nil {
		return nil, err
		return nil, errors.Trace(err)
	}

	if err := ta.fetchIndexes(conn); err != nil {
		return nil, err
		return nil, errors.Trace(err)
	}

	return ta, nil
}

func (ta *Table) fetchColumns(conn mysql.Executer) error {
	r, err := conn.Execute(fmt.Sprintf("describe `%s`.`%s`", ta.Schema, ta.Name))
	r, err := conn.Execute(fmt.Sprintf("show full columns from `%s`.`%s`", ta.Schema, ta.Name))
	if err != nil {
		return errors.Trace(err)
	}
@ -170,14 +218,39 @@ func (ta *Table) fetchColumns(conn mysql.Executer) error {
	for i := 0; i < r.RowNumber(); i++ {
		name, _ := r.GetString(i, 0)
		colType, _ := r.GetString(i, 1)
		extra, _ := r.GetString(i, 5)
		collation, _ := r.GetString(i, 2)
		extra, _ := r.GetString(i, 6)

		ta.AddColumn(name, colType, extra)
		ta.AddColumn(name, colType, collation, extra)
	}

	return nil
}

func (ta *Table) fetchColumnsViaSqlDB(conn *sql.DB) error {
	r, err := conn.Query(fmt.Sprintf("show full columns from `%s`.`%s`", ta.Schema, ta.Name))
	if err != nil {
		return errors.Trace(err)
	}

	defer r.Close()

	var unusedVal interface{}
	unused := &unusedVal

	for r.Next() {
		var name, colType, extra string
		var collation sql.NullString
		err := r.Scan(&name, &colType, &collation, &unused, &unused, &unused, &extra, &unused, &unused)
		if err != nil {
			return errors.Trace(err)
		}
		ta.AddColumn(name, colType, collation.String, extra)
	}

	return r.Err()
}

func (ta *Table) fetchIndexes(conn mysql.Executer) error {
	r, err := conn.Execute(fmt.Sprintf("show index from `%s`.`%s`", ta.Schema, ta.Name))
	if err != nil {
@ -197,6 +270,87 @@ func (ta *Table) fetchIndexes(conn mysql.Executer) error {
		currentIndex.AddColumn(colName, cardinality)
	}

	return ta.fetchPrimaryKeyColumns()

}

func (ta *Table) fetchIndexesViaSqlDB(conn *sql.DB) error {
	r, err := conn.Query(fmt.Sprintf("show index from `%s`.`%s`", ta.Schema, ta.Name))
	if err != nil {
		return errors.Trace(err)
	}

	defer r.Close()

	var currentIndex *Index
	currentName := ""

	var unusedVal interface{}
	unused := &unusedVal

	for r.Next() {
		var indexName, colName string
		var cardinality interface{}

		err := r.Scan(
			&unused,
			&unused,
			&indexName,
			&unused,
			&colName,
			&unused,
			&cardinality,
			&unused,
			&unused,
			&unused,
			&unused,
			&unused,
			&unused,
		)
		if err != nil {
			return errors.Trace(err)
		}

		if currentName != indexName {
			currentIndex = ta.AddIndex(indexName)
			currentName = indexName
		}

		c := toUint64(cardinality)
		currentIndex.AddColumn(colName, c)
	}

	return ta.fetchPrimaryKeyColumns()
}

func toUint64(i interface{}) uint64 {
	switch i := i.(type) {
	case int:
		return uint64(i)
	case int8:
		return uint64(i)
	case int16:
		return uint64(i)
	case int32:
		return uint64(i)
	case int64:
		return uint64(i)
	case uint:
		return uint64(i)
	case uint8:
		return uint64(i)
	case uint16:
		return uint64(i)
	case uint32:
		return uint64(i)
	case uint64:
		return uint64(i)
	}

	return 0
}

func (ta *Table) fetchPrimaryKeyColumns() error {
	if len(ta.Indexes) == 0 {
		return nil
	}
@ -213,3 +367,32 @@ func (ta *Table) fetchIndexes(conn mysql.Executer) error {

	return nil
}

// Get primary keys in one row for a table, a table may use multi fields as the PK
func (ta *Table) GetPKValues(row []interface{}) ([]interface{}, error) {
	indexes := ta.PKColumns
	if len(indexes) == 0 {
		return nil, errors.Errorf("table %s has no PK", ta)
	} else if len(ta.Columns) != len(row) {
		return nil, errors.Errorf("table %s has %d columns, but row data %v len is %d", ta,
			len(ta.Columns), row, len(row))
	}

	values := make([]interface{}, 0, len(indexes))

	for _, index := range indexes {
		values = append(values, row[index])
	}

	return values, nil
}

// Get term column's value
func (ta *Table) GetColumnValue(column string, row []interface{}) (interface{}, error) {
	index := ta.FindColumn(column)
	if index == -1 {
		return nil, errors.Errorf("table %s has no column name %s", ta, column)
	}

	return row[index], nil
}
34 vendor/github.com/siddontang/go-mysql/schema/schema_test.go generated vendored
@ -1,12 +1,14 @@
package schema

import (
	"database/sql"
	"flag"
	"fmt"
	"testing"

	. "github.com/pingcap/check"
	"github.com/siddontang/go-mysql/client"
	_ "github.com/siddontang/go-mysql/driver"
)

// use docker mysql for test
@ -17,7 +19,8 @@ func Test(t *testing.T) {
}

type schemaTestSuite struct {
	conn *client.Conn
	conn  *client.Conn
	sqlDB *sql.DB
}

var _ = Suite(&schemaTestSuite{})
@ -26,12 +29,19 @@ func (s *schemaTestSuite) SetUpSuite(c *C) {
	var err error
	s.conn, err = client.Connect(fmt.Sprintf("%s:%d", *host, 3306), "root", "", "test")
	c.Assert(err, IsNil)

	s.sqlDB, err = sql.Open("mysql", fmt.Sprintf("root:@%s:3306", *host))
	c.Assert(err, IsNil)
}

func (s *schemaTestSuite) TearDownSuite(c *C) {
	if s.conn != nil {
		s.conn.Close()
	}

	if s.sqlDB != nil {
		s.sqlDB.Close()
	}
}

func (s *schemaTestSuite) TestSchema(c *C) {
@ -44,10 +54,14 @@ func (s *schemaTestSuite) TestSchema(c *C) {
	id1 INT,
	id2 INT,
	name VARCHAR(256),
	e ENUM("a", "b", "c"),
	status ENUM('appointing','serving','abnormal','stop','noaftermarket','finish','financial_audit'),
	se SET('a', 'b', 'c'),
	f FLOAT,
	d DECIMAL(2, 1),
	uint INT UNSIGNED,
	zfint INT ZEROFILL,
	name_ucs VARCHAR(256) CHARACTER SET ucs2,
	name_utf8 VARCHAR(256) CHARACTER SET utf8,
	PRIMARY KEY(id2, id),
	UNIQUE (id1),
	INDEX name_idx (name)
@ -60,15 +74,25 @@ func (s *schemaTestSuite) TestSchema(c *C) {
	ta, err := NewTable(s.conn, "test", "schema_test")
	c.Assert(err, IsNil)

	c.Assert(ta.Columns, HasLen, 8)
	c.Assert(ta.Columns, HasLen, 12)
	c.Assert(ta.Indexes, HasLen, 3)
	c.Assert(ta.PKColumns, DeepEquals, []int{2, 0})
	c.Assert(ta.Indexes[0].Columns, HasLen, 2)
	c.Assert(ta.Indexes[0].Name, Equals, "PRIMARY")
	c.Assert(ta.Indexes[2].Name, Equals, "name_idx")
	c.Assert(ta.Columns[4].EnumValues, DeepEquals, []string{"a", "b", "c"})
	c.Assert(ta.Columns[4].EnumValues, DeepEquals, []string{"appointing", "serving", "abnormal", "stop", "noaftermarket", "finish", "financial_audit"})
	c.Assert(ta.Columns[5].SetValues, DeepEquals, []string{"a", "b", "c"})
	c.Assert(ta.Columns[7].Type, Equals, TYPE_FLOAT)
	c.Assert(ta.Columns[7].Type, Equals, TYPE_DECIMAL)
	c.Assert(ta.Columns[0].IsUnsigned, IsFalse)
	c.Assert(ta.Columns[8].IsUnsigned, IsTrue)
	c.Assert(ta.Columns[9].IsUnsigned, IsTrue)
	c.Assert(ta.Columns[10].Collation, Matches, "^ucs2.*")
	c.Assert(ta.Columns[11].Collation, Matches, "^utf8.*")

	taSqlDb, err := NewTableFromSqlDB(s.sqlDB, "test", "schema_test")
	c.Assert(err, IsNil)

	c.Assert(taSqlDb, DeepEquals, ta)
}

func (s *schemaTestSuite) TestQuoteSchema(c *C) {
253 vendor/github.com/siddontang/go-mysql/server/auth.go (generated, vendored)
@@ -2,118 +2,173 @@ package server

 import (
 	"bytes"
-	"encoding/binary"
+	"crypto/rand"
+	"crypto/rsa"
+	"crypto/sha1"
+	"crypto/sha256"
+	"crypto/tls"
+	"fmt"

 	"github.com/juju/errors"
 	. "github.com/siddontang/go-mysql/mysql"
 )

-func (c *Conn) writeInitialHandshake() error {
-	capability := CLIENT_LONG_PASSWORD | CLIENT_LONG_FLAG |
-		CLIENT_CONNECT_WITH_DB | CLIENT_PROTOCOL_41 |
-		CLIENT_TRANSACTIONS | CLIENT_SECURE_CONNECTION
+var ErrAccessDenied = errors.New("access denied")

-	data := make([]byte, 4, 128)
+func (c *Conn) compareAuthData(authPluginName string, clientAuthData []byte) error {
+	switch authPluginName {
+	case AUTH_NATIVE_PASSWORD:
+		if err := c.acquirePassword(); err != nil {
+			return err
+		}
+		return c.compareNativePasswordAuthData(clientAuthData, c.password)

-	//min version 10
-	data = append(data, 10)
+	case AUTH_CACHING_SHA2_PASSWORD:
+		if err := c.compareCacheSha2PasswordAuthData(clientAuthData); err != nil {
+			return err
+		}
+		if c.cachingSha2FullAuth {
+			return c.handleAuthSwitchResponse()
+		}
+		return nil

-	//server version[00]
-	data = append(data, ServerVersion...)
-	data = append(data, 0)
+	case AUTH_SHA256_PASSWORD:
+		if err := c.acquirePassword(); err != nil {
+			return err
+		}
+		cont, err := c.handlePublicKeyRetrieval(clientAuthData)
+		if err != nil {
+			return err
+		}
+		if !cont {
+			return nil
+		}
+		return c.compareSha256PasswordAuthData(clientAuthData, c.password)

-	//connection id
-	data = append(data, byte(c.connectionID), byte(c.connectionID>>8), byte(c.connectionID>>16), byte(c.connectionID>>24))
-
-	//auth-plugin-data-part-1
-	data = append(data, c.salt[0:8]...)
-
-	//filter [00]
-	data = append(data, 0)
-
-	//capability flag lower 2 bytes, using default capability here
-	data = append(data, byte(capability), byte(capability>>8))
-
-	//charset, utf-8 default
-	data = append(data, uint8(DEFAULT_COLLATION_ID))
-
-	//status
-	data = append(data, byte(c.status), byte(c.status>>8))
-
-	//below 13 byte may not be used
-	//capability flag upper 2 bytes, using default capability here
-	data = append(data, byte(capability>>16), byte(capability>>24))
-
-	//filter [0x15], for wireshark dump, value is 0x15
-	data = append(data, 0x15)
-
-	//reserved 10 [00]
-	data = append(data, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
-
-	//auth-plugin-data-part-2
-	data = append(data, c.salt[8:]...)
-
-	//filter [00]
-	data = append(data, 0)
-
-	return c.WritePacket(data)
+	default:
+		return errors.Errorf("unknown authentication plugin name '%s'", authPluginName)
+	}
 }

-func (c *Conn) readHandshakeResponse(password string) error {
-	data, err := c.ReadPacket()
+func (c *Conn) acquirePassword() error {
+	password, found, err := c.credentialProvider.GetCredential(c.user)
 	if err != nil {
 		return err
 	}
-
-	pos := 0
-
-	//capability
-	c.capability = binary.LittleEndian.Uint32(data[:4])
-	pos += 4
-
-	//skip max packet size
-	pos += 4
-
-	//charset, skip, if you want to use another charset, use set names
-	//c.collation = CollationId(data[pos])
-	pos++
-
-	//skip reserved 23[00]
-	pos += 23
-
-	//user name
-	user := string(data[pos : pos+bytes.IndexByte(data[pos:], 0)])
-	pos += len(user) + 1
-
-	if c.user != user {
-		return NewDefaultError(ER_NO_SUCH_USER, user, c.RemoteAddr().String())
+	if !found {
+		return NewDefaultError(ER_NO_SUCH_USER, c.user, c.RemoteAddr().String())
 	}
-
-	//auth length and auth
-	authLen := int(data[pos])
-	pos++
-	auth := data[pos : pos+authLen]
-
-	checkAuth := CalcPassword(c.salt, []byte(password))
-
-	if !bytes.Equal(auth, checkAuth) {
-		return NewDefaultError(ER_ACCESS_DENIED_ERROR, c.RemoteAddr().String(), c.user, "Yes")
-	}
-
-	pos += authLen
-
-	if c.capability|CLIENT_CONNECT_WITH_DB > 0 {
-		if len(data[pos:]) == 0 {
-			return nil
-		}
-
-		db := string(data[pos : pos+bytes.IndexByte(data[pos:], 0)])
-		pos += len(db) + 1
-
-		if err = c.h.UseDB(db); err != nil {
-			return err
-		}
-	}
-
+	c.password = password
 	return nil
 }
+
+func scrambleValidation(cached, nonce, scramble []byte) bool {
+	// SHA256(SHA256(SHA256(STORED_PASSWORD)), NONCE)
+	crypt := sha256.New()
+	crypt.Write(cached)
+	crypt.Write(nonce)
+	message2 := crypt.Sum(nil)
+	// SHA256(PASSWORD)
+	if len(message2) != len(scramble) {
+		return false
+	}
+	for i := range message2 {
+		message2[i] ^= scramble[i]
+	}
+	// SHA256(SHA256(PASSWORD)
+	crypt.Reset()
+	crypt.Write(message2)
+	m := crypt.Sum(nil)
+	return bytes.Equal(m, cached)
+}
+
+func (c *Conn) compareNativePasswordAuthData(clientAuthData []byte, password string) error {
+	if bytes.Equal(CalcPassword(c.salt, []byte(c.password)), clientAuthData) {
+		return nil
+	}
+	return ErrAccessDenied
+}
+
+func (c *Conn) compareSha256PasswordAuthData(clientAuthData []byte, password string) error {
+	// Empty passwords are not hashed, but sent as empty string
+	if len(clientAuthData) == 0 {
+		if password == "" {
+			return nil
+		}
+		return ErrAccessDenied
+	}
+	if tlsConn, ok := c.Conn.Conn.(*tls.Conn); ok {
+		if !tlsConn.ConnectionState().HandshakeComplete {
+			return errors.New("incomplete TSL handshake")
+		}
+		// connection is SSL/TLS, client should send plain password
+		// deal with the trailing \NUL added for plain text password received
+		if l := len(clientAuthData); l != 0 && clientAuthData[l-1] == 0x00 {
+			clientAuthData = clientAuthData[:l-1]
+		}
+		if bytes.Equal(clientAuthData, []byte(password)) {
+			return nil
+		}
+		return ErrAccessDenied
+	} else {
+		// client should send encrypted password
+		// decrypt
+		dbytes, err := rsa.DecryptOAEP(sha1.New(), rand.Reader, (c.serverConf.tlsConfig.Certificates[0].PrivateKey).(*rsa.PrivateKey), clientAuthData, nil)
+		if err != nil {
+			return err
+		}
+		plain := make([]byte, len(password)+1)
+		copy(plain, password)
+		for i := range plain {
+			j := i % len(c.salt)
+			plain[i] ^= c.salt[j]
+		}
+		if bytes.Equal(plain, dbytes) {
+			return nil
+		}
+		return ErrAccessDenied
+	}
+}
+
+func (c *Conn) compareCacheSha2PasswordAuthData(clientAuthData []byte) error {
+	// Empty passwords are not hashed, but sent as empty string
+	if len(clientAuthData) == 0 {
+		if err := c.acquirePassword(); err != nil {
+			return err
+		}
+		if c.password == "" {
+			return nil
+		}
+		return ErrAccessDenied
+	}
+	// the caching of 'caching_sha2_password' in MySQL, see: https://dev.mysql.com/worklog/task/?id=9591
+	if _, ok := c.credentialProvider.(*InMemoryProvider); ok {
+		// since we have already kept the password in memory and calculate the scramble is not that high of cost, we eliminate
+		// the caching part. So our server will never ask the client to do a full authentication via RSA key exchange and it appears
+		// like the auth will always hit the cache.
+		if err := c.acquirePassword(); err != nil {
+			return err
+		}
+		if bytes.Equal(CalcCachingSha2Password(c.salt, c.password), clientAuthData) {
+			// 'fast' auth: write "More data" packet (first byte == 0x01) with the second byte = 0x03
+			return c.writeAuthMoreDataFastAuth()
+		}
+		return ErrAccessDenied
+	}
+	// other type of credential provider, we use the cache
+	cached, ok := c.serverConf.cacheShaPassword.Load(fmt.Sprintf("%s@%s", c.user, c.Conn.LocalAddr()))
+	if ok {
+		// Scramble validation
+		if scrambleValidation(cached.([]byte), c.salt, clientAuthData) {
+			// 'fast' auth: write "More data" packet (first byte == 0x01) with the second byte = 0x03
+			return c.writeAuthMoreDataFastAuth()
+		}
+		return ErrAccessDenied
+	}
+	// cache miss, do full auth
+	if err := c.writeAuthMoreDataFullAuth(); err != nil {
+		return err
+	}
+	c.cachingSha2FullAuth = true
+	return nil
+}
133 vendor/github.com/siddontang/go-mysql/server/auth_switch_response.go (generated, vendored, new file)
@@ -0,0 +1,133 @@
+package server
+
+import (
+	"bytes"
+	"crypto/rand"
+	"crypto/rsa"
+	"crypto/sha1"
+	"crypto/sha256"
+	"crypto/tls"
+	"fmt"
+
+	"github.com/juju/errors"
+	. "github.com/siddontang/go-mysql/mysql"
+)
+
+func (c *Conn) handleAuthSwitchResponse() error {
+	authData, err := c.readAuthSwitchRequestResponse()
+	if err != nil {
+		return err
+	}
+
+	switch c.authPluginName {
+	case AUTH_NATIVE_PASSWORD:
+		if err := c.acquirePassword(); err != nil {
+			return err
+		}
+		if !bytes.Equal(CalcPassword(c.salt, []byte(c.password)), authData) {
+			return ErrAccessDenied
+		}
+		return nil
+
+	case AUTH_CACHING_SHA2_PASSWORD:
+		if !c.cachingSha2FullAuth {
+			// Switched auth method but no MoreData packet send yet
+			if err := c.compareCacheSha2PasswordAuthData(authData); err != nil {
+				return err
+			} else {
+				if c.cachingSha2FullAuth {
+					return c.handleAuthSwitchResponse()
+				}
+				return nil
+			}
+		}
+		// AuthMoreData packet already sent, do full auth
+		if err := c.handleCachingSha2PasswordFullAuth(authData); err != nil {
+			return err
+		}
+		c.writeCachingSha2Cache()
+		return nil
+
+	case AUTH_SHA256_PASSWORD:
+		cont, err := c.handlePublicKeyRetrieval(authData)
+		if err != nil {
+			return err
+		}
+		if !cont {
+			return nil
+		}
+		if err := c.acquirePassword(); err != nil {
+			return err
+		}
+		return c.compareSha256PasswordAuthData(authData, c.password)
+
+	default:
+		return errors.Errorf("unknown authentication plugin name '%s'", c.authPluginName)
+	}
+}
+
+func (c *Conn) handleCachingSha2PasswordFullAuth(authData []byte) error {
+	if err := c.acquirePassword(); err != nil {
+		return err
+	}
+	if tlsConn, ok := c.Conn.Conn.(*tls.Conn); ok {
+		if !tlsConn.ConnectionState().HandshakeComplete {
+			return errors.New("incomplete TSL handshake")
+		}
+		// connection is SSL/TLS, client should send plain password
+		// deal with the trailing \NUL added for plain text password received
+		if l := len(authData); l != 0 && authData[l-1] == 0x00 {
+			authData = authData[:l-1]
+		}
+		if bytes.Equal(authData, []byte(c.password)) {
+			return nil
+		}
+		return ErrAccessDenied
+	} else {
+		// client either request for the public key or send the encrypted password
+		if len(authData) == 1 && authData[0] == 0x02 {
+			// send the public key
+			if err := c.writeAuthMoreDataPubkey(); err != nil {
+				return err
+			}
+			// read the encrypted password
+			var err error
+			if authData, err = c.readAuthSwitchRequestResponse(); err != nil {
+				return err
+			}
+		}
+		// the encrypted password
+		// decrypt
+		dbytes, err := rsa.DecryptOAEP(sha1.New(), rand.Reader, (c.serverConf.tlsConfig.Certificates[0].PrivateKey).(*rsa.PrivateKey), authData, nil)
+		if err != nil {
+			return err
+		}
+		plain := make([]byte, len(c.password)+1)
+		copy(plain, c.password)
+		for i := range plain {
+			j := i % len(c.salt)
+			plain[i] ^= c.salt[j]
+		}
+		if bytes.Equal(plain, dbytes) {
+			return nil
+		}
+		return ErrAccessDenied
+	}
+}
+
+func (c *Conn) writeCachingSha2Cache() {
+	// write cache
+	if c.password == "" {
+		return
+	}
+	// SHA256(PASSWORD)
+	crypt := sha256.New()
+	crypt.Write([]byte(c.password))
+	m1 := crypt.Sum(nil)
+	// SHA256(SHA256(PASSWORD))
+	crypt.Reset()
+	crypt.Write(m1)
+	m2 := crypt.Sum(nil)
+	// caching_sha2_password will maintain an in-memory hash of `user`@`host` => SHA256(SHA256(PASSWORD))
+	c.serverConf.cacheShaPassword.Store(fmt.Sprintf("%s@%s", c.user, c.Conn.LocalAddr()), m2)
+}
Some files were not shown because too many files have changed in this diff.