Merge branch 'master' into adaptive-eta
This commit is contained in:
commit cf08babc55

12  .github/ISSUE_TEMPLATE.md  vendored
@@ -1,9 +1,13 @@
> This is the place to report a bug, ask a question, suggest an enhancement.
> This is the place to report a bug, ask a question, or suggest an enhancement.

> This is the place to make a discussion before creating a PR.
> This is also the place to make a discussion before creating a PR.

> Please label your Issue
> If this is a bug report, please provide a test case (e.g., your table definition and gh-ost command) and the error output.

> Please understand if this Issue is not addressed immediately or in a timeframe you were expecting.
> Please use markdown to format code or SQL: https://guides.github.com/features/mastering-markdown/

> Please label the issue on the right (bug, enhancement, question, etc.).

> And please understand if this issue is not addressed immediately or in a timeframe you were expecting.

> Thank you!
2  build.sh
@@ -2,7 +2,7 @@
#
#

RELEASE_VERSION="1.0.18"
RELEASE_VERSION="1.0.23"

function build {
osname=$1
@@ -43,10 +43,26 @@ password=123456

See `exact-rowcount`

### critical-load-interval-millis

`--critical-load` defines a threshold that, when met, `gh-ost` panics and bails out. The default behavior is to bail out immediately when meeting this threshold.

This may sometimes lead to migrations bailing out on a very short spike, that, while in itself is impacting production and is worth investigating, isn't reason enough to kill a 10 hour migration.

When `--critical-load-interval-millis` is specified (e.g. `--critical-load-interval-millis=2500`), `gh-ost` gives a second chance: when it meets `critical-load` threshold, it doesn't bail out. Instead, it starts a timer (in this example: `2.5` seconds) and re-checks `critical-load` when the timer expires. If `critical-load` is met again, `gh-ost` panics and bails out. If not, execution continues.

This is somewhat similar to a Nagios `n`-times test, where `n` in our case is always `2`.
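The second-chance behavior described above can be sketched in a few lines of Go. This is an illustration only, not gh-ost's implementation (the real logic lives in the throttler and reads MySQL status variables through the applier connection); the `readStatusValue` callback and the threshold values are hypothetical stand-ins.

```go
package main

import (
	"fmt"
	"time"
)

// criticalLoad maps a status variable name to its panic threshold,
// e.g. Threads_running=1000 as given via --critical-load.
type criticalLoad map[string]int64

// checkCriticalLoad returns the first (name, value, threshold) that meets or
// exceeds its threshold. readStatusValue is a stand-in for reading a MySQL
// status variable.
func checkCriticalLoad(load criticalLoad, readStatusValue func(string) int64) (met bool, name string, value, threshold int64) {
	for name, threshold = range load {
		value = readStatusValue(name)
		if value >= threshold {
			return true, name, value, threshold
		}
	}
	return false, "", 0, 0
}

// enforceCriticalLoad implements the "second chance": with interval == 0 it
// bails out immediately; otherwise it waits and only bails out if the
// threshold is still met on the re-check.
func enforceCriticalLoad(load criticalLoad, interval time.Duration, readStatusValue func(string) int64) error {
	met, name, value, threshold := checkCriticalLoad(load, readStatusValue)
	if !met {
		return nil
	}
	if interval == 0 {
		return fmt.Errorf("critical-load met: %s=%d, >=%d", name, value, threshold)
	}
	time.Sleep(interval)
	if met, name, value, threshold = checkCriticalLoad(load, readStatusValue); met {
		return fmt.Errorf("critical-load met again after %s: %s=%d, >=%d", interval, name, value, threshold)
	}
	return nil // the spike passed; keep migrating
}

func main() {
	load := criticalLoad{"Threads_running": 1000}
	// Hypothetical reader that reports a short spike which subsides.
	calls := 0
	read := func(string) int64 {
		calls++
		if calls == 1 {
			return 1200 // spike on the first check
		}
		return 40 // back to normal on the re-check
	}
	if err := enforceCriticalLoad(load, 2500*time.Millisecond, read); err != nil {
		panic(err)
	}
	fmt.Println("spike subsided; migration continues")
}
```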
### cut-over

Optional. Default is `safe`. See more discussion in [cut-over](cut-over.md)

### discard-foreign-keys

**Danger**: this flag will _silently_ discard any foreign keys existing on your table.

At this time (10-2016) `gh-ost` does not support foreign keys on migrated tables (it bails out when it notices a FK on the migrated table). However, it is able to support _dropping_ of foreign keys via this flag. If you're trying to get rid of foreign keys in your environment, this is a useful flag.

### exact-rowcount

A `gh-ost` execution need to copy whatever rows you have in your existing table onto the ghost table. This can, and often be, a large number. Exactly what that number is?
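The `exact-rowcount` entry above is cut short by the hunk boundary. Elsewhere in this commit's documentation the behavior is described as issuing a `select count(*) from mydb.mytable` when `--exact-rowcount` is provided, as opposed to relying on an estimate. The following is a minimal, illustrative sketch of the two approaches; the driver import, connection string, and schema/table names are assumptions, and this is not gh-ost's actual code.

```go
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/go-sql-driver/mysql" // any MySQL driver works for this sketch
)

// estimateRowCount reads the optimizer's row estimate from
// information_schema; it is fast but can be far off for InnoDB tables.
func estimateRowCount(db *sql.DB, schema, table string) (int64, error) {
	var estimate int64
	query := `select table_rows from information_schema.tables where table_schema = ? and table_name = ?`
	err := db.QueryRow(query, schema, table).Scan(&estimate)
	return estimate, err
}

// exactRowCount issues SELECT COUNT(*), which is exact but pays the cost of a
// full scan -- the price --exact-rowcount accepts in exchange for a better ETA.
func exactRowCount(db *sql.DB, schema, table string) (int64, error) {
	var count int64
	query := fmt.Sprintf("select count(*) from `%s`.`%s`", schema, table)
	err := db.QueryRow(query).Scan(&count)
	return count, err
}

func main() {
	db, err := sql.Open("mysql", "gh-ost:password@tcp(replica.example.com:3306)/")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	if n, err := exactRowCount(db, "mydb", "mytable"); err == nil {
		fmt.Printf("exact row count: %d\n", n)
	}
}
```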
@@ -98,4 +114,4 @@ See `approve-renamed-columns`

### test-on-replica

Issue the migration on a replica; do not modify data on master. Useful for validating, testing and benchmarking. See [test-on-replica](test-on-replica.md)
Issue the migration on a replica; do not modify data on master. Useful for validating, testing and benchmarking. See [testing-on-replica](testing-on-replica.md)
@@ -2,7 +2,7 @@

Even though `gh-ost` relies on Row Based Replication (RBR), it does not mean you can't keep your Statement Based Replication (SBR).

`gh-ost` is happy to, and actually prefers and suggests so, connect to a replica. On this replica, it is happy to:
`gh-ost` is happy to, and actually prefers and suggests to, connect to a replica. On this replica, it is happy to:
- issue the heavyweight `INFORMATION_SCHEMA` queries that make a table structure analysis
- issue a `select count(*) from mydb.mytable`, should `--exact-rowcount` be provided
- connect itself as a fake replica to get the binary log stream
@@ -11,7 +11,7 @@ All of the above can be executed on the master, but we're more comfortable that

Please note the third item: `gh-ost` connects as a fake replica and pulls the binary logs. This is how `gh-ost` finds the table's changelog: it looks up entries in the binary log.

The magic is that your master can still produce SRB, but if you have a replica with `log-slave-updates`, you can also configure it to have `binlog_format='ROW'`. Such a replica accepts SBR statements from its master, and produces RBR statements onto its binary logs.
The magic is that your master can still produce SBR, but if you have a replica with `log-slave-updates`, you can also configure it to have `binlog_format='ROW'`. Such a replica accepts SBR statements from its master, and produces RBR statements onto its binary logs.

`gh-ost` is happy to modify the `binlog_format` on the replica for you:
- If you supply `--switch-to-rbr`, `gh-ost` will convert the binlog format for you, and restart replication to make sure this takes effect.
@@ -2,7 +2,9 @@

### Requirements

- You will need to have one server serving Row Based Replication (RBR) format binary logs. Right now `FULL` row image is supported. `MINIMAL` to be supported in the near future. `gh-ost` prefers to work with replicas. You may [still have your master configured with Statement Based Replication](migrating-with-sbr) (SBR).
- You will need to have one server serving Row Based Replication (RBR) format binary logs. Right now `FULL` row image is supported. `MINIMAL` to be supported in the near future. `gh-ost` prefers to work with replicas. You may [still have your master configured with Statement Based Replication](migrating-with-sbr.md) (SBR).

- If you are using a replica, the table must have an identical schema between the master and replica.

- `gh-ost` requires an account with these privileges:

@@ -15,20 +17,29 @@ The `SUPER` privilege is required for `STOP SLAVE`, `START SLAVE` operations. Th

- Switching your `binlog_format` to `ROW`, in the case where it is _not_ `ROW` and you explicitly specified `--switch-to-rbr`
- If your replication is already in RBR (`binlog_format=ROW`) you can specify `--assume-rbr` to avoid the `STOP SLAVE/START SLAVE` operations, hence no need for `SUPER`.

- Running `--test-on-replica`: before the cut-over phase, `gh-ost` stops replication so that you can compare the two tables and satisfy that the migration is sound.

### Limitations

- Foreign keys not supported. They may be supported in the future, to some extent.

- Triggers are not supported. They may be supported in the future.

- MySQL 5.7 generated columns are not supported. They may be supported in the future.

- The two _before_ & _after_ tables must share some `UNIQUE KEY`. Such key would be used by `gh-ost` to iterate the table.
  - As an example, if your table has a single `UNIQUE KEY` and no `PRIMARY KEY`, and you wish to replace it with a `PRIMARY KEY`, you will need two migrations: one to add the `PRIMARY KEY` (this migration will use the existing `UNIQUE KEY`), another to drop the now redundant `UNIQUE KEY` (this migration will use the `PRIMARY KEY`).

- The chosen migration key must not include columns with `NULL` values.
  - `gh-ost` will do its best to pick a migration key with non-nullable columns. It will by default refuse a migration where the only possible `UNIQUE KEY` includes nullable-columns. You may override this refusal via `--allow-nullable-unique-key` but **you must** be sure there are no actual `NULL` values in those columns. Such `NULL` values would cause a data integrity problem and potentially a corrupted migration.

- It is not allowed to migrate a table where another table exists with same name and different upper/lower case.
  - For example, you may not migrate `MyTable` if another table called `MYtable` exists in the same schema.

- Amazon RDS and Google Cloud SQL are currently not supported
  - We began working towards removing this limitation. See tracking issue: https://github.com/github/gh-ost/issues/163

- Multisource is not supported when migrating via replica. It _should_ work (but never tested) when connecting directly to master (`--allow-on-master`)

- Master-master setup is only supported in active-passive setup. Active-active (where table is being written to on both masters concurrently) is unsupported. It may be supported in the future.
@@ -49,12 +49,12 @@ It's your job to:

Simple:
```shell
$ gh-osc --host=myhost.com --conf=/etc/gh-ost.cnf --database=test --table=sample_table --alter="engine=innodb" --chunk-size=2000 --max-load=Threads_connected=20 --initially-drop-ghost-table --initially-drop-old-table --test-on-replica --verbose --execute
$ gh-ost --host=myhost.com --conf=/etc/gh-ost.cnf --database=test --table=sample_table --alter="engine=innodb" --chunk-size=2000 --max-load=Threads_connected=20 --initially-drop-ghost-table --initially-drop-old-table --test-on-replica --verbose --execute
```

Elaborate:
```shell
$ gh-osc --host=myhost.com --conf=/etc/gh-ost.cnf --database=test --table=sample_table --alter="engine=innodb" --chunk-size=2000 --max-load=Threads_connected=20 --switch-to-rbr --initially-drop-ghost-table --initially-drop-old-table --test-on-replica --postpone-cut-over-flag-file=/tmp/ghost-postpone.flag --exact-rowcount --concurrent-rowcount --allow-nullable-unique-key --verbose --execute
$ gh-ost --host=myhost.com --conf=/etc/gh-ost.cnf --database=test --table=sample_table --alter="engine=innodb" --chunk-size=2000 --max-load=Threads_connected=20 --switch-to-rbr --initially-drop-ghost-table --initially-drop-old-table --test-on-replica --postpone-cut-over-flag-file=/tmp/ghost-postpone.flag --exact-rowcount --concurrent-rowcount --allow-nullable-unique-key --verbose --execute
```
- Count exact number of rows (makes ETA estimation very good). This goes at the expense of paying the time for issuing a `SELECT COUNT(*)` on your table. We use this lovingly.
- Automatically switch to `RBR` if replica is configured as `SBR`. See also: [migrating with SBR](migrating-with-sbr.md)
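The cheatsheet above notes that `--exact-rowcount` makes ETA estimation much better. Below is a simplified sketch of how a progress percentage and a naive linear ETA can be derived from rows copied versus total rows. It is illustrative only (a later hunk in this commit shows the actual progress calculation in the migrator's `printStatus`), and the numbers in `main` are made up.

```go
package main

import (
	"fmt"
	"time"
)

// progressAndETA derives a completion percentage and a naive, linear ETA from
// the number of rows copied so far versus the (estimated or exact) total row
// count. With --exact-rowcount the total is accurate, so both outputs are far
// more trustworthy.
func progressAndETA(rowsCopied, rowsEstimate int64, elapsed time.Duration) (pct float64, eta time.Duration) {
	if rowsEstimate <= 0 {
		return 100.0, 0
	}
	pct = 100.0 * float64(rowsCopied) / float64(rowsEstimate)
	if rowsCopied > 0 {
		remaining := rowsEstimate - rowsCopied
		eta = time.Duration(float64(elapsed) * float64(remaining) / float64(rowsCopied))
	}
	return pct, eta
}

func main() {
	// Hypothetical numbers: 2.3M of 10M rows copied in 14 minutes.
	pct, eta := progressAndETA(2300000, 10000000, 14*time.Minute)
	fmt.Printf("progress: %.1f%%, naive ETA: %s\n", pct, eta.Round(time.Second))
}
```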
@@ -4,24 +4,27 @@ Here are technical considerations you may be interested in. We write here things

# Connecting to replica

`gh-ost` prefers connecting to replica. If your master uses Statement Based Replication, this is a _requirement_.
`gh-ost` prefers connecting to a replica. If your master uses Statement Based Replication, this is a _requirement_.

What does "connect to replica" mean?

- `gh-ost` connects to the replica as a normal client
- It additionally connects as a replica to the replica (pretends to be a MySQL replica itself)
- It auto-detects master
- It auto-detects the master

`gh-ost` reads the RBR binary logs from the replica, and applies events onto the master as tables are being migrated.
`gh-ost` reads the RBR binary logs from the replica, and applies events onto the master as part of the table migration.

THE FINE PRINT:

- You trust the replica's binary logs to represent events applied on master.
  If you don't trust the replica, if you suspect there's data drift between replica & master, take notice. If your master is RBR, do instead connect `gh-ost` to master, via `--allow-on-master` (see [cheatsheet](cheatsheet.md)).
  Our take: we trust replica data; if master dies in production, we promote a replica. Our read serving is based on replica(s).
- If you don't trust the replica, or if you suspect there's data drift between replica & master, take notice.
- If the table on the replica has a different schema than the master, `gh-ost` likely won't work correctly.
- Our take: we trust replica data; if master dies in production, we promote a replica. Our read serving is based on replica(s).

- If your master is RBR, do instead connect `gh-ost` to master, via `--allow-on-master` (see [cheatsheet](cheatsheet.md)).

- Replication needs to run.
  This is an obvious, but worth stating. You cannot perform a migration with "connect to replica" if your replica lags. `gh-ost` will actually do all it can so that replication does not lag, and avoid critical operations at such time when replication does lag.
- This is an obvious, but worth stating. You cannot perform a migration with "connect to replica" if your replica lags. `gh-ost` will actually do all it can so that replication does not lag, and avoid critical operations if replication is lagging.

# Network usage
@@ -30,12 +33,12 @@ THE FINE PRINT:
THE FINE PRINT:

- `gh-ost` delivers more network traffic than other online-schema-change tools, that let MySQL handle all data transfer internally. This is part of the [triggerless design](triggerless-design.md).
  Our take: we deal with cross-DC migration traffic and this is working well for us.
- Our take: we deal with cross-DC migration traffic and this is working well for us.

# Impersonating as a replica

`gh-ost` impersonates as a replica: connects to a MySQL server, says "oh hey, I'm a replica, please send me binary logs kthx".
`gh-ost` impersonates as a replica: it connects to a MySQL server, says "oh hey, I'm a replica, please send me binary logs kthx".

THE FINE PRINT:

- `SHOW SLAVE HOSTS` or `SHOW PROCESSLIST` will list down this strange "replica" that you can't really connect to.
- `SHOW SLAVE HOSTS` or `SHOW PROCESSLIST` will list this strange "replica" that you can't really connect to.
@@ -46,7 +46,7 @@ Note that you may dynamically change both `replication-lag-query` and the `throt

`--max-load='Threads_running=100,Threads_connected=500'`

Metrics must be valid, numeric [statis variables](http://dev.mysql.com/doc/refman/5.6/en/server-status-variables.html)
Metrics must be valid, numeric [status variables](http://dev.mysql.com/doc/refman/5.6/en/server-status-variables.html)
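`--max-load` (and `--critical-load`) take a comma-delimited list of `status-variable=threshold` pairs; the Go changes in this commit refer to a `LoadMap` for this purpose. The following is a simplified, illustrative parser and check, not gh-ost's actual implementation; the `statusValue` callback stands in for reading `SHOW GLOBAL STATUS`.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseLoadMap parses "Threads_running=100,Threads_connected=500" into a
// name -> threshold map, the format accepted by --max-load / --critical-load.
func parseLoadMap(spec string) (map[string]int64, error) {
	load := make(map[string]int64)
	if spec == "" {
		return load, nil
	}
	for _, pair := range strings.Split(spec, ",") {
		parts := strings.SplitN(pair, "=", 2)
		if len(parts) != 2 {
			return nil, fmt.Errorf("malformed max-load entry: %q", pair)
		}
		threshold, err := strconv.ParseInt(strings.TrimSpace(parts[1]), 10, 64)
		if err != nil {
			return nil, fmt.Errorf("threshold for %q must be numeric: %v", parts[0], err)
		}
		load[strings.TrimSpace(parts[0])] = threshold
	}
	return load, nil
}

// shouldThrottle reports whether any status variable meets its threshold.
func shouldThrottle(load map[string]int64, statusValue func(string) int64) (bool, string) {
	for name, threshold := range load {
		if statusValue(name) >= threshold {
			return true, name
		}
	}
	return false, ""
}

func main() {
	load, err := parseLoadMap("Threads_running=100,Threads_connected=500")
	if err != nil {
		panic(err)
	}
	// Hypothetical current status values.
	status := map[string]int64{"Threads_running": 120, "Threads_connected": 80}
	if throttle, reason := shouldThrottle(load, func(name string) int64 { return status[name] }); throttle {
		fmt.Printf("throttling: %s exceeded its threshold\n", reason)
	}
}
```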
#### Throttle query

@@ -80,7 +80,7 @@ In addition to the above, you are able to take control and throttle the operatio

Any single factor in the above that suggests the migration should throttle - causes throttling. That is, once some component decides to throttle, you cannot override it; you cannot force continued execution of the migration.

`gh-ost` collects different throttle-related metrics at different times, independently. It asynchronously reads the collected metrics and checks if they satisfy conditions/threasholds.
`gh-ost` collects different throttle-related metrics at different times, independently. It asynchronously reads the collected metrics and checks if they satisfy conditions/thresholds.

The first check to suggest throttling stops the check; the status message will note the reason for throttling as the first satisfied check.
@@ -97,7 +97,7 @@ Copy: 0/2915 0.0%; Applied: 0; Backlog: 0/100; Elapsed: 42s(copy), 42s(total); s

Throttling time is limited by the availability of the binary logs. When throttling begins, `gh-ost` suspends reading the binary logs, and expects to resume reading from same binary log where it paused.

Your availability of binary logs is typically determined by the [expire_logs_days](dev.mysql.com/doc/refman/5.6/en/server-system-variables.html#sysvar_expire_logs_days) variable. If you have `expire_logs_days = 10` (or check `select @@global.expire_logs_days`), then you should be able to throttle for up to `10` days.
Your availability of binary logs is typically determined by the [expire_logs_days](https://dev.mysql.com/doc/refman/5.6/en/server-system-variables.html#sysvar_expire_logs_days) variable. If you have `expire_logs_days = 10` (or check `select @@global.expire_logs_days`), then you should be able to throttle for up to `10` days.

Having said that, throttling for so long is far fetching, in that the `gh-ost` process itself must be kept alive during that time; and the amount of binary logs to process once it resumes will potentially take days to replay.
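As a small illustration of the `expire_logs_days` check mentioned above, the sketch below queries the variable and turns it into a rough upper bound on how long throttling can safely last. The driver import and connection string are assumptions; this is not part of gh-ost.

```go
package main

import (
	"database/sql"
	"fmt"
	"time"

	_ "github.com/go-sql-driver/mysql" // any MySQL driver works for this sketch
)

// maxThrottleWindow reads expire_logs_days and returns the rough upper bound
// on how long binary logs -- and therefore a throttled migration -- survive.
// A value of 0 means binary logs are not expired by time at all.
func maxThrottleWindow(db *sql.DB) (time.Duration, error) {
	var days int64
	if err := db.QueryRow(`select @@global.expire_logs_days`).Scan(&days); err != nil {
		return 0, err
	}
	return time.Duration(days) * 24 * time.Hour, nil
}

func main() {
	db, err := sql.Open("mysql", "gh-ost:password@tcp(replica.example.com:3306)/")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	window, err := maxThrottleWindow(db)
	if err != nil {
		panic(err)
	}
	if window == 0 {
		fmt.Println("expire_logs_days is 0: binary logs are not expired by time")
	} else {
		fmt.Printf("binary logs are retained for about %v; do not throttle longer than that\n", window)
	}
}
```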
@@ -1,4 +1,4 @@
# Understading gh-ost output
# Understanding gh-ost output

`gh-ost` attempts to be verbose to the point where you really know what it's doing, without completely spamming you.
You can control output levels:
@@ -33,7 +33,7 @@ You're always able to actively begin [throttling](throttle.md). Just touch the `

### What if my replicas don't use binary logs?

If the master is running Row Based Replication (RBR) - point `gh-ost` to the master, and specify `--allow-on-master`. See [cheatsheets](cheatsheets.md)
If the master is running Row Based Replication (RBR) - point `gh-ost` to the master, and specify `--allow-on-master`. See [cheatsheets](cheatsheet.md)

If the master is running Statement Based Replication (SBR) - you have no alternative but to reconfigure a replica with:
@ -70,6 +70,7 @@ type MigrationContext struct {
|
||||
ApproveRenamedColumns bool
|
||||
SkipRenamedColumns bool
|
||||
IsTungsten bool
|
||||
DiscardForeignKeys bool
|
||||
|
||||
config ContextConfig
|
||||
configMutex *sync.Mutex
|
||||
@ -90,8 +91,10 @@ type MigrationContext struct {
|
||||
ThrottleCommandedByUser int64
|
||||
maxLoad LoadMap
|
||||
criticalLoad LoadMap
|
||||
CriticalLoadIntervalMilliseconds int64
|
||||
PostponeCutOverFlagFile string
|
||||
CutOverLockTimeoutSeconds int64
|
||||
ForceNamedCutOverCommand bool
|
||||
PanicFlagFile string
|
||||
HooksPath string
|
||||
HooksHintMessage string
|
||||
@ -111,6 +114,7 @@ type MigrationContext struct {
|
||||
|
||||
Hostname string
|
||||
AssumeMasterHostname string
|
||||
ApplierTimeZone string
|
||||
TableEngine string
|
||||
RowsEstimate int64
|
||||
RowsDeltaEstimate int64
|
||||
@ -140,6 +144,8 @@ type MigrationContext struct {
|
||||
CountingRowsFlag int64
|
||||
AllEventsUpToLockProcessedInjectedFlag int64
|
||||
CleanupImminentFlag int64
|
||||
UserCommandedUnpostponeFlag int64
|
||||
PanicAbort chan error
|
||||
|
||||
OriginalTableColumns *sql.ColumnList
|
||||
OriginalTableUniqueKeys [](*sql.UniqueKey)
|
||||
@ -192,6 +198,7 @@ func newMigrationContext() *MigrationContext {
|
||||
configMutex: &sync.Mutex{},
|
||||
pointOfInterestTimeMutex: &sync.Mutex{},
|
||||
ColumnRenameMap: make(map[string]string),
|
||||
PanicAbort: make(chan error),
|
||||
}
|
||||
}
|
||||
|
||||
@ -232,6 +239,28 @@ func (this *MigrationContext) RequiresBinlogFormatChange() bool {
|
||||
return this.OriginalBinlogFormat != "ROW"
|
||||
}
|
||||
|
||||
// GetApplierHostname is a safe access method to the applier hostname
|
||||
func (this *MigrationContext) GetApplierHostname() string {
|
||||
if this.ApplierConnectionConfig == nil {
|
||||
return ""
|
||||
}
|
||||
if this.ApplierConnectionConfig.ImpliedKey == nil {
|
||||
return ""
|
||||
}
|
||||
return this.ApplierConnectionConfig.ImpliedKey.Hostname
|
||||
}
|
||||
|
||||
// GetInspectorHostname is a safe access method to the inspector hostname
|
||||
func (this *MigrationContext) GetInspectorHostname() string {
|
||||
if this.InspectorConnectionConfig == nil {
|
||||
return ""
|
||||
}
|
||||
if this.InspectorConnectionConfig.ImpliedKey == nil {
|
||||
return ""
|
||||
}
|
||||
return this.InspectorConnectionConfig.ImpliedKey.Hostname
|
||||
}
|
||||
|
||||
// InspectorIsAlsoApplier is `true` when the both inspector and applier are the
|
||||
// same database instance. This would be true when running directly on master or when
|
||||
// testing on replica.
|
||||
|
@ -61,6 +61,7 @@ func main() {
|
||||
flag.BoolVar(&migrationContext.ApproveRenamedColumns, "approve-renamed-columns", false, "in case your `ALTER` statement renames columns, gh-ost will note that and offer its interpretation of the rename. By default gh-ost does not proceed to execute. This flag approves that gh-ost's interpretation si correct")
|
||||
flag.BoolVar(&migrationContext.SkipRenamedColumns, "skip-renamed-columns", false, "in case your `ALTER` statement renames columns, gh-ost will note that and offer its interpretation of the rename. By default gh-ost does not proceed to execute. This flag tells gh-ost to skip the renamed columns, i.e. to treat what gh-ost thinks are renamed columns as unrelated columns. NOTE: you may lose column data")
|
||||
flag.BoolVar(&migrationContext.IsTungsten, "tungsten", false, "explicitly let gh-ost know that you are running on a tungsten-replication based topology (you are likely to also provide --assume-master-host)")
|
||||
flag.BoolVar(&migrationContext.DiscardForeignKeys, "discard-foreign-keys", false, "DANGER! This flag will migrate a table that has foreign keys and will NOT create foreign keys on the ghost table, thus your altered table will have NO foreign keys. This is useful for intentional dropping of foreign keys")
|
||||
|
||||
executeFlag := flag.Bool("execute", false, "actually execute the alter & migrate the table. Default is noop: do some tests and exit")
|
||||
flag.BoolVar(&migrationContext.TestOnReplica, "test-on-replica", false, "Have the migration run on a replica, not on the master. At the end of migration replication is stopped, and tables are swapped and immediately swap-revert. Replication remains stopped and you can compare the two tables for building trust")
|
||||
@ -71,6 +72,7 @@ func main() {
|
||||
flag.BoolVar(&migrationContext.InitiallyDropOldTable, "initially-drop-old-table", false, "Drop a possibly existing OLD table (remains from a previous run?) before beginning operation. Default is to panic and abort if such table exists")
|
||||
flag.BoolVar(&migrationContext.InitiallyDropGhostTable, "initially-drop-ghost-table", false, "Drop a possibly existing Ghost table (remains from a previous run?) before beginning operation. Default is to panic and abort if such table exists")
|
||||
cutOver := flag.String("cut-over", "atomic", "choose cut-over type (default|atomic, two-step)")
|
||||
flag.BoolVar(&migrationContext.ForceNamedCutOverCommand, "force-named-cut-over", false, "When true, the 'unpostpone|cut-over' interactive command must name the migrated table")
|
||||
|
||||
flag.BoolVar(&migrationContext.SwitchToRowBinlogFormat, "switch-to-rbr", false, "let this tool automatically switch binary log format to 'ROW' on the replica, if needed. The format will NOT be switched back. I'm too scared to do that, and wish to protect you if you happen to execute another migration while this one is running")
|
||||
flag.BoolVar(&migrationContext.AssumeRBR, "assume-rbr", false, "set to 'true' when you know for certain your server uses 'ROW' binlog_format. gh-ost is unable to tell, event after reading binlog_format, whether the replication process does indeed use 'ROW', and restarts replication to be certain RBR setting is applied. Such operation requires SUPER privileges which you might not have. Setting this flag avoids restarting replication and you can proceed to use gh-ost without SUPER privileges")
|
||||
@ -97,7 +99,8 @@ func main() {
|
||||
flag.StringVar(&migrationContext.HooksHintMessage, "hooks-hint", "", "arbitrary message to be injected to hooks via GH_OST_HOOKS_HINT, for your convenience")
|
||||
|
||||
maxLoad := flag.String("max-load", "", "Comma delimited status-name=threshold. e.g: 'Threads_running=100,Threads_connected=500'. When status exceeds threshold, app throttles writes")
|
||||
criticalLoad := flag.String("critical-load", "", "Comma delimited status-name=threshold, same format as `--max-load`. When status exceeds threshold, app panics and quits")
|
||||
criticalLoad := flag.String("critical-load", "", "Comma delimited status-name=threshold, same format as --max-load. When status exceeds threshold, app panics and quits")
|
||||
flag.Int64Var(&migrationContext.CriticalLoadIntervalMilliseconds, "critical-load-interval-millis", 0, "When 0, migration immediately bails out upon meeting critical-load. When non-zero, a second check is done after given interval, and migration only bails out if 2nd check still meets critical load")
|
||||
quiet := flag.Bool("quiet", false, "quiet")
|
||||
verbose := flag.Bool("verbose", false, "verbose")
|
||||
debug := flag.Bool("debug", false, "debug mode (very verbose)")
|
||||
|
@ -59,6 +59,9 @@ func (this *Applier) InitDBConnections() (err error) {
|
||||
if err := this.validateConnection(this.singletonDB); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := this.validateAndReadTimeZone(); err != nil {
|
||||
return err
|
||||
}
|
||||
if impliedKey, err := mysql.GetInstanceKey(this.db); err != nil {
|
||||
return err
|
||||
} else {
|
||||
@ -81,6 +84,17 @@ func (this *Applier) validateConnection(db *gosql.DB) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// validateAndReadTimeZone potentially reads server time-zone
|
||||
func (this *Applier) validateAndReadTimeZone() error {
|
||||
query := `select @@global.time_zone`
|
||||
if err := this.db.QueryRow(query).Scan(&this.migrationContext.ApplierTimeZone); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
log.Infof("will use time_zone='%s' on applier", this.migrationContext.ApplierTimeZone)
|
||||
return nil
|
||||
}
|
||||
|
||||
// showTableStatus returns the output of `show table status like '...'` command
|
||||
func (this *Applier) showTableStatus(tableName string) (rowMap sqlutils.RowMap) {
|
||||
rowMap = nil
|
||||
@ -414,7 +428,29 @@ func (this *Applier) ApplyIterationInsertQuery() (chunkSize int64, rowsAffected
|
||||
if err != nil {
|
||||
return chunkSize, rowsAffected, duration, err
|
||||
}
|
||||
sqlResult, err := sqlutils.Exec(this.db, query, explodedArgs...)
|
||||
|
||||
sqlResult, err := func() (gosql.Result, error) {
|
||||
tx, err := this.db.Begin()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
sessionQuery := fmt.Sprintf(`SET
|
||||
SESSION time_zone = '%s',
|
||||
sql_mode = CONCAT(@@session.sql_mode, ',STRICT_ALL_TABLES')
|
||||
`, this.migrationContext.ApplierTimeZone)
|
||||
if _, err := tx.Exec(sessionQuery); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
result, err := tx.Exec(query, explodedArgs...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if err := tx.Commit(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return result, nil
|
||||
}()
|
||||
|
||||
if err != nil {
|
||||
return chunkSize, rowsAffected, duration, err
|
||||
}
|
||||
@ -871,10 +907,11 @@ func (this *Applier) ApplyDMLEventQuery(dmlEvent *binlog.BinlogDMLEvent) error {
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if _, err := tx.Exec(`SET
|
||||
sessionQuery := `SET
|
||||
SESSION time_zone = '+00:00',
|
||||
sql_mode = CONCAT(@@session.sql_mode, ',STRICT_ALL_TABLES')
|
||||
`); err != nil {
|
||||
`
|
||||
if _, err := tx.Exec(sessionQuery); err != nil {
|
||||
return err
|
||||
}
|
||||
if _, err := tx.Exec(query, args...); err != nil {
|
||||
|
@@ -59,8 +59,8 @@ func (this *HooksExecutor) applyEnvironmentVairables(extraVariables ...string) [
env = append(env, fmt.Sprintf("GH_OST_ESTIMATED_ROWS=%d", estimatedRows))
totalRowsCopied := this.migrationContext.GetTotalRowsCopied()
env = append(env, fmt.Sprintf("GH_OST_COPIED_ROWS=%d", totalRowsCopied))
env = append(env, fmt.Sprintf("GH_OST_MIGRATED_HOST=%s", this.migrationContext.ApplierConnectionConfig.ImpliedKey.Hostname))
env = append(env, fmt.Sprintf("GH_OST_INSPECTED_HOST=%s", this.migrationContext.InspectorConnectionConfig.ImpliedKey.Hostname))
env = append(env, fmt.Sprintf("GH_OST_MIGRATED_HOST=%s", this.migrationContext.GetApplierHostname()))
env = append(env, fmt.Sprintf("GH_OST_INSPECTED_HOST=%s", this.migrationContext.GetInspectorHostname()))
env = append(env, fmt.Sprintf("GH_OST_EXECUTING_HOST=%s", this.migrationContext.Hostname))
env = append(env, fmt.Sprintf("GH_OST_HOOKS_HINT=%s", this.migrationContext.HooksHintMessage))
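The hunk above switches the hook environment variables to the new nil-safe hostname accessors. Since gh-ost hooks are plain executables that receive these `GH_OST_*` variables, a minimal, hypothetical hook (written in Go here purely for illustration) could read them like this:

```go
package main

import (
	"fmt"
	"os"
)

// A gh-ost hook is any executable; gh-ost exports context through
// GH_OST_* environment variables such as the ones set in the hunk above.
func main() {
	fmt.Printf("migrated host: %s\n", os.Getenv("GH_OST_MIGRATED_HOST"))
	fmt.Printf("inspected host: %s\n", os.Getenv("GH_OST_INSPECTED_HOST"))
	fmt.Printf("rows copied: %s of %s (estimated)\n",
		os.Getenv("GH_OST_COPIED_ROWS"), os.Getenv("GH_OST_ESTIMATED_ROWS"))
	if hint := os.Getenv("GH_OST_HOOKS_HINT"); hint != "" {
		fmt.Printf("hint: %s\n", hint)
	}
}
```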
@ -63,7 +63,7 @@ func (this *Inspector) ValidateOriginalTable() (err error) {
|
||||
if err := this.validateTable(); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := this.validateTableForeignKeys(); err != nil {
|
||||
if err := this.validateTableForeignKeys(this.migrationContext.DiscardForeignKeys); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := this.validateTableTriggers(); err != nil {
|
||||
@ -138,6 +138,20 @@ func (this *Inspector) InspectOriginalAndGhostTables() (err error) {
|
||||
this.applyColumnTypes(this.migrationContext.DatabaseName, this.migrationContext.OriginalTableName, this.migrationContext.OriginalTableColumns, this.migrationContext.SharedColumns)
|
||||
this.applyColumnTypes(this.migrationContext.DatabaseName, this.migrationContext.GetGhostTableName(), this.migrationContext.GhostTableColumns, this.migrationContext.MappedSharedColumns)
|
||||
|
||||
for i := range this.migrationContext.SharedColumns.Columns() {
|
||||
column := this.migrationContext.SharedColumns.Columns()[i]
|
||||
mappedColumn := this.migrationContext.MappedSharedColumns.Columns()[i]
|
||||
if column.Name == mappedColumn.Name && column.Type == sql.DateTimeColumnType && mappedColumn.Type == sql.TimestampColumnType {
|
||||
this.migrationContext.MappedSharedColumns.SetConvertDatetimeToTimestamp(column.Name, this.migrationContext.ApplierTimeZone)
|
||||
}
|
||||
}
|
||||
|
||||
for _, column := range this.migrationContext.UniqueKey.Columns.Columns() {
|
||||
if this.migrationContext.MappedSharedColumns.HasTimezoneConversion(column.Name) {
|
||||
return fmt.Errorf("No support at this time for converting a column from DATETIME to TIMESTAMP that is also part of the chosen unique key. Column: %s, key: %s", column.Name, this.migrationContext.UniqueKey.Name)
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
@ -313,7 +327,7 @@ func (this *Inspector) validateLogSlaveUpdates() error {
|
||||
if err := this.db.QueryRow(query).Scan(&logSlaveUpdates); err != nil {
|
||||
return err
|
||||
}
|
||||
if !logSlaveUpdates && !this.migrationContext.InspectorIsAlsoApplier() {
|
||||
if !logSlaveUpdates && !this.migrationContext.InspectorIsAlsoApplier() && !this.migrationContext.IsTungsten {
|
||||
return fmt.Errorf("%s:%d must have log_slave_updates enabled", this.connectionConfig.Key.Hostname, this.connectionConfig.Key.Port)
|
||||
}
|
||||
|
||||
@ -349,9 +363,11 @@ func (this *Inspector) validateTable() error {
|
||||
}
|
||||
|
||||
// validateTableForeignKeys makes sure no foreign keys exist on the migrated table
|
||||
func (this *Inspector) validateTableForeignKeys() error {
|
||||
func (this *Inspector) validateTableForeignKeys(allowChildForeignKeys bool) error {
|
||||
query := `
|
||||
SELECT TABLE_SCHEMA, TABLE_NAME
|
||||
SELECT
|
||||
SUM(REFERENCED_TABLE_NAME IS NOT NULL AND TABLE_SCHEMA=? AND TABLE_NAME=?) as num_child_side_fk,
|
||||
SUM(REFERENCED_TABLE_NAME IS NOT NULL AND REFERENCED_TABLE_SCHEMA=? AND REFERENCED_TABLE_NAME=?) as num_parent_side_fk
|
||||
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
|
||||
WHERE
|
||||
REFERENCED_TABLE_NAME IS NOT NULL
|
||||
@ -359,24 +375,34 @@ func (this *Inspector) validateTableForeignKeys() error {
|
||||
OR (REFERENCED_TABLE_SCHEMA=? AND REFERENCED_TABLE_NAME=?)
|
||||
)
|
||||
`
|
||||
numForeignKeys := 0
|
||||
err := sqlutils.QueryRowsMap(this.db, query, func(rowMap sqlutils.RowMap) error {
|
||||
fkSchema := rowMap.GetString("TABLE_SCHEMA")
|
||||
fkTable := rowMap.GetString("TABLE_NAME")
|
||||
log.Infof("Found foreign key on %s.%s related to %s.%s", sql.EscapeName(fkSchema), sql.EscapeName(fkTable), sql.EscapeName(this.migrationContext.DatabaseName), sql.EscapeName(this.migrationContext.OriginalTableName))
|
||||
numForeignKeys++
|
||||
numParentForeignKeys := 0
|
||||
numChildForeignKeys := 0
|
||||
err := sqlutils.QueryRowsMap(this.db, query, func(m sqlutils.RowMap) error {
|
||||
numChildForeignKeys = m.GetInt("num_child_side_fk")
|
||||
numParentForeignKeys = m.GetInt("num_parent_side_fk")
|
||||
return nil
|
||||
},
|
||||
this.migrationContext.DatabaseName,
|
||||
this.migrationContext.OriginalTableName,
|
||||
this.migrationContext.DatabaseName,
|
||||
this.migrationContext.OriginalTableName,
|
||||
this.migrationContext.DatabaseName,
|
||||
this.migrationContext.OriginalTableName,
|
||||
this.migrationContext.DatabaseName,
|
||||
this.migrationContext.OriginalTableName,
|
||||
)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if numForeignKeys > 0 {
|
||||
return log.Errorf("Found %d foreign keys related to %s.%s. Foreign keys are not supported. Bailing out", numForeignKeys, sql.EscapeName(this.migrationContext.DatabaseName), sql.EscapeName(this.migrationContext.OriginalTableName))
|
||||
if numParentForeignKeys > 0 {
|
||||
return log.Errorf("Found %d parent-side foreign keys on %s.%s. Parent-side foreign keys are not supported. Bailing out", numParentForeignKeys, sql.EscapeName(this.migrationContext.DatabaseName), sql.EscapeName(this.migrationContext.OriginalTableName))
|
||||
}
|
||||
if numChildForeignKeys > 0 {
|
||||
if allowChildForeignKeys {
|
||||
log.Debugf("Foreign keys found and will be dropped, as per given --discard-foreign-keys flag")
|
||||
return nil
|
||||
}
|
||||
return log.Errorf("Found %d child-side foreign keys on %s.%s. Child-side foreign keys are not supported. Bailing out", numChildForeignKeys, sql.EscapeName(this.migrationContext.DatabaseName), sql.EscapeName(this.migrationContext.OriginalTableName))
|
||||
}
|
||||
log.Debugf("Validated no foreign keys exist on table")
|
||||
return nil
|
||||
@ -490,11 +516,22 @@ func (this *Inspector) applyColumnTypes(databaseName, tableName string, columnsL
|
||||
`
|
||||
err := sqlutils.QueryRowsMap(this.db, query, func(m sqlutils.RowMap) error {
|
||||
columnName := m.GetString("COLUMN_NAME")
|
||||
if strings.Contains(m.GetString("COLUMN_TYPE"), "unsigned") {
|
||||
columnType := m.GetString("COLUMN_TYPE")
|
||||
if strings.Contains(columnType, "unsigned") {
|
||||
for _, columnsList := range columnsLists {
|
||||
columnsList.SetUnsigned(columnName)
|
||||
}
|
||||
}
|
||||
if strings.Contains(columnType, "timestamp") {
|
||||
for _, columnsList := range columnsLists {
|
||||
columnsList.GetColumn(columnName).Type = sql.TimestampColumnType
|
||||
}
|
||||
}
|
||||
if strings.Contains(columnType, "datetime") {
|
||||
for _, columnsList := range columnsLists {
|
||||
columnsList.GetColumn(columnName).Type = sql.DateTimeColumnType
|
||||
}
|
||||
}
|
||||
if charset := m.GetString("CHARACTER_SET_NAME"); charset != "" {
|
||||
for _, columnsList := range columnsLists {
|
||||
columnsList.SetCharset(columnName, charset)
|
||||
|
@ -6,14 +6,11 @@
|
||||
package logic
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"fmt"
|
||||
"io"
|
||||
"math"
|
||||
"os"
|
||||
"os/signal"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync/atomic"
|
||||
"syscall"
|
||||
"time"
|
||||
@ -42,7 +39,8 @@ const (
|
||||
type PrintStatusRule int
|
||||
|
||||
const (
|
||||
HeuristicPrintStatusRule PrintStatusRule = iota
|
||||
NoPrintStatusRule PrintStatusRule = iota
|
||||
HeuristicPrintStatusRule = iota
|
||||
ForcePrintStatusRule = iota
|
||||
ForcePrintStatusOnlyRule = iota
|
||||
ForcePrintStatusAndHintRule = iota
|
||||
@ -63,11 +61,9 @@ type Migrator struct {
|
||||
tablesInPlace chan bool
|
||||
rowCopyComplete chan bool
|
||||
allEventsUpToLockProcessed chan bool
|
||||
panicAbort chan error
|
||||
|
||||
rowCopyCompleteFlag int64
|
||||
inCutOverCriticalActionFlag int64
|
||||
userCommandedUnpostponeFlag int64
|
||||
// copyRowsQueue should not be buffered; if buffered some non-damaging but
|
||||
// excessive work happens at the end of the iteration as new copy-jobs arrive befroe realizing the copy is complete
|
||||
copyRowsQueue chan tableWriteFunc
|
||||
@ -84,7 +80,6 @@ func NewMigrator() *Migrator {
|
||||
firstThrottlingCollected: make(chan bool, 1),
|
||||
rowCopyComplete: make(chan bool),
|
||||
allEventsUpToLockProcessed: make(chan bool),
|
||||
panicAbort: make(chan error),
|
||||
|
||||
copyRowsQueue: make(chan tableWriteFunc),
|
||||
applyEventsQueue: make(chan tableWriteFunc, applyEventsQueueBuffer),
|
||||
@ -148,7 +143,7 @@ func (this *Migrator) retryOperation(operation func() error, notFatalHint ...boo
|
||||
// there's an error. Let's try again.
|
||||
}
|
||||
if len(notFatalHint) == 0 {
|
||||
this.panicAbort <- err
|
||||
this.migrationContext.PanicAbort <- err
|
||||
}
|
||||
return err
|
||||
}
|
||||
@ -217,7 +212,7 @@ func (this *Migrator) onChangelogStateEvent(dmlEvent *binlog.BinlogDMLEvent) (er
|
||||
|
||||
// listenOnPanicAbort aborts on abort request
|
||||
func (this *Migrator) listenOnPanicAbort() {
|
||||
err := <-this.panicAbort
|
||||
err := <-this.migrationContext.PanicAbort
|
||||
log.Fatale(err)
|
||||
}
|
||||
|
||||
@ -385,7 +380,7 @@ func (this *Migrator) cutOver() (err error) {
|
||||
if this.migrationContext.PostponeCutOverFlagFile == "" {
|
||||
return false, nil
|
||||
}
|
||||
if atomic.LoadInt64(&this.userCommandedUnpostponeFlag) > 0 {
|
||||
if atomic.LoadInt64(&this.migrationContext.UserCommandedUnpostponeFlag) > 0 {
|
||||
return false, nil
|
||||
}
|
||||
if base.FileExists(this.migrationContext.PostponeCutOverFlagFile) {
|
||||
@ -584,150 +579,12 @@ func (this *Migrator) atomicCutOver() (err error) {
|
||||
return nil
|
||||
}
|
||||
|
||||
// onServerCommand responds to a user's interactive command
|
||||
func (this *Migrator) onServerCommand(command string, writer *bufio.Writer) (err error) {
|
||||
defer writer.Flush()
|
||||
|
||||
tokens := strings.SplitN(command, "=", 2)
|
||||
command = strings.TrimSpace(tokens[0])
|
||||
arg := ""
|
||||
if len(tokens) > 1 {
|
||||
arg = strings.TrimSpace(tokens[1])
|
||||
}
|
||||
|
||||
throttleHint := "# Note: you may only throttle for as long as your binary logs are not purged\n"
|
||||
|
||||
if err := this.hooksExecutor.onInteractiveCommand(command); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
switch command {
|
||||
case "help":
|
||||
{
|
||||
fmt.Fprintln(writer, `available commands:
|
||||
status # Print a detailed status message
|
||||
sup # Print a short status message
|
||||
chunk-size=<newsize> # Set a new chunk-size
|
||||
nice-ratio=<ratio> # Set a new nice-ratio, immediate sleep after each row-copy operation, float (examples: 0 is agrressive, 0.7 adds 70% runtime, 1.0 doubles runtime, 2.0 triples runtime, ...)
|
||||
critical-load=<load> # Set a new set of max-load thresholds
|
||||
max-lag-millis=<max-lag> # Set a new replication lag threshold
|
||||
replication-lag-query=<query> # Set a new query that determines replication lag (no quotes)
|
||||
max-load=<load> # Set a new set of max-load thresholds
|
||||
throttle-query=<query> # Set a new throttle-query (no quotes)
|
||||
throttle-control-replicas=<replicas> # Set a new comma delimited list of throttle control replicas
|
||||
throttle # Force throttling
|
||||
no-throttle # End forced throttling (other throttling may still apply)
|
||||
unpostpone # Bail out a cut-over postpone; proceed to cut-over
|
||||
panic # panic and quit without cleanup
|
||||
help # This message
|
||||
`)
|
||||
}
|
||||
case "sup":
|
||||
this.printStatus(ForcePrintStatusOnlyRule, writer)
|
||||
case "info", "status":
|
||||
this.printStatus(ForcePrintStatusAndHintRule, writer)
|
||||
case "chunk-size":
|
||||
{
|
||||
if chunkSize, err := strconv.Atoi(arg); err != nil {
|
||||
fmt.Fprintf(writer, "%s\n", err.Error())
|
||||
return log.Errore(err)
|
||||
} else {
|
||||
this.migrationContext.SetChunkSize(int64(chunkSize))
|
||||
this.printStatus(ForcePrintStatusAndHintRule, writer)
|
||||
}
|
||||
}
|
||||
case "max-lag-millis":
|
||||
{
|
||||
if maxLagMillis, err := strconv.Atoi(arg); err != nil {
|
||||
fmt.Fprintf(writer, "%s\n", err.Error())
|
||||
return log.Errore(err)
|
||||
} else {
|
||||
this.migrationContext.SetMaxLagMillisecondsThrottleThreshold(int64(maxLagMillis))
|
||||
this.printStatus(ForcePrintStatusAndHintRule, writer)
|
||||
}
|
||||
}
|
||||
case "replication-lag-query":
|
||||
{
|
||||
this.migrationContext.SetReplicationLagQuery(arg)
|
||||
this.printStatus(ForcePrintStatusAndHintRule, writer)
|
||||
}
|
||||
case "nice-ratio":
|
||||
{
|
||||
if niceRatio, err := strconv.ParseFloat(arg, 64); err != nil {
|
||||
fmt.Fprintf(writer, "%s\n", err.Error())
|
||||
return log.Errore(err)
|
||||
} else {
|
||||
this.migrationContext.SetNiceRatio(niceRatio)
|
||||
this.printStatus(ForcePrintStatusAndHintRule, writer)
|
||||
}
|
||||
}
|
||||
case "max-load":
|
||||
{
|
||||
if err := this.migrationContext.ReadMaxLoad(arg); err != nil {
|
||||
fmt.Fprintf(writer, "%s\n", err.Error())
|
||||
return log.Errore(err)
|
||||
}
|
||||
this.printStatus(ForcePrintStatusAndHintRule, writer)
|
||||
}
|
||||
case "critical-load":
|
||||
{
|
||||
if err := this.migrationContext.ReadCriticalLoad(arg); err != nil {
|
||||
fmt.Fprintf(writer, "%s\n", err.Error())
|
||||
return log.Errore(err)
|
||||
}
|
||||
this.printStatus(ForcePrintStatusAndHintRule, writer)
|
||||
}
|
||||
case "throttle-query":
|
||||
{
|
||||
this.migrationContext.SetThrottleQuery(arg)
|
||||
fmt.Fprintf(writer, throttleHint)
|
||||
this.printStatus(ForcePrintStatusAndHintRule, writer)
|
||||
}
|
||||
case "throttle-control-replicas":
|
||||
{
|
||||
if err := this.migrationContext.ReadThrottleControlReplicaKeys(arg); err != nil {
|
||||
fmt.Fprintf(writer, "%s\n", err.Error())
|
||||
return log.Errore(err)
|
||||
}
|
||||
fmt.Fprintf(writer, "%s\n", this.migrationContext.GetThrottleControlReplicaKeys().ToCommaDelimitedList())
|
||||
this.printStatus(ForcePrintStatusAndHintRule, writer)
|
||||
}
|
||||
case "throttle", "pause", "suspend":
|
||||
{
|
||||
atomic.StoreInt64(&this.migrationContext.ThrottleCommandedByUser, 1)
|
||||
fmt.Fprintf(writer, throttleHint)
|
||||
this.printStatus(ForcePrintStatusAndHintRule, writer)
|
||||
}
|
||||
case "no-throttle", "unthrottle", "resume", "continue":
|
||||
{
|
||||
atomic.StoreInt64(&this.migrationContext.ThrottleCommandedByUser, 0)
|
||||
}
|
||||
case "unpostpone", "no-postpone", "cut-over":
|
||||
{
|
||||
if atomic.LoadInt64(&this.migrationContext.IsPostponingCutOver) > 0 {
|
||||
atomic.StoreInt64(&this.userCommandedUnpostponeFlag, 1)
|
||||
fmt.Fprintf(writer, "Unpostponed\n")
|
||||
} else {
|
||||
fmt.Fprintf(writer, "You may only invoke this when gh-ost is actively postponing migration. At this time it is not.\n")
|
||||
}
|
||||
}
|
||||
case "panic":
|
||||
{
|
||||
err := fmt.Errorf("User commanded 'panic'. I will now panic, without cleanup. PANIC!")
|
||||
fmt.Fprintf(writer, "%s\n", err.Error())
|
||||
this.panicAbort <- err
|
||||
}
|
||||
default:
|
||||
err = fmt.Errorf("Unknown command: %s", command)
|
||||
fmt.Fprintf(writer, "%s\n", err.Error())
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// initiateServer begins listening on unix socket/tcp for incoming interactive commands
|
||||
func (this *Migrator) initiateServer() (err error) {
|
||||
this.server = NewServer(this.onServerCommand)
|
||||
var f printStatusFunc = func(rule PrintStatusRule, writer io.Writer) {
|
||||
this.printStatus(rule, writer)
|
||||
}
|
||||
this.server = NewServer(this.hooksExecutor, f)
|
||||
if err := this.server.BindSocketFile(); err != nil {
|
||||
return err
|
||||
}
|
||||
@ -759,16 +616,22 @@ func (this *Migrator) initiateInspector() (err error) {
|
||||
}
|
||||
// So far so good, table is accessible and valid.
|
||||
// Let's get master connection config
|
||||
if this.migrationContext.AssumeMasterHostname == "" {
|
||||
// No forced master host; detect master
|
||||
if this.migrationContext.ApplierConnectionConfig, err = this.inspector.getMasterConnectionConfig(); err != nil {
|
||||
return err
|
||||
}
|
||||
if this.migrationContext.AssumeMasterHostname != "" {
|
||||
if key, err := mysql.ParseRawInstanceKeyLoose(this.migrationContext.AssumeMasterHostname); err != nil {
|
||||
return err
|
||||
log.Infof("Master found to be %+v", *this.migrationContext.ApplierConnectionConfig.ImpliedKey)
|
||||
} else {
|
||||
this.migrationContext.ApplierConnectionConfig.Key = *key
|
||||
// Forced master host.
|
||||
key, err := mysql.ParseRawInstanceKeyLoose(this.migrationContext.AssumeMasterHostname)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
this.migrationContext.ApplierConnectionConfig = this.migrationContext.InspectorConnectionConfig.DuplicateCredentials(*key)
|
||||
log.Infof("Master forced to be %+v", *this.migrationContext.ApplierConnectionConfig.ImpliedKey)
|
||||
}
|
||||
// validate configs
|
||||
if this.migrationContext.TestOnReplica || this.migrationContext.MigrateOnReplica {
|
||||
if this.migrationContext.InspectorIsAlsoApplier() {
|
||||
return fmt.Errorf("Instructed to --test-on-replica or --migrate-on-replica, but the server we connect to doesn't seem to be a replica")
|
||||
@ -781,13 +644,12 @@ func (this *Migrator) initiateInspector() (err error) {
|
||||
this.migrationContext.AddThrottleControlReplicaKey(this.migrationContext.InspectorConnectionConfig.Key)
|
||||
}
|
||||
} else if this.migrationContext.InspectorIsAlsoApplier() && !this.migrationContext.AllowedRunningOnMaster {
|
||||
return fmt.Errorf("It seems like this migration attempt to run directly on master. Preferably it would be executed on a replica (and this reduces load from the master). To proceed please provide --allow-on-master")
|
||||
return fmt.Errorf("It seems like this migration attempt to run directly on master. Preferably it would be executed on a replica (and this reduces load from the master). To proceed please provide --allow-on-master. Inspector config=%+v, applier config=%+v", this.migrationContext.InspectorConnectionConfig, this.migrationContext.ApplierConnectionConfig)
|
||||
}
|
||||
if err := this.inspector.validateLogSlaveUpdates(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
log.Infof("Master found to be %+v", *this.migrationContext.ApplierConnectionConfig.ImpliedKey)
|
||||
return nil
|
||||
}
|
||||
|
||||
@ -887,21 +749,26 @@ func (this *Migrator) printMigrationStatusHint(writers ...io.Writer) {
|
||||
// By default the status is written to standard output, but other writers can
|
||||
// be used as well.
|
||||
func (this *Migrator) printStatus(rule PrintStatusRule, writers ...io.Writer) {
|
||||
if rule == NoPrintStatusRule {
|
||||
return
|
||||
}
|
||||
writers = append(writers, os.Stdout)
|
||||
|
||||
elapsedTime := this.migrationContext.ElapsedTime()
|
||||
elapsedSeconds := int64(elapsedTime.Seconds())
|
||||
totalRowsCopied := this.migrationContext.GetTotalRowsCopied()
|
||||
rowsEstimate := atomic.LoadInt64(&this.migrationContext.RowsEstimate) + atomic.LoadInt64(&this.migrationContext.RowsDeltaEstimate)
|
||||
if atomic.LoadInt64(&this.rowCopyCompleteFlag) == 1 {
|
||||
// Done copying rows. The totalRowsCopied value is the de-facto number of rows,
|
||||
// and there is no further need to keep updating the value.
|
||||
rowsEstimate = totalRowsCopied
|
||||
}
|
||||
var progressPct float64
|
||||
if rowsEstimate == 0 {
|
||||
progressPct = 100.0
|
||||
} else {
|
||||
progressPct = 100.0 * float64(totalRowsCopied) / float64(rowsEstimate)
|
||||
}
|
||||
if atomic.LoadInt64(&this.rowCopyCompleteFlag) == 1 {
|
||||
progressPct = 100.0
|
||||
}
|
||||
// Before status, let's see if we should print a nice reminder for what exactly we're doing here.
|
||||
shouldPrintMigrationStatusHint := (elapsedSeconds%600 == 0)
|
||||
if rule == ForcePrintStatusAndHintRule {
|
||||
@ -1005,7 +872,7 @@ func (this *Migrator) initiateStreaming() error {
|
||||
log.Debugf("Beginning streaming")
|
||||
err := this.eventsStreamer.StreamEvents(this.canStopStreaming)
|
||||
if err != nil {
|
||||
this.panicAbort <- err
|
||||
this.migrationContext.PanicAbort <- err
|
||||
}
|
||||
log.Debugf("Done streaming")
|
||||
}()
|
||||
@ -1033,7 +900,7 @@ func (this *Migrator) addDMLEventsListener() error {
|
||||
|
||||
// initiateThrottler kicks in the throttling collection and the throttling checks.
|
||||
func (this *Migrator) initiateThrottler() error {
|
||||
this.throttler = NewThrottler(this.applier, this.inspector, this.panicAbort)
|
||||
this.throttler = NewThrottler(this.applier, this.inspector)
|
||||
|
||||
go this.throttler.initiateThrottlerCollection(this.firstThrottlingCollected)
|
||||
log.Infof("Waiting for first throttle metrics to be collected")
|
||||
|
@ -8,27 +8,33 @@ package logic
|
||||
import (
|
||||
"bufio"
|
||||
"fmt"
|
||||
"io"
|
||||
"net"
|
||||
"os"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync/atomic"
|
||||
|
||||
"github.com/github/gh-ost/go/base"
|
||||
"github.com/outbrain/golib/log"
|
||||
)
|
||||
|
||||
type onCommandFunc func(command string, writer *bufio.Writer) error
|
||||
type printStatusFunc func(PrintStatusRule, io.Writer)
|
||||
|
||||
// Server listens for requests on a socket file or via TCP
|
||||
type Server struct {
|
||||
migrationContext *base.MigrationContext
|
||||
unixListener net.Listener
|
||||
tcpListener net.Listener
|
||||
onCommand onCommandFunc
|
||||
hooksExecutor *HooksExecutor
|
||||
printStatus printStatusFunc
|
||||
}
|
||||
|
||||
func NewServer(onCommand onCommandFunc) *Server {
|
||||
func NewServer(hooksExecutor *HooksExecutor, printStatus printStatusFunc) *Server {
|
||||
return &Server{
|
||||
migrationContext: base.GetMigrationContext(),
|
||||
onCommand: onCommand,
|
||||
hooksExecutor: hooksExecutor,
|
||||
printStatus: printStatus,
|
||||
}
|
||||
}
|
||||
|
||||
@ -94,5 +100,163 @@ func (this *Server) Serve() (err error) {
|
||||
func (this *Server) handleConnection(conn net.Conn) (err error) {
|
||||
defer conn.Close()
|
||||
command, _, err := bufio.NewReader(conn).ReadLine()
|
||||
return this.onCommand(string(command), bufio.NewWriter(conn))
|
||||
return this.onServerCommand(string(command), bufio.NewWriter(conn))
|
||||
}
|
||||
|
||||
// onServerCommand responds to a user's interactive command
|
||||
func (this *Server) onServerCommand(command string, writer *bufio.Writer) (err error) {
|
||||
defer writer.Flush()
|
||||
|
||||
printStatusRule, err := this.applyServerCommand(command, writer)
|
||||
if err == nil {
|
||||
this.printStatus(printStatusRule, writer)
|
||||
} else {
|
||||
fmt.Fprintf(writer, "%s\n", err.Error())
|
||||
}
|
||||
return log.Errore(err)
|
||||
}
|
||||
|
||||
// applyServerCommand parses and executes commands by user
|
||||
func (this *Server) applyServerCommand(command string, writer *bufio.Writer) (printStatusRule PrintStatusRule, err error) {
|
||||
printStatusRule = NoPrintStatusRule
|
||||
|
||||
tokens := strings.SplitN(command, "=", 2)
|
||||
command = strings.TrimSpace(tokens[0])
|
||||
arg := ""
|
||||
if len(tokens) > 1 {
|
||||
arg = strings.TrimSpace(tokens[1])
|
||||
}
|
||||
|
||||
throttleHint := "# Note: you may only throttle for as long as your binary logs are not purged\n"
|
||||
|
||||
if err := this.hooksExecutor.onInteractiveCommand(command); err != nil {
|
||||
return NoPrintStatusRule, err
|
||||
}
|
||||
|
||||
switch command {
|
||||
case "help":
|
||||
{
|
||||
fmt.Fprintln(writer, `available commands:
|
||||
status # Print a detailed status message
|
||||
sup # Print a short status message
|
||||
chunk-size=<newsize> # Set a new chunk-size
|
||||
nice-ratio=<ratio> # Set a new nice-ratio, immediate sleep after each row-copy operation, float (examples: 0 is agrressive, 0.7 adds 70% runtime, 1.0 doubles runtime, 2.0 triples runtime, ...)
|
||||
critical-load=<load> # Set a new set of max-load thresholds
|
||||
max-lag-millis=<max-lag> # Set a new replication lag threshold
|
||||
replication-lag-query=<query> # Set a new query that determines replication lag (no quotes)
|
||||
max-load=<load> # Set a new set of max-load thresholds
|
||||
throttle-query=<query> # Set a new throttle-query (no quotes)
|
||||
throttle-control-replicas=<replicas> # Set a new comma delimited list of throttle control replicas
|
||||
throttle # Force throttling
|
||||
no-throttle # End forced throttling (other throttling may still apply)
|
||||
unpostpone # Bail out a cut-over postpone; proceed to cut-over
|
||||
panic # panic and quit without cleanup
|
||||
help # This message
|
||||
`)
|
||||
}
|
||||
case "sup":
|
||||
return ForcePrintStatusOnlyRule, nil
|
||||
case "info", "status":
|
||||
return ForcePrintStatusAndHintRule, nil
|
||||
case "chunk-size":
|
||||
{
|
||||
if chunkSize, err := strconv.Atoi(arg); err != nil {
|
||||
return NoPrintStatusRule, err
|
||||
} else {
|
||||
this.migrationContext.SetChunkSize(int64(chunkSize))
|
||||
return ForcePrintStatusAndHintRule, nil
|
||||
}
|
||||
}
|
||||
case "max-lag-millis":
|
||||
{
|
||||
if maxLagMillis, err := strconv.Atoi(arg); err != nil {
|
||||
return NoPrintStatusRule, err
|
||||
} else {
|
||||
this.migrationContext.SetMaxLagMillisecondsThrottleThreshold(int64(maxLagMillis))
|
||||
return ForcePrintStatusAndHintRule, nil
|
||||
}
|
||||
}
|
||||
case "replication-lag-query":
|
||||
{
|
||||
this.migrationContext.SetReplicationLagQuery(arg)
|
||||
return ForcePrintStatusAndHintRule, nil
|
||||
}
|
||||
case "nice-ratio":
|
||||
{
|
||||
if niceRatio, err := strconv.ParseFloat(arg, 64); err != nil {
|
||||
return NoPrintStatusRule, err
|
||||
} else {
|
||||
this.migrationContext.SetNiceRatio(niceRatio)
|
||||
return ForcePrintStatusAndHintRule, nil
|
||||
}
|
||||
}
|
||||
case "max-load":
|
||||
{
|
||||
if err := this.migrationContext.ReadMaxLoad(arg); err != nil {
|
||||
return NoPrintStatusRule, err
|
||||
}
|
||||
return ForcePrintStatusAndHintRule, nil
|
||||
}
|
||||
case "critical-load":
|
||||
{
|
||||
if err := this.migrationContext.ReadCriticalLoad(arg); err != nil {
|
||||
return NoPrintStatusRule, err
|
||||
}
|
||||
return ForcePrintStatusAndHintRule, nil
|
||||
}
|
||||
case "throttle-query":
|
||||
{
|
||||
this.migrationContext.SetThrottleQuery(arg)
|
||||
fmt.Fprintf(writer, throttleHint)
|
||||
return ForcePrintStatusAndHintRule, nil
|
||||
}
|
||||
case "throttle-control-replicas":
|
||||
{
|
||||
if err := this.migrationContext.ReadThrottleControlReplicaKeys(arg); err != nil {
|
||||
return NoPrintStatusRule, err
|
||||
}
|
||||
fmt.Fprintf(writer, "%s\n", this.migrationContext.GetThrottleControlReplicaKeys().ToCommaDelimitedList())
|
||||
return ForcePrintStatusAndHintRule, nil
|
||||
}
|
||||
case "throttle", "pause", "suspend":
|
||||
{
|
||||
atomic.StoreInt64(&this.migrationContext.ThrottleCommandedByUser, 1)
|
||||
fmt.Fprintf(writer, throttleHint)
|
||||
return ForcePrintStatusAndHintRule, nil
|
||||
}
|
||||
case "no-throttle", "unthrottle", "resume", "continue":
|
||||
{
|
||||
atomic.StoreInt64(&this.migrationContext.ThrottleCommandedByUser, 0)
|
||||
return ForcePrintStatusAndHintRule, nil
|
||||
}
|
||||
case "unpostpone", "no-postpone", "cut-over":
|
||||
{
|
||||
if arg == "" && this.migrationContext.ForceNamedCutOverCommand {
|
||||
err := fmt.Errorf("User commanded 'unpostpone' without specifying table name, but --force-named-cut-over is set")
|
||||
return NoPrintStatusRule, err
|
||||
}
|
||||
if arg != "" && arg != this.migrationContext.OriginalTableName {
|
||||
// User exlpicitly provided table name. This is a courtesy protection mechanism
|
||||
err := fmt.Errorf("User commanded 'unpostpone' on %s, but migrated table is %s; ingoring request.", arg, this.migrationContext.OriginalTableName)
|
||||
return NoPrintStatusRule, err
|
||||
}
|
||||
if atomic.LoadInt64(&this.migrationContext.IsPostponingCutOver) > 0 {
|
||||
atomic.StoreInt64(&this.migrationContext.UserCommandedUnpostponeFlag, 1)
|
||||
fmt.Fprintf(writer, "Unpostponed\n")
|
||||
return ForcePrintStatusAndHintRule, nil
|
||||
}
|
||||
fmt.Fprintf(writer, "You may only invoke this when gh-ost is actively postponing migration. At this time it is not.\n")
|
||||
return NoPrintStatusRule, nil
|
||||
}
|
||||
case "panic":
|
||||
{
|
||||
err := fmt.Errorf("User commanded 'panic'. I will now panic, without cleanup. PANIC!")
|
||||
this.migrationContext.PanicAbort <- err
|
||||
return NoPrintStatusRule, err
|
||||
}
|
||||
default:
|
||||
err = fmt.Errorf("Unknown command: %s", command)
|
||||
return NoPrintStatusRule, err
|
||||
}
|
||||
return NoPrintStatusRule, nil
|
||||
}
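The switch above dispatches interactive commands such as "chunk-size", "max-lag-millis" or "throttle", each with an optional argument. As a minimal sketch (not part of this commit), this is how a raw command line of the assumed form "command=value" might be split into the `command` and `arg` the switch consumes; the helper name is made up and the "strings" package is assumed to be imported.

// Hypothetical helper: split "command=value" into command and arg.
func splitCommand(line string) (command string, arg string) {
	line = strings.TrimSpace(line)
	if tokens := strings.SplitN(line, "=", 2); len(tokens) == 2 {
		return strings.TrimSpace(tokens[0]), strings.TrimSpace(tokens[1])
	}
	return line, ""
}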
|
||||
|
@ -204,7 +204,7 @@ func (this *EventsStreamer) StreamEvents(canStopStreaming func() bool) error {
|
||||
successiveFailures = 0
|
||||
}
|
||||
if successiveFailures > this.migrationContext.MaxRetries() {
|
||||
return fmt.Errorf("%d successive failures in streamer reconnect at coordinates %+v", lastAppliedRowsEventHint)
|
||||
return fmt.Errorf("%d successive failures in streamer reconnect at coordinates %+v", successiveFailures, this.GetReconnectBinlogCoordinates())
|
||||
}
|
||||
|
||||
// Reposition at same binlog file.
|
||||
|
@ -21,15 +21,13 @@ type Throttler struct {
|
||||
migrationContext *base.MigrationContext
|
||||
applier *Applier
|
||||
inspector *Inspector
|
||||
panicAbort chan error
|
||||
}
|
||||
|
||||
func NewThrottler(applier *Applier, inspector *Inspector, panicAbort chan error) *Throttler {
|
||||
func NewThrottler(applier *Applier, inspector *Inspector) *Throttler {
|
||||
return &Throttler{
|
||||
migrationContext: base.GetMigrationContext(),
|
||||
applier: applier,
|
||||
inspector: inspector,
|
||||
panicAbort: panicAbort,
|
||||
}
|
||||
}
|
||||
|
||||
@ -132,6 +130,20 @@ func (this *Throttler) collectControlReplicasLag() {
|
||||
}
|
||||
}
|
||||
|
||||
func (this *Throttler) criticalLoadIsMet() (met bool, variableName string, value int64, threshold int64, err error) {
|
||||
criticalLoad := this.migrationContext.GetCriticalLoad()
|
||||
for variableName, threshold = range criticalLoad {
|
||||
value, err = this.applier.ShowStatusVariable(variableName)
|
||||
if err != nil {
|
||||
return false, variableName, value, threshold, err
|
||||
}
|
||||
if value >= threshold {
|
||||
return true, variableName, value, threshold, nil
|
||||
}
|
||||
}
|
||||
return false, variableName, value, threshold, nil
|
||||
}
|
||||
|
||||
// collectGeneralThrottleMetrics reads the once-per-sec metrics, and stores them onto this.migrationContext
|
||||
func (this *Throttler) collectGeneralThrottleMetrics() error {
|
||||
|
||||
@ -143,18 +155,26 @@ func (this *Throttler) collectGeneralThrottleMetrics() error {
|
||||
// Regardless of throttle, we take opportunity to check for panic-abort
|
||||
if this.migrationContext.PanicFlagFile != "" {
|
||||
if base.FileExists(this.migrationContext.PanicFlagFile) {
|
||||
this.panicAbort <- fmt.Errorf("Found panic-file %s. Aborting without cleanup", this.migrationContext.PanicFlagFile)
|
||||
this.migrationContext.PanicAbort <- fmt.Errorf("Found panic-file %s. Aborting without cleanup", this.migrationContext.PanicFlagFile)
|
||||
}
|
||||
}
|
||||
criticalLoad := this.migrationContext.GetCriticalLoad()
|
||||
for variableName, threshold := range criticalLoad {
|
||||
value, err := this.applier.ShowStatusVariable(variableName)
|
||||
|
||||
criticalLoadMet, variableName, value, threshold, err := this.criticalLoadIsMet()
|
||||
if err != nil {
|
||||
return setThrottle(true, fmt.Sprintf("%s %s", variableName, err))
|
||||
}
|
||||
if value >= threshold {
|
||||
this.panicAbort <- fmt.Errorf("critical-load met: %s=%d, >=%d", variableName, value, threshold)
|
||||
if criticalLoadMet && this.migrationContext.CriticalLoadIntervalMilliseconds == 0 {
|
||||
this.migrationContext.PanicAbort <- fmt.Errorf("critical-load met: %s=%d, >=%d", variableName, value, threshold)
|
||||
}
|
||||
if criticalLoadMet && this.migrationContext.CriticalLoadIntervalMilliseconds > 0 {
|
||||
log.Errorf("critical-load met once: %s=%d, >=%d. Will check again in %d millis", variableName, value, threshold, this.migrationContext.CriticalLoadIntervalMilliseconds)
|
||||
go func() {
|
||||
timer := time.NewTimer(time.Millisecond * time.Duration(this.migrationContext.CriticalLoadIntervalMilliseconds))
|
||||
<-timer.C
|
||||
if criticalLoadMetAgain, variableName, value, threshold, _ := this.criticalLoadIsMet(); criticalLoadMetAgain {
|
||||
this.migrationContext.PanicAbort <- fmt.Errorf("critical-load met again after %d millis: %s=%d, >=%d", this.migrationContext.CriticalLoadIntervalMilliseconds, variableName, value, threshold)
|
||||
}
|
||||
}()
|
||||
}
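The block above implements a "second chance" check: when critical-load is met and an interval is configured, a timer fires once and the condition is re-evaluated before aborting. A standalone sketch of the same pattern follows; the function name, checkFunc and abortChan are assumptions for illustration, and "time" and "fmt" are assumed imported.

// Hypothetical sketch of the timer-based recheck used above: re-evaluate
// the condition once after intervalMillis, and only then abort.
func recheckBeforeAbort(intervalMillis int64, checkFunc func() bool, abortChan chan<- error) {
	go func() {
		timer := time.NewTimer(time.Millisecond * time.Duration(intervalMillis))
		<-timer.C
		if checkFunc() {
			abortChan <- fmt.Errorf("condition met again after %d millis", intervalMillis)
		}
	}()
}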
|
||||
|
||||
// Back to throttle considerations
|
||||
|
@ -26,12 +26,10 @@ func NewConnectionConfig() *ConnectionConfig {
|
||||
return config
|
||||
}
|
||||
|
||||
func (this *ConnectionConfig) Duplicate() *ConnectionConfig {
|
||||
// DuplicateCredentials creates a new connection config with given key and with same credentials as this config
|
||||
func (this *ConnectionConfig) DuplicateCredentials(key InstanceKey) *ConnectionConfig {
|
||||
config := &ConnectionConfig{
|
||||
Key: InstanceKey{
|
||||
Hostname: this.Key.Hostname,
|
||||
Port: this.Key.Port,
|
||||
},
|
||||
Key: key,
|
||||
User: this.User,
|
||||
Password: this.Password,
|
||||
}
|
||||
@ -39,6 +37,10 @@ func (this *ConnectionConfig) Duplicate() *ConnectionConfig {
|
||||
return config
|
||||
}
|
||||
|
||||
func (this *ConnectionConfig) Duplicate() *ConnectionConfig {
|
||||
return this.DuplicateCredentials(this.Key)
|
||||
}
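DuplicateCredentials clones the user and password onto a new instance key, while Duplicate keeps the same key, as the tests below exercise. A hypothetical usage sketch, with a made-up replica hostname and port, could look like this:

// Hypothetical usage: reuse the inspected server's credentials to reach a
// specific throttle-control replica. Hostname/port here are illustrative.
func connectionConfigForReplica(inspected *ConnectionConfig) *ConnectionConfig {
	// Same User/Password as `inspected`, but a new Key (and ImpliedKey).
	return inspected.DuplicateCredentials(InstanceKey{Hostname: "replica-01.example.com", Port: 3306})
}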
|
||||
|
||||
func (this *ConnectionConfig) String() string {
|
||||
return fmt.Sprintf("%s, user=%s", this.Key.DisplayString(), this.User)
|
||||
}
|
||||
|
57
go/mysql/connection_test.go
Normal file
57
go/mysql/connection_test.go
Normal file
@ -0,0 +1,57 @@
|
||||
/*
|
||||
Copyright 2016 GitHub Inc.
|
||||
See https://github.com/github/gh-ost/blob/master/LICENSE
|
||||
*/
|
||||
|
||||
package mysql
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/outbrain/golib/log"
|
||||
test "github.com/outbrain/golib/tests"
|
||||
)
|
||||
|
||||
func init() {
|
||||
log.SetLevel(log.ERROR)
|
||||
}
|
||||
|
||||
func TestNewConnectionConfig(t *testing.T) {
|
||||
c := NewConnectionConfig()
|
||||
test.S(t).ExpectEquals(c.Key.Hostname, "")
|
||||
test.S(t).ExpectEquals(c.Key.Port, 0)
|
||||
test.S(t).ExpectEquals(c.ImpliedKey.Hostname, "")
|
||||
test.S(t).ExpectEquals(c.ImpliedKey.Port, 0)
|
||||
test.S(t).ExpectEquals(c.User, "")
|
||||
test.S(t).ExpectEquals(c.Password, "")
|
||||
}
|
||||
|
||||
func TestDuplicateCredentials(t *testing.T) {
|
||||
c := NewConnectionConfig()
|
||||
c.Key = InstanceKey{Hostname: "myhost", Port: 3306}
|
||||
c.User = "gromit"
|
||||
c.Password = "penguin"
|
||||
|
||||
dup := c.DuplicateCredentials(InstanceKey{Hostname: "otherhost", Port: 3310})
|
||||
test.S(t).ExpectEquals(dup.Key.Hostname, "otherhost")
|
||||
test.S(t).ExpectEquals(dup.Key.Port, 3310)
|
||||
test.S(t).ExpectEquals(dup.ImpliedKey.Hostname, "otherhost")
|
||||
test.S(t).ExpectEquals(dup.ImpliedKey.Port, 3310)
|
||||
test.S(t).ExpectEquals(dup.User, "gromit")
|
||||
test.S(t).ExpectEquals(dup.Password, "penguin")
|
||||
}
|
||||
|
||||
func TestDuplicate(t *testing.T) {
|
||||
c := NewConnectionConfig()
|
||||
c.Key = InstanceKey{Hostname: "myhost", Port: 3306}
|
||||
c.User = "gromit"
|
||||
c.Password = "penguin"
|
||||
|
||||
dup := c.Duplicate()
|
||||
test.S(t).ExpectEquals(dup.Key.Hostname, "myhost")
|
||||
test.S(t).ExpectEquals(dup.Key.Port, 3306)
|
||||
test.S(t).ExpectEquals(dup.ImpliedKey.Hostname, "myhost")
|
||||
test.S(t).ExpectEquals(dup.ImpliedKey.Port, 3306)
|
||||
test.S(t).ExpectEquals(dup.User, "gromit")
|
||||
test.S(t).ExpectEquals(dup.Password, "penguin")
|
||||
}
|
@ -32,6 +32,20 @@ func EscapeName(name string) string {
|
||||
return fmt.Sprintf("`%s`", name)
|
||||
}
|
||||
|
||||
func buildColumnsPreparedValues(columns *ColumnList) []string {
|
||||
values := make([]string, columns.Len(), columns.Len())
|
||||
for i, column := range columns.Columns() {
|
||||
var token string
|
||||
if column.timezoneConversion != nil {
|
||||
token = fmt.Sprintf("convert_tz(?, '%s', '%s')", column.timezoneConversion.ToTimezone, "+00:00")
|
||||
} else {
|
||||
token = "?"
|
||||
}
|
||||
values[i] = token
|
||||
}
|
||||
return values
|
||||
}
|
||||
|
||||
func buildPreparedValues(length int) []string {
|
||||
values := make([]string, length, length)
|
||||
for i := 0; i < length; i++ {
|
||||
@ -83,13 +97,19 @@ func BuildEqualsPreparedComparison(columns []string) (result string, err error)
|
||||
return BuildEqualsComparison(columns, values)
|
||||
}
|
||||
|
||||
func BuildSetPreparedClause(columns []string) (result string, err error) {
|
||||
if len(columns) == 0 {
|
||||
func BuildSetPreparedClause(columns *ColumnList) (result string, err error) {
|
||||
if columns.Len() == 0 {
|
||||
return "", fmt.Errorf("Got 0 columns in BuildSetPreparedClause")
|
||||
}
|
||||
setTokens := []string{}
|
||||
for _, column := range columns {
|
||||
setTokens = append(setTokens, fmt.Sprintf("%s=?", EscapeName(column)))
|
||||
for _, column := range columns.Columns() {
|
||||
var setToken string
|
||||
if column.timezoneConversion != nil {
|
||||
setToken = fmt.Sprintf("%s=convert_tz(?, '%s', '%s')", EscapeName(column.Name), column.timezoneConversion.ToTimezone, "+00:00")
|
||||
} else {
|
||||
setToken = fmt.Sprintf("%s=?", EscapeName(column.Name))
|
||||
}
|
||||
setTokens = append(setTokens, setToken)
|
||||
}
|
||||
return strings.Join(setTokens, ", "), nil
|
||||
}
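With timezone-conversion metadata attached to a column, BuildSetPreparedClause (and likewise buildColumnsPreparedValues) wraps that column's placeholder in convert_tz. A minimal sketch based on the helpers in this diff; the column names and source timezone are illustrative, and "fmt" is assumed imported:

func ExampleBuildSetPreparedClause() {
	columns := NewColumnList([]string{"c1", "created_at"})
	columns.SetConvertDatetimeToTimestamp("created_at", "America/New_York")
	clause, _ := BuildSetPreparedClause(columns)
	fmt.Println(clause)
	// => `c1`=?, `created_at`=convert_tz(?, 'America/New_York', '+00:00')
}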
|
||||
@ -344,7 +364,7 @@ func BuildDMLInsertQuery(databaseName, tableName string, tableColumns, sharedCol
|
||||
databaseName = EscapeName(databaseName)
|
||||
tableName = EscapeName(tableName)
|
||||
|
||||
for _, column := range mappedSharedColumns.Columns() {
|
||||
for _, column := range sharedColumns.Columns() {
|
||||
tableOrdinal := tableColumns.Ordinals[column.Name]
|
||||
arg := column.convertArg(args[tableOrdinal])
|
||||
sharedArgs = append(sharedArgs, arg)
|
||||
@ -354,7 +374,7 @@ func BuildDMLInsertQuery(databaseName, tableName string, tableColumns, sharedCol
|
||||
for i := range mappedSharedColumnNames {
|
||||
mappedSharedColumnNames[i] = EscapeName(mappedSharedColumnNames[i])
|
||||
}
|
||||
preparedValues := buildPreparedValues(mappedSharedColumns.Len())
|
||||
preparedValues := buildColumnsPreparedValues(mappedSharedColumns)
|
||||
|
||||
result = fmt.Sprintf(`
|
||||
replace /* gh-ost %s.%s */ into
|
||||
@ -392,10 +412,9 @@ func BuildDMLUpdateQuery(databaseName, tableName string, tableColumns, sharedCol
|
||||
databaseName = EscapeName(databaseName)
|
||||
tableName = EscapeName(tableName)
|
||||
|
||||
for i, column := range sharedColumns.Columns() {
|
||||
mappedColumn := mappedSharedColumns.Columns()[i]
|
||||
for _, column := range sharedColumns.Columns() {
|
||||
tableOrdinal := tableColumns.Ordinals[column.Name]
|
||||
arg := mappedColumn.convertArg(valueArgs[tableOrdinal])
|
||||
arg := column.convertArg(valueArgs[tableOrdinal])
|
||||
sharedArgs = append(sharedArgs, arg)
|
||||
}
|
||||
|
||||
@ -405,11 +424,7 @@ func BuildDMLUpdateQuery(databaseName, tableName string, tableColumns, sharedCol
|
||||
uniqueKeyArgs = append(uniqueKeyArgs, arg)
|
||||
}
|
||||
|
||||
mappedSharedColumnNames := duplicateNames(mappedSharedColumns.Names())
|
||||
for i := range mappedSharedColumnNames {
|
||||
mappedSharedColumnNames[i] = EscapeName(mappedSharedColumnNames[i])
|
||||
}
|
||||
setClause, err := BuildSetPreparedClause(mappedSharedColumnNames)
|
||||
setClause, err := BuildSetPreparedClause(mappedSharedColumns)
|
||||
|
||||
equalsComparison, err := BuildEqualsPreparedComparison(uniqueKeyColumns.Names())
|
||||
result = fmt.Sprintf(`
|
||||
|
@ -79,19 +79,19 @@ func TestBuildEqualsPreparedComparison(t *testing.T) {
|
||||
|
||||
func TestBuildSetPreparedClause(t *testing.T) {
|
||||
{
|
||||
columns := []string{"c1"}
|
||||
columns := NewColumnList([]string{"c1"})
|
||||
clause, err := BuildSetPreparedClause(columns)
|
||||
test.S(t).ExpectNil(err)
|
||||
test.S(t).ExpectEquals(clause, "`c1`=?")
|
||||
}
|
||||
{
|
||||
columns := []string{"c1", "c2"}
|
||||
columns := NewColumnList([]string{"c1", "c2"})
|
||||
clause, err := BuildSetPreparedClause(columns)
|
||||
test.S(t).ExpectNil(err)
|
||||
test.S(t).ExpectEquals(clause, "`c1`=?, `c2`=?")
|
||||
}
|
||||
{
|
||||
columns := []string{}
|
||||
columns := NewColumnList([]string{})
|
||||
_, err := BuildSetPreparedClause(columns)
|
||||
test.S(t).ExpectNotNil(err)
|
||||
}
|
||||
|
@ -12,10 +12,24 @@ import (
|
||||
"strings"
|
||||
)
|
||||
|
||||
type ColumnType int
|
||||
|
||||
const (
|
||||
UnknownColumnType ColumnType = iota
|
||||
TimestampColumnType = iota
|
||||
DateTimeColumnType = iota
|
||||
)
|
||||
|
||||
type TimezoneConvertion struct {
|
||||
ToTimezone string
|
||||
}
|
||||
|
||||
type Column struct {
|
||||
Name string
|
||||
IsUnsigned bool
|
||||
Charset string
|
||||
Type ColumnType
|
||||
timezoneConversion *TimezoneConvertion
|
||||
}
|
||||
|
||||
func (this *Column) convertArg(arg interface{}) interface{} {
|
||||
@ -112,20 +126,43 @@ func (this *ColumnList) Names() []string {
|
||||
return names
|
||||
}
|
||||
|
||||
func (this *ColumnList) GetColumn(columnName string) *Column {
|
||||
if ordinal, ok := this.Ordinals[columnName]; ok {
|
||||
return &this.columns[ordinal]
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (this *ColumnList) SetUnsigned(columnName string) {
|
||||
this.columns[this.Ordinals[columnName]].IsUnsigned = true
|
||||
this.GetColumn(columnName).IsUnsigned = true
|
||||
}
|
||||
|
||||
func (this *ColumnList) IsUnsigned(columnName string) bool {
|
||||
return this.columns[this.Ordinals[columnName]].IsUnsigned
|
||||
return this.GetColumn(columnName).IsUnsigned
|
||||
}
|
||||
|
||||
func (this *ColumnList) SetCharset(columnName string, charset string) {
|
||||
this.columns[this.Ordinals[columnName]].Charset = charset
|
||||
this.GetColumn(columnName).Charset = charset
|
||||
}
|
||||
|
||||
func (this *ColumnList) GetCharset(columnName string) string {
|
||||
return this.columns[this.Ordinals[columnName]].Charset
|
||||
return this.GetColumn(columnName).Charset
|
||||
}
|
||||
|
||||
func (this *ColumnList) SetColumnType(columnName string, columnType ColumnType) {
|
||||
this.GetColumn(columnName).Type = columnType
|
||||
}
|
||||
|
||||
func (this *ColumnList) GetColumnType(columnName string) ColumnType {
|
||||
return this.GetColumn(columnName).Type
|
||||
}
|
||||
|
||||
func (this *ColumnList) SetConvertDatetimeToTimestamp(columnName string, toTimezone string) {
|
||||
this.GetColumn(columnName).timezoneConversion = &TimezoneConvertion{ToTimezone: toTimezone}
|
||||
}
|
||||
|
||||
func (this *ColumnList) HasTimezoneConversion(columnName string) bool {
|
||||
return this.GetColumn(columnName).timezoneConversion != nil
|
||||
}
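Taken together, the new accessors let an inspector tag a column's type and a DATETIME-to-TIMESTAMP conversion, which the query builders then pick up. A sketch of that marking step, under the assumption that the column list actually contains the named column (GetColumn would return nil otherwise) and with an illustrative timezone:

// Hypothetical sketch: mark a column before building DML queries.
func markDatetimeColumn(columns *ColumnList) {
	columns.SetColumnType("created_at", DateTimeColumnType)
	columns.SetConvertDatetimeToTimestamp("created_at", "Europe/Paris")
	if columns.HasTimezoneConversion("created_at") {
		// the builders will now emit convert_tz(...) for this column's placeholder
	}
}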
|
||||
|
||||
func (this *ColumnList) String() string {
|
||||
|
@ -28,3 +28,17 @@ func TestParseColumnList(t *testing.T) {
|
||||
test.S(t).ExpectEquals(columnList.Ordinals["category"], 1)
|
||||
test.S(t).ExpectEquals(columnList.Ordinals["max_len"], 2)
|
||||
}
|
||||
|
||||
func TestGetColumn(t *testing.T) {
|
||||
names := "id,category,max_len"
|
||||
columnList := ParseColumnList(names)
|
||||
{
|
||||
column := columnList.GetColumn("category")
|
||||
test.S(t).ExpectTrue(column != nil)
|
||||
test.S(t).ExpectEquals(column.Name, "category")
|
||||
}
|
||||
{
|
||||
column := columnList.GetColumn("no_such_column")
|
||||
test.S(t).ExpectTrue(column == nil)
|
||||
}
|
||||
}
|
||||
|
28
localtests/alter-charset-all-dml/create.sql
Normal file
28
localtests/alter-charset-all-dml/create.sql
Normal file
@ -0,0 +1,28 @@
|
||||
drop table if exists gh_ost_test;
|
||||
create table gh_ost_test (
|
||||
id int auto_increment,
|
||||
t1 varchar(128) charset latin1 collate latin1_swedish_ci,
|
||||
t2 varchar(128) charset latin1 collate latin1_swedish_ci,
|
||||
tutf8 varchar(128) charset utf8,
|
||||
tutf8mb4 varchar(128) charset utf8mb4,
|
||||
random_value varchar(128) charset ascii,
|
||||
primary key(id)
|
||||
) auto_increment=1;
|
||||
|
||||
drop event if exists gh_ost_test;
|
||||
delimiter ;;
|
||||
create event gh_ost_test
|
||||
on schedule every 1 second
|
||||
starts current_timestamp
|
||||
ends current_timestamp + interval 60 second
|
||||
on completion not preserve
|
||||
enable
|
||||
do
|
||||
begin
|
||||
insert into gh_ost_test values (null, md5(rand()), md5(rand()), md5(rand()), md5(rand()), md5(rand()));
|
||||
insert into gh_ost_test values (null, 'átesting', 'átesting', 'átesting', 'átesting', md5(rand()));
|
||||
insert into gh_ost_test values (null, 'átesting_del', 'átesting', 'átesting', 'átesting', md5(rand()));
|
||||
insert into gh_ost_test values (null, 'testátest', 'testátest', 'testátest', '🍻😀', md5(rand()));
|
||||
update gh_ost_test set t1='átesting2' where t1='átesting' order by id desc limit 1;
|
||||
delete from gh_ost_test where t1='átesting_del' order by id desc limit 1;
|
||||
end ;;
|
1
localtests/alter-charset-all-dml/extra_args
Normal file
1
localtests/alter-charset-all-dml/extra_args
Normal file
@ -0,0 +1 @@
|
||||
--alter='MODIFY `t1` varchar(128) CHARACTER SET utf8mb4 NOT NULL, MODIFY `t2` varchar(128) CHARACTER SET latin2 NOT NULL, MODIFY `tutf8` varchar(128) CHARACTER SET latin1 NOT NULL'
|
24
localtests/alter-charset/create.sql
Normal file
24
localtests/alter-charset/create.sql
Normal file
@ -0,0 +1,24 @@
|
||||
drop table if exists gh_ost_test;
|
||||
create table gh_ost_test (
|
||||
id int auto_increment,
|
||||
t1 varchar(128) charset latin1 collate latin1_swedish_ci,
|
||||
t2 varchar(128) charset latin1 collate latin1_swedish_ci,
|
||||
tutf8 varchar(128) charset utf8,
|
||||
tutf8mb4 varchar(128) charset utf8mb4,
|
||||
primary key(id)
|
||||
) auto_increment=1;
|
||||
|
||||
drop event if exists gh_ost_test;
|
||||
delimiter ;;
|
||||
create event gh_ost_test
|
||||
on schedule every 1 second
|
||||
starts current_timestamp
|
||||
ends current_timestamp + interval 60 second
|
||||
on completion not preserve
|
||||
enable
|
||||
do
|
||||
begin
|
||||
insert into gh_ost_test values (null, md5(rand()), md5(rand()), md5(rand()), md5(rand()));
|
||||
insert into gh_ost_test values (null, 'átesting', 'átesting', 'átesting', 'átesting');
|
||||
insert into gh_ost_test values (null, 'testátest', 'testátest', 'testátest', '🍻😀');
|
||||
end ;;
|
1
localtests/alter-charset/extra_args
Normal file
1
localtests/alter-charset/extra_args
Normal file
@ -0,0 +1 @@
|
||||
--alter='MODIFY `t1` varchar(128) CHARACTER SET utf8mb4 NOT NULL, MODIFY `t2` varchar(128) CHARACTER SET latin2 NOT NULL, MODIFY `tutf8` varchar(128) CHARACTER SET latin1 NOT NULL'
|
31
localtests/datetime-to-timestamp-pk-fail/create.sql
Normal file
31
localtests/datetime-to-timestamp-pk-fail/create.sql
Normal file
@ -0,0 +1,31 @@
|
||||
drop table if exists gh_ost_test;
|
||||
create table gh_ost_test (
|
||||
id int unsigned auto_increment,
|
||||
i int not null,
|
||||
ts0 timestamp default current_timestamp,
|
||||
ts1 timestamp,
|
||||
dt2 datetime,
|
||||
t datetime,
|
||||
updated tinyint unsigned default 0,
|
||||
primary key(id, t),
|
||||
key i_idx(i)
|
||||
) auto_increment=1;
|
||||
|
||||
drop event if exists gh_ost_test;
|
||||
delimiter ;;
|
||||
create event gh_ost_test
|
||||
on schedule every 1 second
|
||||
starts current_timestamp
|
||||
ends current_timestamp + interval 60 second
|
||||
on completion not preserve
|
||||
enable
|
||||
do
|
||||
begin
|
||||
insert into gh_ost_test values (null, 7, null, now(), now(), '2010-10-20 10:20:30', 0);
|
||||
|
||||
insert into gh_ost_test values (null, 11, null, now(), now(), '2010-10-20 10:20:30', 0);
|
||||
update gh_ost_test set dt2=now() + interval 1 minute, updated = 1 where i = 11 order by id desc limit 1;
|
||||
|
||||
insert into gh_ost_test values (null, 13, null, now(), now(), '2010-10-20 10:20:30', 0);
|
||||
update gh_ost_test set t=t + interval 1 minute, updated = 1 where i = 13 order by id desc limit 1;
|
||||
end ;;
|
1
localtests/datetime-to-timestamp-pk-fail/expect_failure
Normal file
1
localtests/datetime-to-timestamp-pk-fail/expect_failure
Normal file
@ -0,0 +1 @@
|
||||
No support at this time for converting a column from DATETIME to TIMESTAMP that is also part of the chosen unique key
|
1
localtests/datetime-to-timestamp-pk-fail/extra_args
Normal file
1
localtests/datetime-to-timestamp-pk-fail/extra_args
Normal file
@ -0,0 +1 @@
|
||||
--alter="change column t t timestamp not null"
|
31
localtests/datetime-to-timestamp/create.sql
Normal file
31
localtests/datetime-to-timestamp/create.sql
Normal file
@ -0,0 +1,31 @@
|
||||
drop table if exists gh_ost_test;
|
||||
create table gh_ost_test (
|
||||
id int unsigned auto_increment,
|
||||
i int not null,
|
||||
ts0 timestamp default current_timestamp,
|
||||
ts1 timestamp,
|
||||
dt2 datetime,
|
||||
t datetime,
|
||||
updated tinyint unsigned default 0,
|
||||
primary key(id),
|
||||
key i_idx(i)
|
||||
) auto_increment=1;
|
||||
|
||||
drop event if exists gh_ost_test;
|
||||
delimiter ;;
|
||||
create event gh_ost_test
|
||||
on schedule every 1 second
|
||||
starts current_timestamp
|
||||
ends current_timestamp + interval 60 second
|
||||
on completion not preserve
|
||||
enable
|
||||
do
|
||||
begin
|
||||
insert into gh_ost_test values (null, 7, null, now(), now(), '2010-10-20 10:20:30', 0);
|
||||
|
||||
insert into gh_ost_test values (null, 11, null, now(), now(), '2010-10-20 10:20:30', 0);
|
||||
update gh_ost_test set dt2=now() + interval 1 minute, updated = 1 where i = 11 order by id desc limit 1;
|
||||
|
||||
insert into gh_ost_test values (null, 13, null, now(), now(), '2010-10-20 10:20:30', 0);
|
||||
update gh_ost_test set t=t + interval 1 minute, updated = 1 where i = 13 order by id desc limit 1;
|
||||
end ;;
|
1
localtests/datetime-to-timestamp/extra_args
Normal file
1
localtests/datetime-to-timestamp/extra_args
Normal file
@ -0,0 +1 @@
|
||||
--alter="change column t t timestamp not null"
|
37
localtests/datetime/create.sql
Normal file
37
localtests/datetime/create.sql
Normal file
@ -0,0 +1,37 @@
|
||||
drop table if exists gh_ost_test;
|
||||
create table gh_ost_test (
|
||||
id int auto_increment,
|
||||
i int not null,
|
||||
dt0 datetime default current_timestamp,
|
||||
dt1 datetime,
|
||||
dt2 datetime,
|
||||
updated tinyint unsigned default 0,
|
||||
primary key(id),
|
||||
key i_idx(i)
|
||||
) auto_increment=1;
|
||||
|
||||
drop event if exists gh_ost_test;
|
||||
delimiter ;;
|
||||
create event gh_ost_test
|
||||
on schedule every 1 second
|
||||
starts current_timestamp
|
||||
ends current_timestamp + interval 60 second
|
||||
on completion not preserve
|
||||
enable
|
||||
do
|
||||
begin
|
||||
insert into gh_ost_test values (null, 11, null, now(), now(), 0);
|
||||
update gh_ost_test set dt2=now() + interval 1 minute, updated = 1 where i = 11 order by id desc limit 1;
|
||||
|
||||
insert into gh_ost_test values (null, 13, null, now(), now(), 0);
|
||||
update gh_ost_test set dt2=now() + interval 1 minute, updated = 1 where i = 13 order by id desc limit 1;
|
||||
|
||||
insert into gh_ost_test values (null, 17, null, now(), now(), 0);
|
||||
update gh_ost_test set dt2=now() + interval 1 minute, updated = 1 where i = 17 order by id desc limit 1;
|
||||
|
||||
insert into gh_ost_test values (null, 19, null, now(), now(), 0);
|
||||
update gh_ost_test set dt2=now() + interval 1 minute, updated = 1 where i = 19 order by id desc limit 1;
|
||||
|
||||
insert into gh_ost_test values (null, 23, null, now(), now(), 0);
|
||||
update gh_ost_test set dt2=now() + interval 1 minute, updated = 1 where i = 23 order by id desc limit 1;
|
||||
end ;;
|
32
localtests/discard-fk/create.sql
Normal file
32
localtests/discard-fk/create.sql
Normal file
@ -0,0 +1,32 @@
|
||||
drop table if exists gh_ost_test_child;
|
||||
drop table if exists gh_ost_test;
|
||||
drop table if exists gh_ost_test_fk_parent;
|
||||
create table gh_ost_test_fk_parent (
|
||||
id int auto_increment,
|
||||
ts timestamp,
|
||||
primary key(id)
|
||||
);
|
||||
create table gh_ost_test (
|
||||
id int auto_increment,
|
||||
i int not null,
|
||||
parent_id int not null,
|
||||
primary key(id),
|
||||
constraint test_fk foreign key (parent_id) references gh_ost_test_fk_parent (id) on delete no action
|
||||
) auto_increment=1;
|
||||
|
||||
insert into gh_ost_test_fk_parent (id) values (1),(2),(3);
|
||||
|
||||
drop event if exists gh_ost_test;
|
||||
delimiter ;;
|
||||
create event gh_ost_test
|
||||
on schedule every 1 second
|
||||
starts current_timestamp
|
||||
ends current_timestamp + interval 60 second
|
||||
on completion not preserve
|
||||
enable
|
||||
do
|
||||
begin
|
||||
insert into gh_ost_test values (null, 11, 1);
|
||||
insert into gh_ost_test values (null, 13, 2);
|
||||
insert into gh_ost_test values (null, 17, 3);
|
||||
end ;;
|
1
localtests/discard-fk/extra_args
Normal file
1
localtests/discard-fk/extra_args
Normal file
@ -0,0 +1 @@
|
||||
--discard-foreign-keys
|
41
localtests/fail-fk-parent/create.sql
Normal file
41
localtests/fail-fk-parent/create.sql
Normal file
@ -0,0 +1,41 @@
|
||||
drop table if exists gh_ost_test_child;
|
||||
drop table if exists gh_ost_test;
|
||||
create table gh_ost_test (
|
||||
id int auto_increment,
|
||||
primary key(id)
|
||||
) engine=innodb auto_increment=1;
|
||||
|
||||
create table gh_ost_test_child (
|
||||
id int auto_increment,
|
||||
i int not null,
|
||||
parent_id int not null,
|
||||
constraint test_fk foreign key (parent_id) references gh_ost_test (id) on delete no action,
|
||||
primary key(id)
|
||||
) engine=innodb;
|
||||
insert into gh_ost_test (id) values (1),(2),(3);
|
||||
|
||||
drop event if exists gh_ost_test;
|
||||
drop event if exists gh_ost_test_cleanup;
|
||||
|
||||
delimiter ;;
|
||||
create event gh_ost_test
|
||||
on schedule every 1 second
|
||||
starts current_timestamp
|
||||
ends current_timestamp + interval 60 second
|
||||
on completion not preserve
|
||||
enable
|
||||
do
|
||||
begin
|
||||
insert into gh_ost_test_child values (null, 11, 1);
|
||||
insert into gh_ost_test_child values (null, 13, 2);
|
||||
insert into gh_ost_test_child values (null, 17, 3);
|
||||
end ;;
|
||||
|
||||
create event gh_ost_test_cleanup
|
||||
on schedule at current_timestamp + interval 60 second
|
||||
on completion not preserve
|
||||
enable
|
||||
do
|
||||
begin
|
||||
drop table if exists gh_ost_test_child;
|
||||
end ;;
|
1
localtests/fail-fk-parent/destroy.sql
Normal file
1
localtests/fail-fk-parent/destroy.sql
Normal file
@ -0,0 +1 @@
|
||||
drop table if exists gh_ost_test_child;
|
1
localtests/fail-fk-parent/expect_failure
Normal file
1
localtests/fail-fk-parent/expect_failure
Normal file
@ -0,0 +1 @@
|
||||
Parent-side foreign keys are not supported
|
1
localtests/fail-fk-parent/extra_args
Normal file
1
localtests/fail-fk-parent/extra_args
Normal file
@ -0,0 +1 @@
|
||||
--discard-foreign-keys
|
32
localtests/fail-fk/create.sql
Normal file
32
localtests/fail-fk/create.sql
Normal file
@ -0,0 +1,32 @@
|
||||
drop table if exists gh_ost_test_child;
|
||||
drop table if exists gh_ost_test;
|
||||
drop table if exists gh_ost_test_fk_parent;
|
||||
create table gh_ost_test_fk_parent (
|
||||
id int auto_increment,
|
||||
ts timestamp,
|
||||
primary key(id)
|
||||
);
|
||||
create table gh_ost_test (
|
||||
id int auto_increment,
|
||||
i int not null,
|
||||
parent_id int not null,
|
||||
primary key(id),
|
||||
constraint test_fk foreign key (parent_id) references gh_ost_test_fk_parent (id) on delete no action
|
||||
) auto_increment=1;
|
||||
|
||||
insert into gh_ost_test_fk_parent (id) values (1),(2),(3);
|
||||
|
||||
drop event if exists gh_ost_test;
|
||||
delimiter ;;
|
||||
create event gh_ost_test
|
||||
on schedule every 1 second
|
||||
starts current_timestamp
|
||||
ends current_timestamp + interval 60 second
|
||||
on completion not preserve
|
||||
enable
|
||||
do
|
||||
begin
|
||||
insert into gh_ost_test values (null, 11, 1);
|
||||
insert into gh_ost_test values (null, 13, 2);
|
||||
insert into gh_ost_test values (null, 17, 3);
|
||||
end ;;
|
1
localtests/fail-fk/expect_failure
Normal file
1
localtests/fail-fk/expect_failure
Normal file
@ -0,0 +1 @@
|
||||
Child-side foreign keys are not supported. Bailing out
|
22
localtests/rename-inserts-only/create.sql
Normal file
22
localtests/rename-inserts-only/create.sql
Normal file
@ -0,0 +1,22 @@
|
||||
drop table if exists gh_ost_test;
|
||||
create table gh_ost_test (
|
||||
id int auto_increment,
|
||||
c1 int not null,
|
||||
c2 int not null,
|
||||
primary key (id)
|
||||
) auto_increment=1;
|
||||
|
||||
drop event if exists gh_ost_test;
|
||||
delimiter ;;
|
||||
create event gh_ost_test
|
||||
on schedule every 1 second
|
||||
starts current_timestamp
|
||||
ends current_timestamp + interval 60 second
|
||||
on completion not preserve
|
||||
enable
|
||||
do
|
||||
begin
|
||||
insert into gh_ost_test values (null, 11, 23);
|
||||
insert into gh_ost_test values (null, 13, 23);
|
||||
insert into gh_ost_test values (null, floor(rand()*pow(2,32)), floor(rand()*pow(2,32)));
|
||||
end ;;
|
1
localtests/rename-inserts-only/extra_args
Normal file
1
localtests/rename-inserts-only/extra_args
Normal file
@ -0,0 +1 @@
|
||||
--alter="change column c2 c3 int not null" --approve-renamed-columns
|
26
localtests/rename-reorder-column/create.sql
Normal file
26
localtests/rename-reorder-column/create.sql
Normal file
@ -0,0 +1,26 @@
|
||||
drop table if exists gh_ost_test;
|
||||
create table gh_ost_test (
|
||||
id int auto_increment,
|
||||
c1 int not null,
|
||||
c2 int not null,
|
||||
primary key (id)
|
||||
) auto_increment=1;
|
||||
|
||||
drop event if exists gh_ost_test;
|
||||
delimiter ;;
|
||||
create event gh_ost_test
|
||||
on schedule every 1 second
|
||||
starts current_timestamp
|
||||
ends current_timestamp + interval 60 second
|
||||
on completion not preserve
|
||||
enable
|
||||
do
|
||||
begin
|
||||
insert ignore into gh_ost_test values (1, 11, 23);
|
||||
insert ignore into gh_ost_test values (2, 13, 23);
|
||||
insert into gh_ost_test values (null, 17, 23);
|
||||
set @last_insert_id := last_insert_id();
|
||||
update gh_ost_test set c1=c1+@last_insert_id, c2=c2+@last_insert_id where id=@last_insert_id order by id desc limit 1;
|
||||
delete from gh_ost_test where id=1;
|
||||
delete from gh_ost_test where c1=13; -- id=2
|
||||
end ;;
|
1
localtests/rename-reorder-column/extra_args
Normal file
1
localtests/rename-reorder-column/extra_args
Normal file
@ -0,0 +1 @@
|
||||
--alter="change column c2 c2a int not null after id" --approve-renamed-columns
|
1
localtests/rename-reorder-column/ghost_columns
Normal file
1
localtests/rename-reorder-column/ghost_columns
Normal file
@ -0,0 +1 @@
|
||||
id, c1, c2a
|
1
localtests/rename-reorder-column/orig_columns
Normal file
1
localtests/rename-reorder-column/orig_columns
Normal file
@ -0,0 +1 @@
|
||||
id, c1, c2
|
27
localtests/rename-reorder-columns/create.sql
Normal file
27
localtests/rename-reorder-columns/create.sql
Normal file
@ -0,0 +1,27 @@
|
||||
drop table if exists gh_ost_test;
|
||||
create table gh_ost_test (
|
||||
id int auto_increment,
|
||||
c1 int not null,
|
||||
c2 int not null,
|
||||
c3 int not null,
|
||||
primary key (id)
|
||||
) auto_increment=1;
|
||||
|
||||
drop event if exists gh_ost_test;
|
||||
delimiter ;;
|
||||
create event gh_ost_test
|
||||
on schedule every 1 second
|
||||
starts current_timestamp
|
||||
ends current_timestamp + interval 60 second
|
||||
on completion not preserve
|
||||
enable
|
||||
do
|
||||
begin
|
||||
insert ignore into gh_ost_test values (1, 11, 23, 97);
|
||||
insert ignore into gh_ost_test values (2, 13, 27, 61);
|
||||
insert into gh_ost_test values (null, 17, 31, 53);
|
||||
set @last_insert_id := last_insert_id();
|
||||
update gh_ost_test set c1=c1+@last_insert_id, c2=c2+@last_insert_id, c3=c3+@last_insert_id where id=@last_insert_id order by id desc limit 1;
|
||||
delete from gh_ost_test where id=1;
|
||||
delete from gh_ost_test where c1=13; -- id=2
|
||||
end ;;
|
1
localtests/rename-reorder-columns/extra_args
Normal file
1
localtests/rename-reorder-columns/extra_args
Normal file
@ -0,0 +1 @@
|
||||
--alter="change column c2 c2a int not null, change column c3 c3 int not null after id" --approve-renamed-columns
|
1
localtests/rename-reorder-columns/ghost_columns
Normal file
1
localtests/rename-reorder-columns/ghost_columns
Normal file
@ -0,0 +1 @@
|
||||
id, c1, c2a, c3
|
1
localtests/rename-reorder-columns/orig_columns
Normal file
1
localtests/rename-reorder-columns/orig_columns
Normal file
@ -0,0 +1 @@
|
||||
id, c1, c2, c3
|
26
localtests/reorder-columns/create.sql
Normal file
26
localtests/reorder-columns/create.sql
Normal file
@ -0,0 +1,26 @@
|
||||
drop table if exists gh_ost_test;
|
||||
create table gh_ost_test (
|
||||
id int auto_increment,
|
||||
c1 int not null,
|
||||
c2 int not null,
|
||||
primary key (id)
|
||||
) auto_increment=1;
|
||||
|
||||
drop event if exists gh_ost_test;
|
||||
delimiter ;;
|
||||
create event gh_ost_test
|
||||
on schedule every 1 second
|
||||
starts current_timestamp
|
||||
ends current_timestamp + interval 60 second
|
||||
on completion not preserve
|
||||
enable
|
||||
do
|
||||
begin
|
||||
insert ignore into gh_ost_test values (1, 11, 23);
|
||||
insert ignore into gh_ost_test values (2, 13, 29);
|
||||
insert into gh_ost_test values (null, 17, 31);
|
||||
set @last_insert_id := last_insert_id();
|
||||
update gh_ost_test set c1=c1+@last_insert_id, c2=c2+@last_insert_id where id=@last_insert_id order by id desc limit 1;
|
||||
delete from gh_ost_test where id=1;
|
||||
delete from gh_ost_test where c1=13;
|
||||
end ;;
|
1
localtests/reorder-columns/extra_args
Normal file
1
localtests/reorder-columns/extra_args
Normal file
@ -0,0 +1 @@
|
||||
--alter="change column c2 c2 int not null after id"
|
1
localtests/reorder-columns/ghost_columns
Normal file
1
localtests/reorder-columns/ghost_columns
Normal file
@ -0,0 +1 @@
|
||||
id, c1, c2
|
1
localtests/reorder-columns/orig_columns
Normal file
1
localtests/reorder-columns/orig_columns
Normal file
@ -0,0 +1 @@
|
||||
id, c1, c2
|
@ -9,6 +9,7 @@
|
||||
|
||||
tests_path=$(dirname $0)
|
||||
test_logfile=/tmp/gh-ost-test.log
|
||||
ghost_binary=/tmp/gh-ost-test
|
||||
exec_command_file=/tmp/gh-ost-test.bash
|
||||
|
||||
test_pattern="${1:-.}"
|
||||
@ -56,15 +57,19 @@ test_single() {
|
||||
if [ -f $tests_path/$test_name/extra_args ] ; then
|
||||
extra_args=$(cat $tests_path/$test_name/extra_args)
|
||||
fi
|
||||
columns="*"
|
||||
if [ -f $tests_path/$test_name/test_columns ] ; then
|
||||
columns=$(cat $tests_path/$test_name/test_columns)
|
||||
orig_columns="*"
|
||||
ghost_columns="*"
|
||||
if [ -f $tests_path/$test_name/orig_columns ] ; then
|
||||
orig_columns=$(cat $tests_path/$test_name/orig_columns)
|
||||
fi
|
||||
if [ -f $tests_path/$test_name/ghost_columns ] ; then
|
||||
ghost_columns=$(cat $tests_path/$test_name/ghost_columns)
|
||||
fi
|
||||
# graceful sleep for replica to catch up
|
||||
echo_dot
|
||||
sleep 1
|
||||
#
|
||||
cmd="go run go/cmd/gh-ost/main.go \
|
||||
cmd="$ghost_binary \
|
||||
--user=gh-ost \
|
||||
--password=gh-ost \
|
||||
--host=$replica_host \
|
||||
@ -91,27 +96,59 @@ test_single() {
|
||||
echo_dot
|
||||
bash $exec_command_file 1> $test_logfile 2>&1
|
||||
|
||||
if [ $? -ne 0 ] ; then
|
||||
execution_result=$?
|
||||
|
||||
if [ -f $tests_path/$test_name/destroy.sql ] ; then
|
||||
gh-ost-test-mysql-master --default-character-set=utf8mb4 test < $tests_path/$test_name/destroy.sql
|
||||
fi
|
||||
|
||||
if [ -f $tests_path/$test_name/expect_failure ] ; then
|
||||
if [ $execution_result -eq 0 ] ; then
|
||||
echo
|
||||
echo "ERROR $test_name execution was expected to exit on error but did not. cat $test_logfile"
|
||||
return 1
|
||||
fi
|
||||
if [ -s $tests_path/$test_name/expect_failure ] ; then
|
||||
# 'expect_failure' file has content. We expect to find this content in the log.
|
||||
expected_error_message="$(cat $tests_path/$test_name/expect_failure)"
|
||||
if grep -q "$expected_error_message" $test_logfile ; then
|
||||
return 0
|
||||
fi
|
||||
echo
|
||||
echo "ERROR $test_name execution was expected to exit with error message '${expected_error_message}' but did not. cat $test_logfile"
|
||||
return 1
|
||||
fi
|
||||
# 'expect_failure' file has no content. Any failure is accepted as correct
|
||||
return 0
|
||||
fi
|
||||
|
||||
if [ $execution_result -ne 0 ] ; then
|
||||
echo
|
||||
echo "ERROR $test_name execution failure. cat $test_logfile"
|
||||
return 1
|
||||
fi
|
||||
|
||||
echo_dot
|
||||
orig_checksum=$(gh-ost-test-mysql-replica --default-character-set=utf8mb4 test -e "select ${columns} from gh_ost_test" -ss | md5sum)
|
||||
ghost_checksum=$(gh-ost-test-mysql-replica --default-character-set=utf8mb4 test -e "select ${columns} from _gh_ost_test_gho" -ss | md5sum)
|
||||
orig_checksum=$(gh-ost-test-mysql-replica --default-character-set=utf8mb4 test -e "select ${orig_columns} from gh_ost_test" -ss | md5sum)
|
||||
ghost_checksum=$(gh-ost-test-mysql-replica --default-character-set=utf8mb4 test -e "select ${ghost_columns} from _gh_ost_test_gho" -ss | md5sum)
|
||||
|
||||
if [ "$orig_checksum" != "$ghost_checksum" ] ; then
|
||||
echo "ERROR $test_name: checksum mismatch"
|
||||
echo "---"
|
||||
gh-ost-test-mysql-replica --default-character-set=utf8mb4 test -e "select ${columns} from gh_ost_test" -ss
|
||||
gh-ost-test-mysql-replica --default-character-set=utf8mb4 test -e "select ${orig_columns} from gh_ost_test" -ss
|
||||
echo "---"
|
||||
gh-ost-test-mysql-replica --default-character-set=utf8mb4 test -e "select ${columns} from _gh_ost_test_gho" -ss
|
||||
gh-ost-test-mysql-replica --default-character-set=utf8mb4 test -e "select ${ghost_columns} from _gh_ost_test_gho" -ss
|
||||
return 1
|
||||
fi
|
||||
}
|
||||
|
||||
build_binary() {
|
||||
echo "Building"
|
||||
go build -o $ghost_binary go/cmd/gh-ost/main.go
|
||||
}
|
||||
|
||||
test_all() {
|
||||
build_binary
|
||||
find $tests_path ! -path . -type d -mindepth 1 -maxdepth 1 | cut -d "/" -f 3 | egrep "$test_pattern" | while read test_name ; do
|
||||
test_single "$test_name"
|
||||
if [ $? -ne 0 ] ; then
|
||||
|
33
localtests/timestamp-datetime/create.sql
Normal file
33
localtests/timestamp-datetime/create.sql
Normal file
@ -0,0 +1,33 @@
|
||||
drop table if exists gh_ost_test;
|
||||
create table gh_ost_test (
|
||||
id int auto_increment,
|
||||
i int not null,
|
||||
ts timestamp default current_timestamp,
|
||||
dt datetime,
|
||||
ts2ts timestamp null,
|
||||
ts2dt datetime null,
|
||||
dt2ts timestamp null,
|
||||
dt2dt datetime null,
|
||||
updated tinyint unsigned default 0,
|
||||
primary key(id)
|
||||
) auto_increment=1;
|
||||
|
||||
drop event if exists gh_ost_test;
|
||||
delimiter ;;
|
||||
create event gh_ost_test
|
||||
on schedule every 1 second
|
||||
starts current_timestamp
|
||||
ends current_timestamp + interval 60 second
|
||||
on completion not preserve
|
||||
enable
|
||||
do
|
||||
begin
|
||||
insert into gh_ost_test values (null, 11, now(), now(),null, null, null, null, 0);
|
||||
update gh_ost_test set ts2ts=ts, ts2dt=ts, dt2ts=dt, dt2dt=dt where i = 11 order by id desc limit 1;
|
||||
|
||||
insert into gh_ost_test values (null, 13, null, now(), now(), 0);
|
||||
update gh_ost_test set ts2ts=ts, ts2dt=ts, dt2ts=dt, dt2dt=dt where i = 13 order by id desc limit 1;
|
||||
|
||||
insert into gh_ost_test values (null, 17, null, '2016-07-06 10:20:30', '2016-07-06 10:20:30', 0);
|
||||
update gh_ost_test set ts2ts=ts, ts2dt=ts, dt2ts=dt, dt2dt=dt where i = 17 order by id desc limit 1;
|
||||
end ;;
|
31
localtests/timestamp-to-datetime/create.sql
Normal file
31
localtests/timestamp-to-datetime/create.sql
Normal file
@ -0,0 +1,31 @@
|
||||
drop table if exists gh_ost_test;
|
||||
create table gh_ost_test (
|
||||
id int auto_increment,
|
||||
i int not null,
|
||||
ts0 timestamp default current_timestamp,
|
||||
ts1 timestamp,
|
||||
dt2 datetime,
|
||||
t datetime,
|
||||
updated tinyint unsigned default 0,
|
||||
primary key(id),
|
||||
key i_idx(i)
|
||||
) auto_increment=1;
|
||||
|
||||
drop event if exists gh_ost_test;
|
||||
delimiter ;;
|
||||
create event gh_ost_test
|
||||
on schedule every 1 second
|
||||
starts current_timestamp
|
||||
ends current_timestamp + interval 60 second
|
||||
on completion not preserve
|
||||
enable
|
||||
do
|
||||
begin
|
||||
insert into gh_ost_test values (null, 7, null, now(), now(), '2010-10-20 10:20:30', 0);
|
||||
|
||||
insert into gh_ost_test values (null, 11, null, now(), now(), '2010-10-20 10:20:30', 0);
|
||||
update gh_ost_test set ts2=now() + interval 1 minute, updated = 1 where i = 11 order by id desc limit 1;
|
||||
|
||||
insert into gh_ost_test values (null, 13, null, now(), now(), '2010-10-20 10:20:30', 0);
|
||||
update gh_ost_test set ts2=now() + interval 1 minute, updated = 1 where i = 13 order by id desc limit 1;
|
||||
end ;;
|
1
localtests/timestamp-to-datetime/extra_args
Normal file
1
localtests/timestamp-to-datetime/extra_args
Normal file
@ -0,0 +1 @@
|
||||
--alter="change column t t datetime not null"
|
37
localtests/timestamp/create.sql
Normal file
37
localtests/timestamp/create.sql
Normal file
@ -0,0 +1,37 @@
|
||||
drop table if exists gh_ost_test;
|
||||
create table gh_ost_test (
|
||||
id int auto_increment,
|
||||
i int not null,
|
||||
ts0 timestamp default current_timestamp,
|
||||
ts1 timestamp,
|
||||
ts2 timestamp,
|
||||
updated tinyint unsigned default 0,
|
||||
primary key(id),
|
||||
key i_idx(i)
|
||||
) auto_increment=1;
|
||||
|
||||
drop event if exists gh_ost_test;
|
||||
delimiter ;;
|
||||
create event gh_ost_test
|
||||
on schedule every 1 second
|
||||
starts current_timestamp
|
||||
ends current_timestamp + interval 60 second
|
||||
on completion not preserve
|
||||
enable
|
||||
do
|
||||
begin
|
||||
insert into gh_ost_test values (null, 11, null, now(), now(), 0);
|
||||
update gh_ost_test set ts2=now() + interval 1 minute, updated = 1 where i = 11 order by id desc limit 1;
|
||||
|
||||
insert into gh_ost_test values (null, 13, null, now(), now(), 0);
|
||||
update gh_ost_test set ts2=now() + interval 1 minute, updated = 1 where i = 13 order by id desc limit 1;
|
||||
|
||||
insert into gh_ost_test values (null, 17, null, now(), now(), 0);
|
||||
update gh_ost_test set ts2=now() + interval 1 minute, updated = 1 where i = 17 order by id desc limit 1;
|
||||
|
||||
insert into gh_ost_test values (null, 19, null, now(), now(), 0);
|
||||
update gh_ost_test set ts2=now() + interval 1 minute, updated = 1 where i = 19 order by id desc limit 1;
|
||||
|
||||
insert into gh_ost_test values (null, 23, null, now(), now(), 0);
|
||||
update gh_ost_test set ts2=now() + interval 1 minute, updated = 1 where i = 23 order by id desc limit 1;
|
||||
end ;;
|
44
localtests/tz-datetime-ts/create.sql
Normal file
44
localtests/tz-datetime-ts/create.sql
Normal file
@ -0,0 +1,44 @@
|
||||
drop table if exists gh_ost_test;
|
||||
create table gh_ost_test (
|
||||
id int auto_increment,
|
||||
i int not null,
|
||||
ts0 timestamp default current_timestamp,
|
||||
ts1 timestamp,
|
||||
dt2 datetime,
|
||||
t datetime,
|
||||
updated tinyint unsigned default 0,
|
||||
primary key(id),
|
||||
key i_idx(i)
|
||||
) auto_increment=1;
|
||||
|
||||
drop event if exists gh_ost_test;
|
||||
delimiter ;;
|
||||
create event gh_ost_test
|
||||
on schedule every 1 second
|
||||
starts current_timestamp
|
||||
ends current_timestamp + interval 60 second
|
||||
on completion not preserve
|
||||
enable
|
||||
do
|
||||
begin
|
||||
insert into gh_ost_test values (null, 7, null, now(), now(), '2010-10-20 10:20:30', 0);
|
||||
|
||||
insert into gh_ost_test values (null, 11, null, now(), now(), '2010-10-20 10:20:30', 0);
|
||||
update gh_ost_test set ts2=now() + interval 1 minute, updated = 1 where i = 11 order by id desc limit 1;
|
||||
|
||||
set session time_zone='system';
|
||||
insert into gh_ost_test values (null, 13, null, now(), now(), '2010-10-20 10:20:30', 0);
|
||||
update gh_ost_test set ts2=now() + interval 1 minute, updated = 1 where i = 13 order by id desc limit 1;
|
||||
|
||||
set session time_zone='+00:00';
|
||||
insert into gh_ost_test values (null, 17, null, now(), now(), '2010-10-20 10:20:30', 0);
|
||||
update gh_ost_test set ts2=now() + interval 1 minute, updated = 1 where i = 17 order by id desc limit 1;
|
||||
|
||||
set session time_zone='-03:00';
|
||||
insert into gh_ost_test values (null, 19, null, now(), now(), '2010-10-20 10:20:30', 0);
|
||||
update gh_ost_test set ts2=now() + interval 1 minute, updated = 1 where i = 19 order by id desc limit 1;
|
||||
|
||||
set session time_zone='+05:00';
|
||||
insert into gh_ost_test values (null, 23, null, now(), now(), '2010-10-20 10:20:30', 0);
|
||||
update gh_ost_test set ts2=now() + interval 1 minute, updated = 1 where i = 23 order by id desc limit 1;
|
||||
end ;;
|
1
localtests/tz-datetime-ts/extra_args
Normal file
1
localtests/tz-datetime-ts/extra_args
Normal file
@ -0,0 +1 @@
|
||||
--alter="change column t t timestamp not null"
|
41
localtests/tz-datetime/create.sql
Normal file
41
localtests/tz-datetime/create.sql
Normal file
@ -0,0 +1,41 @@
|
||||
drop table if exists gh_ost_test;
|
||||
create table gh_ost_test (
|
||||
id int auto_increment,
|
||||
i int not null,
|
||||
ts0 timestamp default current_timestamp,
|
||||
ts1 datetime,
|
||||
ts2 datetime,
|
||||
updated tinyint unsigned default 0,
|
||||
primary key(id),
|
||||
key i_idx(i)
|
||||
) auto_increment=1;
|
||||
|
||||
drop event if exists gh_ost_test;
|
||||
delimiter ;;
|
||||
create event gh_ost_test
|
||||
on schedule every 1 second
|
||||
starts current_timestamp
|
||||
ends current_timestamp + interval 60 second
|
||||
on completion not preserve
|
||||
enable
|
||||
do
|
||||
begin
|
||||
insert into gh_ost_test values (null, 11, null, now(), now(), 0);
|
||||
update gh_ost_test set ts2=now() + interval 10 minute, updated = 1 where i = 11 order by id desc limit 1;
|
||||
|
||||
set session time_zone='system';
|
||||
insert into gh_ost_test values (null, 13, null, now(), now(), 0);
|
||||
update gh_ost_test set ts2=now() + interval 10 minute, updated = 1 where i = 13 order by id desc limit 1;
|
||||
|
||||
set session time_zone='+00:00';
|
||||
insert into gh_ost_test values (null, 17, null, now(), now(), 0);
|
||||
update gh_ost_test set ts2=now() + interval 10 minute, updated = 1 where i = 17 order by id desc limit 1;
|
||||
|
||||
set session time_zone='-03:00';
|
||||
insert into gh_ost_test values (null, 19, null, now(), now(), 0);
|
||||
update gh_ost_test set ts2=now() + interval 10 minute, updated = 1 where i = 19 order by id desc limit 1;
|
||||
|
||||
set session time_zone='+05:00';
|
||||
insert into gh_ost_test values (null, 23, null, now(), now(), 0);
|
||||
update gh_ost_test set ts2=now() + interval 10 minute, updated = 1 where i = 23 order by id desc limit 1;
|
||||
end ;;
|
24
localtests/unsigned-rename/create.sql
Normal file
24
localtests/unsigned-rename/create.sql
Normal file
@ -0,0 +1,24 @@
|
||||
drop table if exists gh_ost_test;
|
||||
create table gh_ost_test (
|
||||
id int auto_increment,
|
||||
i int not null,
|
||||
bi bigint not null,
|
||||
iu int unsigned not null,
|
||||
biu bigint unsigned not null,
|
||||
primary key(id)
|
||||
) auto_increment=1;
|
||||
|
||||
drop event if exists gh_ost_test;
|
||||
delimiter ;;
|
||||
create event gh_ost_test
|
||||
on schedule every 1 second
|
||||
starts current_timestamp
|
||||
ends current_timestamp + interval 60 second
|
||||
on completion not preserve
|
||||
enable
|
||||
do
|
||||
begin
|
||||
insert into gh_ost_test values (null, -2147483647, -9223372036854775807, 4294967295, 18446744073709551615);
|
||||
set @last_insert_id := cast(last_insert_id() as signed);
|
||||
update gh_ost_test set i=-2147483647+@last_insert_id, bi=-9223372036854775807+@last_insert_id, iu=4294967295-@last_insert_id, biu=18446744073709551615-@last_insert_id where id < @last_insert_id order by id desc limit 1;
|
||||
end ;;
|
1
localtests/unsigned-rename/extra_args
Normal file
1
localtests/unsigned-rename/extra_args
Normal file
@ -0,0 +1 @@
|
||||
--alter="change column iu iu_renamed int unsigned not null" --approve-renamed-columns
|
1
localtests/unsigned-rename/ghost_columns
Normal file
1
localtests/unsigned-rename/ghost_columns
Normal file
@ -0,0 +1 @@
|
||||
id, i, bi, iu_renamed, biu
|
1
localtests/unsigned-rename/orig_columns
Normal file
1
localtests/unsigned-rename/orig_columns
Normal file
@ -0,0 +1 @@
|
||||
id, i, bi, iu, biu
|
24
localtests/unsigned-reorder/create.sql
Normal file
24
localtests/unsigned-reorder/create.sql
Normal file
@ -0,0 +1,24 @@
|
||||
drop table if exists gh_ost_test;
|
||||
create table gh_ost_test (
|
||||
id int auto_increment,
|
||||
i int not null,
|
||||
bi bigint not null,
|
||||
iu int unsigned not null,
|
||||
biu bigint unsigned not null,
|
||||
primary key(id)
|
||||
) auto_increment=1;
|
||||
|
||||
drop event if exists gh_ost_test;
|
||||
delimiter ;;
|
||||
create event gh_ost_test
|
||||
on schedule every 1 second
|
||||
starts current_timestamp
|
||||
ends current_timestamp + interval 60 second
|
||||
on completion not preserve
|
||||
enable
|
||||
do
|
||||
begin
|
||||
insert into gh_ost_test values (null, -2147483647, -9223372036854775807, 4294967295, 18446744073709551615);
|
||||
set @last_insert_id := cast(last_insert_id() as signed);
|
||||
update gh_ost_test set i=-2147483647+@last_insert_id, bi=-9223372036854775807+@last_insert_id, iu=4294967295-@last_insert_id, biu=18446744073709551615-@last_insert_id where id < @last_insert_id order by id desc limit 1;
|
||||
end ;;
|
1
localtests/unsigned-reorder/extra_args
Normal file
1
localtests/unsigned-reorder/extra_args
Normal file
@ -0,0 +1 @@
|
||||
--alter="change column iu iu int unsigned not null after id" --approve-renamed-columns
|
1
localtests/unsigned-reorder/ghost_columns
Normal file
1
localtests/unsigned-reorder/ghost_columns
Normal file
@ -0,0 +1 @@
|
||||
id, i, bi, iu, biu
|
1
localtests/unsigned-reorder/orig_columns
Normal file
1
localtests/unsigned-reorder/orig_columns
Normal file
@ -0,0 +1 @@
|
||||
id, i, bi, iu, biu
|
10
vendor/github.com/siddontang/go-mysql/replication/row_event.go
generated
vendored
10
vendor/github.com/siddontang/go-mysql/replication/row_event.go
generated
vendored
@ -591,7 +591,7 @@ func decodeBit(data []byte, nbits int, length int) (value int64, err error) {
|
||||
return
|
||||
}
|
||||
|
||||
func decodeTimestamp2(data []byte, dec uint16) (string, int, error) {
|
||||
func decodeTimestamp2(data []byte, dec uint16) (interface{}, int, error) {
|
||||
//get timestamp binary length
|
||||
n := int(4 + (dec+1)/2)
|
||||
sec := int64(binary.BigEndian.Uint32(data[0:4]))
|
||||
@ -609,13 +609,13 @@ func decodeTimestamp2(data []byte, dec uint16) (string, int, error) {
|
||||
return "0000-00-00 00:00:00", n, nil
|
||||
}
|
||||
|
||||
t := time.Unix(sec, usec*1000).UTC() // .UTC() converted by shlomi-noach
|
||||
return t.Format(TimeFormat), n, nil
|
||||
t := time.Unix(sec, usec*1000)
|
||||
return t, n, nil
|
||||
}
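Returning a time.Time here, rather than a pre-formatted UTC string, leaves rendering and timezone handling to the consumer. An illustrative sketch of what a caller might do; the function name and layout string are assumptions, not part of this commit, and "time" is assumed imported.

// Hypothetical caller-side formatting of the decoded value, rendered in UTC.
func formatBinlogTimestamp(sec, usec int64) string {
	return time.Unix(sec, usec*1000).UTC().Format("2006-01-02 15:04:05")
}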
|
||||
|
||||
const DATETIMEF_INT_OFS int64 = 0x8000000000
|
||||
|
||||
func decodeDatetime2(data []byte, dec uint16) (string, int, error) {
|
||||
func decodeDatetime2(data []byte, dec uint16) (interface{}, int, error) {
|
||||
//get datetime binary length
|
||||
n := int(5 + (dec+1)/2)
|
||||
|
||||
@ -657,7 +657,7 @@ func decodeDatetime2(data []byte, dec uint16) (string, int, error) {
|
||||
minute := int((hms >> 6) % (1 << 6))
|
||||
hour := int((hms >> 12))
|
||||
|
||||
return fmt.Sprintf("%04d-%02d-%02d %02d:%02d:%02d", year, month, day, hour, minute, second), n, nil
|
||||
return fmt.Sprintf("%04d-%02d-%02d %02d:%02d:%02d", year, month, day, hour, minute, second), n, nil // commented by Shlomi Noach. Yes I know about `git blame`
|
||||
}
|
||||
|
||||
const TIMEF_OFS int64 = 0x800000000000
|
||||
|
209
vendor/golang.org/x/text/encoding/charmap/charmap.go
generated
vendored
Normal file
209
vendor/golang.org/x/text/encoding/charmap/charmap.go
generated
vendored
Normal file
@ -0,0 +1,209 @@
|
||||
// Copyright 2013 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
//go:generate go run maketables.go
|
||||
|
||||
// Package charmap provides simple character encodings such as IBM Code Page 437
|
||||
// and Windows 1252.
|
||||
package charmap // import "golang.org/x/text/encoding/charmap"
|
||||
|
||||
import (
|
||||
"unicode/utf8"
|
||||
|
||||
"golang.org/x/text/encoding"
|
||||
"golang.org/x/text/encoding/internal"
|
||||
"golang.org/x/text/encoding/internal/identifier"
|
||||
"golang.org/x/text/transform"
|
||||
)
|
||||
|
||||
// These encodings vary only in the way clients should interpret them. Their
|
||||
// coded character set is identical and a single implementation can be shared.
|
||||
var (
|
||||
// ISO8859_6E is the ISO 8859-6E encoding.
|
||||
ISO8859_6E encoding.Encoding = &iso8859_6E
|
||||
|
||||
// ISO8859_6I is the ISO 8859-6I encoding.
|
||||
ISO8859_6I encoding.Encoding = &iso8859_6I
|
||||
|
||||
// ISO8859_8E is the ISO 8859-8E encoding.
|
||||
ISO8859_8E encoding.Encoding = &iso8859_8E
|
||||
|
||||
// ISO8859_8I is the ISO 8859-8I encoding.
|
||||
ISO8859_8I encoding.Encoding = &iso8859_8I
|
||||
|
||||
	iso8859_6E = internal.Encoding{
		ISO8859_6,
		"ISO-8859-6E",
		identifier.ISO88596E,
	}

	iso8859_6I = internal.Encoding{
		ISO8859_6,
		"ISO-8859-6I",
		identifier.ISO88596I,
	}

	iso8859_8E = internal.Encoding{
		ISO8859_8,
		"ISO-8859-8E",
		identifier.ISO88598E,
	}

	iso8859_8I = internal.Encoding{
		ISO8859_8,
		"ISO-8859-8I",
		identifier.ISO88598I,
	}
)

// All is a list of all defined encodings in this package.
var All = listAll

// TODO: implement these encodings, in order of importance.
// ASCII, ISO8859_1: Rather common. Close to Windows 1252.
// ISO8859_9: Close to Windows 1254.

// utf8Enc holds a rune's UTF-8 encoding in data[:len].
type utf8Enc struct {
	len  uint8
	data [3]byte
}

// charmap describes an 8-bit character set encoding.
type charmap struct {
	// name is the encoding's name.
	name string
	// mib is the encoding type of this encoder.
	mib identifier.MIB
	// asciiSuperset states whether the encoding is a superset of ASCII.
	asciiSuperset bool
	// low is the lower bound of the encoded byte for a non-ASCII rune. If
	// charmap.asciiSuperset is true then this will be 0x80, otherwise 0x00.
	low uint8
	// replacement is the encoded replacement character.
	replacement byte
	// decode is the map from encoded byte to UTF-8.
	decode [256]utf8Enc
	// encoding is the map from runes to encoded bytes. Each entry is a
	// uint32: the high 8 bits are the encoded byte and the low 24 bits are
	// the rune. The table entries are sorted by ascending rune.
	encode [256]uint32
}

func (m *charmap) NewDecoder() *encoding.Decoder {
	return &encoding.Decoder{Transformer: charmapDecoder{charmap: m}}
}

func (m *charmap) NewEncoder() *encoding.Encoder {
	return &encoding.Encoder{Transformer: charmapEncoder{charmap: m}}
}

func (m *charmap) String() string {
	return m.name
}

func (m *charmap) ID() (mib identifier.MIB, other string) {
	return m.mib, ""
}

// charmapDecoder implements transform.Transformer by decoding to UTF-8.
type charmapDecoder struct {
	transform.NopResetter
	charmap *charmap
}

func (m charmapDecoder) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
	for i, c := range src {
		if m.charmap.asciiSuperset && c < utf8.RuneSelf {
			if nDst >= len(dst) {
				err = transform.ErrShortDst
				break
			}
			dst[nDst] = c
			nDst++
			nSrc = i + 1
			continue
		}

		decode := &m.charmap.decode[c]
		n := int(decode.len)
		if nDst+n > len(dst) {
			err = transform.ErrShortDst
			break
		}
		// It's 15% faster to avoid calling copy for these tiny slices.
		for j := 0; j < n; j++ {
			dst[nDst] = decode.data[j]
			nDst++
		}
		nSrc = i + 1
	}
	return nDst, nSrc, err
}

// charmapEncoder implements transform.Transformer by encoding from UTF-8.
type charmapEncoder struct {
	transform.NopResetter
	charmap *charmap
}

func (m charmapEncoder) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
	r, size := rune(0), 0
loop:
	for nSrc < len(src) {
		if nDst >= len(dst) {
			err = transform.ErrShortDst
			break
		}
		r = rune(src[nSrc])

		// Decode a 1-byte rune.
		if r < utf8.RuneSelf {
			if m.charmap.asciiSuperset {
				nSrc++
				dst[nDst] = uint8(r)
				nDst++
				continue
			}
			size = 1

		} else {
			// Decode a multi-byte rune.
			r, size = utf8.DecodeRune(src[nSrc:])
			if size == 1 {
				// All valid runes of size 1 (those below utf8.RuneSelf) were
				// handled above. We have invalid UTF-8 or we haven't seen the
				// full character yet.
				if !atEOF && !utf8.FullRune(src[nSrc:]) {
					err = transform.ErrShortSrc
				} else {
					err = internal.RepertoireError(m.charmap.replacement)
				}
				break
			}
		}

		// Binary search in [low, high) for that rune in the m.charmap.encode table.
		for low, high := int(m.charmap.low), 0x100; ; {
			if low >= high {
				err = internal.RepertoireError(m.charmap.replacement)
				break loop
			}
			mid := (low + high) / 2
			got := m.charmap.encode[mid]
			gotRune := rune(got & (1<<24 - 1))
			if gotRune < r {
				low = mid + 1
			} else if gotRune > r {
				high = mid
			} else {
				dst[nDst] = byte(got >> 24)
				nDst++
				break
			}
		}
		nSrc += size
	}
	return nDst, nSrc, err
}
vendor/golang.org/x/text/encoding/charmap/charmap_test.go (generated, vendored, normal file, 45 lines)
@ -0,0 +1,45 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package charmap

import (
	"testing"

	"golang.org/x/text/encoding"
	"golang.org/x/text/encoding/internal"
	"golang.org/x/text/transform"
)

func dec(e encoding.Encoding) (dir string, t transform.Transformer, err error) {
	return "Decode", e.NewDecoder(), nil
}
func enc(e encoding.Encoding) (dir string, t transform.Transformer, err error) {
	return "Encode", e.NewEncoder(), internal.ErrASCIIReplacement
}

func TestNonRepertoire(t *testing.T) {
	testCases := []struct {
		init      func(e encoding.Encoding) (string, transform.Transformer, error)
		e         encoding.Encoding
		src, want string
	}{
		{dec, Windows1252, "\x81", "\ufffd"},

		{enc, Windows1252, "갂", ""},
		{enc, Windows1252, "a갂", "a"},
		{enc, Windows1252, "\u00E9갂", "\xE9"},
	}
	for _, tc := range testCases {
		dir, tr, wantErr := tc.init(tc.e)

		dst, _, err := transform.String(tr, tc.src)
		if err != wantErr {
			t.Errorf("%s %v(%q): got %v; want %v", dir, tc.e, tc.src, err, wantErr)
		}
		if got := string(dst); got != tc.want {
			t.Errorf("%s %v(%q):\ngot %q\nwant %q", dir, tc.e, tc.src, got, tc.want)
		}
	}
}
vendor/golang.org/x/text/encoding/charmap/maketables.go (generated, vendored, normal file, 524 lines)
@ -0,0 +1,524 @@
|
||||
// Copyright 2013 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// +build ignore
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"fmt"
|
||||
"log"
|
||||
"net/http"
|
||||
"sort"
|
||||
"strings"
|
||||
"unicode/utf8"
|
||||
|
||||
"golang.org/x/text/encoding"
|
||||
"golang.org/x/text/internal/gen"
|
||||
)
|
||||
|
||||
const ascii = "\x00\x01\x02\x03\x04\x05\x06\x07\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f" +
|
||||
"\x10\x11\x12\x13\x14\x15\x16\x17\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f" +
|
||||
` !"#$%&'()*+,-./0123456789:;<=>?` +
|
||||
`@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_` +
|
||||
"`abcdefghijklmnopqrstuvwxyz{|}~\u007f"
|
||||
|
||||
var encodings = []struct {
|
||||
name string
|
||||
mib string
|
||||
comment string
|
||||
varName string
|
||||
replacement byte
|
||||
mapping string
|
||||
}{
|
||||
{
|
||||
"IBM Code Page 437",
|
||||
"PC8CodePage437",
|
||||
"",
|
||||
"CodePage437",
|
||||
encoding.ASCIISub,
|
||||
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM437-2.1.2.ucm",
|
||||
},
|
||||
{
|
||||
"IBM Code Page 850",
|
||||
"PC850Multilingual",
|
||||
"",
|
||||
"CodePage850",
|
||||
encoding.ASCIISub,
|
||||
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM850-2.1.2.ucm",
|
||||
},
|
||||
{
|
||||
"IBM Code Page 852",
|
||||
"PCp852",
|
||||
"",
|
||||
"CodePage852",
|
||||
encoding.ASCIISub,
|
||||
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM852-2.1.2.ucm",
|
||||
},
|
||||
{
|
||||
"IBM Code Page 855",
|
||||
"IBM855",
|
||||
"",
|
||||
"CodePage855",
|
||||
encoding.ASCIISub,
|
||||
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM855-2.1.2.ucm",
|
||||
},
|
||||
{
|
||||
"Windows Code Page 858", // PC latin1 with Euro
|
||||
"IBM00858",
|
||||
"",
|
||||
"CodePage858",
|
||||
encoding.ASCIISub,
|
||||
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/windows-858-2000.ucm",
|
||||
},
|
||||
{
|
||||
"IBM Code Page 860",
|
||||
"IBM860",
|
||||
"",
|
||||
"CodePage860",
|
||||
encoding.ASCIISub,
|
||||
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM860-2.1.2.ucm",
|
||||
},
|
||||
{
|
||||
"IBM Code Page 862",
|
||||
"PC862LatinHebrew",
|
||||
"",
|
||||
"CodePage862",
|
||||
encoding.ASCIISub,
|
||||
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM862-2.1.2.ucm",
|
||||
},
|
||||
{
|
||||
"IBM Code Page 863",
|
||||
"IBM863",
|
||||
"",
|
||||
"CodePage863",
|
||||
encoding.ASCIISub,
|
||||
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM863-2.1.2.ucm",
|
||||
},
|
||||
{
|
||||
"IBM Code Page 865",
|
||||
"IBM865",
|
||||
"",
|
||||
"CodePage865",
|
||||
encoding.ASCIISub,
|
||||
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/glibc-IBM865-2.1.2.ucm",
|
||||
},
|
||||
{
|
||||
"IBM Code Page 866",
|
||||
"IBM866",
|
||||
"",
|
||||
"CodePage866",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-ibm866.txt",
|
||||
},
|
||||
{
|
||||
"ISO 8859-1",
|
||||
"ISOLatin1",
|
||||
"",
|
||||
"ISO8859_1",
|
||||
encoding.ASCIISub,
|
||||
"http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/iso-8859_1-1998.ucm",
|
||||
},
|
||||
{
|
||||
"ISO 8859-2",
|
||||
"ISOLatin2",
|
||||
"",
|
||||
"ISO8859_2",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-iso-8859-2.txt",
|
||||
},
|
||||
{
|
||||
"ISO 8859-3",
|
||||
"ISOLatin3",
|
||||
"",
|
||||
"ISO8859_3",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-iso-8859-3.txt",
|
||||
},
|
||||
{
|
||||
"ISO 8859-4",
|
||||
"ISOLatin4",
|
||||
"",
|
||||
"ISO8859_4",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-iso-8859-4.txt",
|
||||
},
|
||||
{
|
||||
"ISO 8859-5",
|
||||
"ISOLatinCyrillic",
|
||||
"",
|
||||
"ISO8859_5",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-iso-8859-5.txt",
|
||||
},
|
||||
{
|
||||
"ISO 8859-6",
|
||||
"ISOLatinArabic",
|
||||
"",
|
||||
"ISO8859_6,ISO8859_6E,ISO8859_6I",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-iso-8859-6.txt",
|
||||
},
|
||||
{
|
||||
"ISO 8859-7",
|
||||
"ISOLatinGreek",
|
||||
"",
|
||||
"ISO8859_7",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-iso-8859-7.txt",
|
||||
},
|
||||
{
|
||||
"ISO 8859-8",
|
||||
"ISOLatinHebrew",
|
||||
"",
|
||||
"ISO8859_8,ISO8859_8E,ISO8859_8I",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-iso-8859-8.txt",
|
||||
},
|
||||
{
|
||||
"ISO 8859-10",
|
||||
"ISOLatin6",
|
||||
"",
|
||||
"ISO8859_10",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-iso-8859-10.txt",
|
||||
},
|
||||
{
|
||||
"ISO 8859-13",
|
||||
"ISO885913",
|
||||
"",
|
||||
"ISO8859_13",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-iso-8859-13.txt",
|
||||
},
|
||||
{
|
||||
"ISO 8859-14",
|
||||
"ISO885914",
|
||||
"",
|
||||
"ISO8859_14",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-iso-8859-14.txt",
|
||||
},
|
||||
{
|
||||
"ISO 8859-15",
|
||||
"ISO885915",
|
||||
"",
|
||||
"ISO8859_15",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-iso-8859-15.txt",
|
||||
},
|
||||
{
|
||||
"ISO 8859-16",
|
||||
"ISO885916",
|
||||
"",
|
||||
"ISO8859_16",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-iso-8859-16.txt",
|
||||
},
|
||||
{
|
||||
"KOI8-R",
|
||||
"KOI8R",
|
||||
"",
|
||||
"KOI8R",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-koi8-r.txt",
|
||||
},
|
||||
{
|
||||
"KOI8-U",
|
||||
"KOI8U",
|
||||
"",
|
||||
"KOI8U",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-koi8-u.txt",
|
||||
},
|
||||
{
|
||||
"Macintosh",
|
||||
"Macintosh",
|
||||
"",
|
||||
"Macintosh",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-macintosh.txt",
|
||||
},
|
||||
{
|
||||
"Macintosh Cyrillic",
|
||||
"MacintoshCyrillic",
|
||||
"",
|
||||
"MacintoshCyrillic",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-x-mac-cyrillic.txt",
|
||||
},
|
||||
{
|
||||
"Windows 874",
|
||||
"Windows874",
|
||||
"",
|
||||
"Windows874",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-windows-874.txt",
|
||||
},
|
||||
{
|
||||
"Windows 1250",
|
||||
"Windows1250",
|
||||
"",
|
||||
"Windows1250",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-windows-1250.txt",
|
||||
},
|
||||
{
|
||||
"Windows 1251",
|
||||
"Windows1251",
|
||||
"",
|
||||
"Windows1251",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-windows-1251.txt",
|
||||
},
|
||||
{
|
||||
"Windows 1252",
|
||||
"Windows1252",
|
||||
"",
|
||||
"Windows1252",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-windows-1252.txt",
|
||||
},
|
||||
{
|
||||
"Windows 1253",
|
||||
"Windows1253",
|
||||
"",
|
||||
"Windows1253",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-windows-1253.txt",
|
||||
},
|
||||
{
|
||||
"Windows 1254",
|
||||
"Windows1254",
|
||||
"",
|
||||
"Windows1254",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-windows-1254.txt",
|
||||
},
|
||||
{
|
||||
"Windows 1255",
|
||||
"Windows1255",
|
||||
"",
|
||||
"Windows1255",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-windows-1255.txt",
|
||||
},
|
||||
{
|
||||
"Windows 1256",
|
||||
"Windows1256",
|
||||
"",
|
||||
"Windows1256",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-windows-1256.txt",
|
||||
},
|
||||
{
|
||||
"Windows 1257",
|
||||
"Windows1257",
|
||||
"",
|
||||
"Windows1257",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-windows-1257.txt",
|
||||
},
|
||||
{
|
||||
"Windows 1258",
|
||||
"Windows1258",
|
||||
"",
|
||||
"Windows1258",
|
||||
encoding.ASCIISub,
|
||||
"http://encoding.spec.whatwg.org/index-windows-1258.txt",
|
||||
},
|
||||
{
|
||||
"X-User-Defined",
|
||||
"XUserDefined",
|
||||
"It is defined at http://encoding.spec.whatwg.org/#x-user-defined",
|
||||
"XUserDefined",
|
||||
encoding.ASCIISub,
|
||||
ascii +
|
||||
"\uf780\uf781\uf782\uf783\uf784\uf785\uf786\uf787" +
|
||||
"\uf788\uf789\uf78a\uf78b\uf78c\uf78d\uf78e\uf78f" +
|
||||
"\uf790\uf791\uf792\uf793\uf794\uf795\uf796\uf797" +
|
||||
"\uf798\uf799\uf79a\uf79b\uf79c\uf79d\uf79e\uf79f" +
|
||||
"\uf7a0\uf7a1\uf7a2\uf7a3\uf7a4\uf7a5\uf7a6\uf7a7" +
|
||||
"\uf7a8\uf7a9\uf7aa\uf7ab\uf7ac\uf7ad\uf7ae\uf7af" +
|
||||
"\uf7b0\uf7b1\uf7b2\uf7b3\uf7b4\uf7b5\uf7b6\uf7b7" +
|
||||
"\uf7b8\uf7b9\uf7ba\uf7bb\uf7bc\uf7bd\uf7be\uf7bf" +
|
||||
"\uf7c0\uf7c1\uf7c2\uf7c3\uf7c4\uf7c5\uf7c6\uf7c7" +
|
||||
"\uf7c8\uf7c9\uf7ca\uf7cb\uf7cc\uf7cd\uf7ce\uf7cf" +
|
||||
"\uf7d0\uf7d1\uf7d2\uf7d3\uf7d4\uf7d5\uf7d6\uf7d7" +
|
||||
"\uf7d8\uf7d9\uf7da\uf7db\uf7dc\uf7dd\uf7de\uf7df" +
|
||||
"\uf7e0\uf7e1\uf7e2\uf7e3\uf7e4\uf7e5\uf7e6\uf7e7" +
|
||||
"\uf7e8\uf7e9\uf7ea\uf7eb\uf7ec\uf7ed\uf7ee\uf7ef" +
|
||||
"\uf7f0\uf7f1\uf7f2\uf7f3\uf7f4\uf7f5\uf7f6\uf7f7" +
|
||||
"\uf7f8\uf7f9\uf7fa\uf7fb\uf7fc\uf7fd\uf7fe\uf7ff",
|
||||
},
|
||||
}
|
||||
|
||||
func getWHATWG(url string) string {
|
||||
res, err := http.Get(url)
|
||||
if err != nil {
|
||||
log.Fatalf("%q: Get: %v", url, err)
|
||||
}
|
||||
defer res.Body.Close()
|
||||
|
||||
mapping := make([]rune, 128)
|
||||
for i := range mapping {
|
||||
mapping[i] = '\ufffd'
|
||||
}
|
||||
|
||||
scanner := bufio.NewScanner(res.Body)
|
||||
for scanner.Scan() {
|
||||
s := strings.TrimSpace(scanner.Text())
|
||||
if s == "" || s[0] == '#' {
|
||||
continue
|
||||
}
|
||||
x, y := 0, 0
|
||||
if _, err := fmt.Sscanf(s, "%d\t0x%x", &x, &y); err != nil {
|
||||
log.Fatalf("could not parse %q", s)
|
||||
}
|
||||
if x < 0 || 128 <= x {
|
||||
log.Fatalf("code %d is out of range", x)
|
||||
}
|
||||
if 0x80 <= y && y < 0xa0 {
|
||||
// We diverge from the WHATWG spec by mapping control characters
|
||||
// in the range [0x80, 0xa0) to U+FFFD.
|
||||
continue
|
||||
}
|
||||
mapping[x] = rune(y)
|
||||
}
|
||||
return ascii + string(mapping)
|
||||
}
|
||||
|
||||
func getUCM(url string) string {
|
||||
res, err := http.Get(url)
|
||||
if err != nil {
|
||||
log.Fatalf("%q: Get: %v", url, err)
|
||||
}
|
||||
defer res.Body.Close()
|
||||
|
||||
mapping := make([]rune, 256)
|
||||
for i := range mapping {
|
||||
mapping[i] = '\ufffd'
|
||||
}
|
||||
|
||||
charsFound := 0
|
||||
scanner := bufio.NewScanner(res.Body)
|
||||
for scanner.Scan() {
|
||||
s := strings.TrimSpace(scanner.Text())
|
||||
if s == "" || s[0] == '#' {
|
||||
continue
|
||||
}
|
||||
var c byte
|
||||
var r rune
|
||||
if _, err := fmt.Sscanf(s, `<U%x> \x%x |0`, &r, &c); err != nil {
|
||||
continue
|
||||
}
|
||||
mapping[c] = r
|
||||
charsFound++
|
||||
}
|
||||
|
||||
if charsFound < 200 {
|
||||
log.Fatalf("%q: only %d characters found (wrong page format?)", url, charsFound)
|
||||
}
|
||||
|
||||
return string(mapping)
|
||||
}
|
||||
|
||||
func main() {
|
||||
mibs := map[string]bool{}
|
||||
all := []string{}
|
||||
|
||||
w := gen.NewCodeWriter()
|
||||
defer w.WriteGoFile("tables.go", "charmap")
|
||||
|
||||
printf := func(s string, a ...interface{}) { fmt.Fprintf(w, s, a...) }
|
||||
|
||||
printf("import (\n")
|
||||
printf("\t\"golang.org/x/text/encoding\"\n")
|
||||
printf("\t\"golang.org/x/text/encoding/internal/identifier\"\n")
|
||||
printf(")\n\n")
|
||||
for _, e := range encodings {
|
||||
varNames := strings.Split(e.varName, ",")
|
||||
all = append(all, varNames...)
|
||||
varName := varNames[0]
|
||||
switch {
|
||||
case strings.HasPrefix(e.mapping, "http://encoding.spec.whatwg.org/"):
|
||||
e.mapping = getWHATWG(e.mapping)
|
||||
case strings.HasPrefix(e.mapping, "http://source.icu-project.org/repos/icu/data/trunk/charset/data/ucm/"):
|
||||
e.mapping = getUCM(e.mapping)
|
||||
}
|
||||
|
||||
asciiSuperset, low := strings.HasPrefix(e.mapping, ascii), 0x00
|
||||
if asciiSuperset {
|
||||
low = 0x80
|
||||
}
|
||||
lvn := 1
|
||||
if strings.HasPrefix(varName, "ISO") || strings.HasPrefix(varName, "KOI") {
|
||||
lvn = 3
|
||||
}
|
||||
lowerVarName := strings.ToLower(varName[:lvn]) + varName[lvn:]
|
||||
printf("// %s is the %s encoding.\n", varName, e.name)
|
||||
if e.comment != "" {
|
||||
printf("//\n// %s\n", e.comment)
|
||||
}
|
||||
printf("var %s encoding.Encoding = &%s\n\nvar %s = charmap{\nname: %q,\n",
|
||||
varName, lowerVarName, lowerVarName, e.name)
|
||||
if mibs[e.mib] {
|
||||
log.Fatalf("MIB type %q declared multiple times.", e.mib)
|
||||
}
|
||||
printf("mib: identifier.%s,\n", e.mib)
|
||||
printf("asciiSuperset: %t,\n", asciiSuperset)
|
||||
printf("low: 0x%02x,\n", low)
|
||||
printf("replacement: 0x%02x,\n", e.replacement)
|
||||
|
||||
printf("decode: [256]utf8Enc{\n")
|
||||
i, backMapping := 0, map[rune]byte{}
|
||||
for _, c := range e.mapping {
|
||||
if _, ok := backMapping[c]; !ok && c != utf8.RuneError {
|
||||
backMapping[c] = byte(i)
|
||||
}
|
||||
var buf [8]byte
|
||||
n := utf8.EncodeRune(buf[:], c)
|
||||
if n > 3 {
|
||||
panic(fmt.Sprintf("rune %q (%U) is too long", c, c))
|
||||
}
|
||||
printf("{%d,[3]byte{0x%02x,0x%02x,0x%02x}},", n, buf[0], buf[1], buf[2])
|
||||
if i%2 == 1 {
|
||||
printf("\n")
|
||||
}
|
||||
i++
|
||||
}
|
||||
printf("},\n")
|
||||
|
||||
printf("encode: [256]uint32{\n")
|
||||
encode := make([]uint32, 0, 256)
|
||||
for c, i := range backMapping {
|
||||
encode = append(encode, uint32(i)<<24|uint32(c))
|
||||
}
|
||||
sort.Sort(byRune(encode))
|
||||
for len(encode) < cap(encode) {
|
||||
encode = append(encode, encode[len(encode)-1])
|
||||
}
|
||||
for i, enc := range encode {
|
||||
printf("0x%08x,", enc)
|
||||
if i%8 == 7 {
|
||||
printf("\n")
|
||||
}
|
||||
}
|
||||
printf("},\n}\n")
|
||||
|
||||
// Add an estimate of the size of a single charmap{} struct value, which
|
||||
// includes two 256 elem arrays of 4 bytes and some extra fields, which
|
||||
// align to 3 uint64s on 64-bit architectures.
|
||||
w.Size += 2*4*256 + 3*8
|
||||
}
|
||||
// TODO: add proper line breaking.
|
||||
printf("var listAll = []encoding.Encoding{\n%s,\n}\n\n", strings.Join(all, ",\n"))
|
||||
}
|
||||
|
||||
type byRune []uint32
|
||||
|
||||
func (b byRune) Len() int { return len(b) }
|
||||
func (b byRune) Less(i, j int) bool { return b[i]&0xffffff < b[j]&0xffffff }
|
||||
func (b byRune) Swap(i, j int) { b[i], b[j] = b[j], b[i] }
|
vendor/golang.org/x/text/encoding/charmap/tables.go (generated, vendored, normal file, 6706 lines)
File diff suppressed because it is too large
vendor/golang.org/x/text/encoding/encoding.go (generated, vendored, normal file, 335 lines)
@ -0,0 +1,335 @@
|
||||
// Copyright 2013 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// Package encoding defines an interface for character encodings, such as Shift
|
||||
// JIS and Windows 1252, that can convert to and from UTF-8.
|
||||
//
|
||||
// Encoding implementations are provided in other packages, such as
|
||||
// golang.org/x/text/encoding/charmap and
|
||||
// golang.org/x/text/encoding/japanese.
|
||||
package encoding // import "golang.org/x/text/encoding"
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"io"
|
||||
"strconv"
|
||||
"unicode/utf8"
|
||||
|
||||
"golang.org/x/text/encoding/internal/identifier"
|
||||
"golang.org/x/text/transform"
|
||||
)
|
||||
|
||||
// TODO:
|
||||
// - There seems to be some inconsistency in when decoders return errors
|
||||
// and when not. Also documentation seems to suggest they shouldn't return
|
||||
// errors at all (except for UTF-16).
|
||||
// - Encoders seem to rely on or at least benefit from the input being in NFC
|
||||
// normal form. Perhaps add an example how users could prepare their output.
|
||||
|
||||
// Encoding is a character set encoding that can be transformed to and from
|
||||
// UTF-8.
|
||||
type Encoding interface {
|
||||
// NewDecoder returns a Decoder.
|
||||
NewDecoder() *Decoder
|
||||
|
||||
// NewEncoder returns an Encoder.
|
||||
NewEncoder() *Encoder
|
||||
}
|
||||
|
||||
// A Decoder converts bytes to UTF-8. It implements transform.Transformer.
|
||||
//
|
||||
// Transforming source bytes that are not of that encoding will not result in an
|
||||
// error per se. Each byte that cannot be transcoded will be represented in the
|
||||
// output by the UTF-8 encoding of '\uFFFD', the replacement rune.
|
||||
type Decoder struct {
|
||||
transform.Transformer
|
||||
|
||||
// This forces external creators of Decoders to use names in struct
|
||||
// initializers, allowing for future extendibility without having to break
|
||||
// code.
|
||||
_ struct{}
|
||||
}
|
||||
|
||||
// Bytes converts the given encoded bytes to UTF-8. It returns the converted
|
||||
// bytes or nil, err if any error occurred.
|
||||
func (d *Decoder) Bytes(b []byte) ([]byte, error) {
|
||||
b, _, err := transform.Bytes(d, b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b, nil
|
||||
}
|
||||
|
||||
// String converts the given encoded string to UTF-8. It returns the converted
|
||||
// string or "", err if any error occurred.
|
||||
func (d *Decoder) String(s string) (string, error) {
|
||||
s, _, err := transform.String(d, s)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
return s, nil
|
||||
}
|
||||
|
||||
// Reader wraps another Reader to decode its bytes.
|
||||
//
|
||||
// The Decoder may not be used for any other operation as long as the returned
|
||||
// Reader is in use.
|
||||
func (d *Decoder) Reader(r io.Reader) io.Reader {
|
||||
return transform.NewReader(r, d)
|
||||
}
|
||||
|
||||
// An Encoder converts bytes from UTF-8. It implements transform.Transformer.
|
||||
//
|
||||
// Each rune that cannot be transcoded will result in an error. In this case,
|
||||
// the transform will consume all source byte up to, not including the offending
|
||||
// rune. Transforming source bytes that are not valid UTF-8 will be replaced by
|
||||
// `\uFFFD`. To return early with an error instead, use transform.Chain to
|
||||
// preprocess the data with a UTF8Validator.
|
||||
type Encoder struct {
|
||||
transform.Transformer
|
||||
|
||||
// This forces external creators of Encoders to use names in struct
|
||||
// initializers, allowing for future extendibility without having to break
|
||||
// code.
|
||||
_ struct{}
|
||||
}
|
||||
|
||||
// Bytes converts bytes from UTF-8. It returns the converted bytes or nil, err if
|
||||
// any error occurred.
|
||||
func (e *Encoder) Bytes(b []byte) ([]byte, error) {
|
||||
b, _, err := transform.Bytes(e, b)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b, nil
|
||||
}
|
||||
|
||||
// String converts a string from UTF-8. It returns the converted string or
|
||||
// "", err if any error occurred.
|
||||
func (e *Encoder) String(s string) (string, error) {
|
||||
s, _, err := transform.String(e, s)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
return s, nil
|
||||
}
|
||||
|
||||
// Writer wraps another Writer to encode its UTF-8 output.
|
||||
//
|
||||
// The Encoder may not be used for any other operation as long as the returned
|
||||
// Writer is in use.
|
||||
func (e *Encoder) Writer(w io.Writer) io.Writer {
|
||||
return transform.NewWriter(w, e)
|
||||
}
|
||||
|
||||
// ASCIISub is the ASCII substitute character, as recommended by
|
||||
// http://unicode.org/reports/tr36/#Text_Comparison
|
||||
const ASCIISub = '\x1a'
|
||||
|
||||
// Nop is the nop encoding. Its transformed bytes are the same as the source
|
||||
// bytes; it does not replace invalid UTF-8 sequences.
|
||||
var Nop Encoding = nop{}
|
||||
|
||||
type nop struct{}
|
||||
|
||||
func (nop) NewDecoder() *Decoder {
|
||||
return &Decoder{Transformer: transform.Nop}
|
||||
}
|
||||
func (nop) NewEncoder() *Encoder {
|
||||
return &Encoder{Transformer: transform.Nop}
|
||||
}
|
||||
|
||||
// Replacement is the replacement encoding. Decoding from the replacement
|
||||
// encoding yields a single '\uFFFD' replacement rune. Encoding from UTF-8 to
|
||||
// the replacement encoding yields the same as the source bytes except that
|
||||
// invalid UTF-8 is converted to '\uFFFD'.
|
||||
//
|
||||
// It is defined at http://encoding.spec.whatwg.org/#replacement
|
||||
var Replacement Encoding = replacement{}
|
||||
|
||||
type replacement struct{}
|
||||
|
||||
func (replacement) NewDecoder() *Decoder {
|
||||
return &Decoder{Transformer: replacementDecoder{}}
|
||||
}
|
||||
|
||||
func (replacement) NewEncoder() *Encoder {
|
||||
return &Encoder{Transformer: replacementEncoder{}}
|
||||
}
|
||||
|
||||
func (replacement) ID() (mib identifier.MIB, other string) {
|
||||
return identifier.Replacement, ""
|
||||
}
|
||||
|
||||
type replacementDecoder struct{ transform.NopResetter }
|
||||
|
||||
func (replacementDecoder) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
|
||||
if len(dst) < 3 {
|
||||
return 0, 0, transform.ErrShortDst
|
||||
}
|
||||
if atEOF {
|
||||
const fffd = "\ufffd"
|
||||
dst[0] = fffd[0]
|
||||
dst[1] = fffd[1]
|
||||
dst[2] = fffd[2]
|
||||
nDst = 3
|
||||
}
|
||||
return nDst, len(src), nil
|
||||
}
|
||||
|
||||
type replacementEncoder struct{ transform.NopResetter }
|
||||
|
||||
func (replacementEncoder) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
|
||||
r, size := rune(0), 0
|
||||
|
||||
for ; nSrc < len(src); nSrc += size {
|
||||
r = rune(src[nSrc])
|
||||
|
||||
// Decode a 1-byte rune.
|
||||
if r < utf8.RuneSelf {
|
||||
size = 1
|
||||
|
||||
} else {
|
||||
// Decode a multi-byte rune.
|
||||
r, size = utf8.DecodeRune(src[nSrc:])
|
||||
if size == 1 {
|
||||
// All valid runes of size 1 (those below utf8.RuneSelf) were
|
||||
// handled above. We have invalid UTF-8 or we haven't seen the
|
||||
// full character yet.
|
||||
if !atEOF && !utf8.FullRune(src[nSrc:]) {
|
||||
err = transform.ErrShortSrc
|
||||
break
|
||||
}
|
||||
r = '\ufffd'
|
||||
}
|
||||
}
|
||||
|
||||
if nDst+utf8.RuneLen(r) > len(dst) {
|
||||
err = transform.ErrShortDst
|
||||
break
|
||||
}
|
||||
nDst += utf8.EncodeRune(dst[nDst:], r)
|
||||
}
|
||||
return nDst, nSrc, err
|
||||
}
|
||||
|
||||
// HTMLEscapeUnsupported wraps encoders to replace source runes outside the
|
||||
// repertoire of the destination encoding with HTML escape sequences.
|
||||
//
|
||||
// This wrapper exists to comply to URL and HTML forms requiring a
|
||||
// non-terminating legacy encoder. The produced sequences may lead to data
|
||||
// loss as they are indistinguishable from legitimate input. To avoid this
|
||||
// issue, use UTF-8 encodings whenever possible.
|
||||
func HTMLEscapeUnsupported(e *Encoder) *Encoder {
|
||||
return &Encoder{Transformer: &errorHandler{e, errorToHTML}}
|
||||
}
|
||||
|
||||
// ReplaceUnsupported wraps encoders to replace source runes outside the
|
||||
// repertoire of the destination encoding with an encoding-specific
|
||||
// replacement.
|
||||
//
|
||||
// This wrapper is only provided for backwards compatibility and legacy
|
||||
// handling. Its use is strongly discouraged. Use UTF-8 whenever possible.
|
||||
func ReplaceUnsupported(e *Encoder) *Encoder {
|
||||
return &Encoder{Transformer: &errorHandler{e, errorToReplacement}}
|
||||
}
|
||||
|
||||
type errorHandler struct {
|
||||
*Encoder
|
||||
handler func(dst []byte, r rune, err repertoireError) (n int, ok bool)
|
||||
}
|
||||
|
||||
// TODO: consider making this error public in some form.
|
||||
type repertoireError interface {
|
||||
Replacement() byte
|
||||
}
|
||||
|
||||
func (h errorHandler) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
|
||||
nDst, nSrc, err = h.Transformer.Transform(dst, src, atEOF)
|
||||
for err != nil {
|
||||
rerr, ok := err.(repertoireError)
|
||||
if !ok {
|
||||
return nDst, nSrc, err
|
||||
}
|
||||
r, sz := utf8.DecodeRune(src[nSrc:])
|
||||
n, ok := h.handler(dst[nDst:], r, rerr)
|
||||
if !ok {
|
||||
return nDst, nSrc, transform.ErrShortDst
|
||||
}
|
||||
err = nil
|
||||
nDst += n
|
||||
if nSrc += sz; nSrc < len(src) {
|
||||
var dn, sn int
|
||||
dn, sn, err = h.Transformer.Transform(dst[nDst:], src[nSrc:], atEOF)
|
||||
nDst += dn
|
||||
nSrc += sn
|
||||
}
|
||||
}
|
||||
return nDst, nSrc, err
|
||||
}
|
||||
|
||||
func errorToHTML(dst []byte, r rune, err repertoireError) (n int, ok bool) {
|
||||
buf := [8]byte{}
|
||||
b := strconv.AppendUint(buf[:0], uint64(r), 10)
|
||||
if n = len(b) + len("&#;"); n >= len(dst) {
|
||||
return 0, false
|
||||
}
|
||||
dst[0] = '&'
|
||||
dst[1] = '#'
|
||||
dst[copy(dst[2:], b)+2] = ';'
|
||||
return n, true
|
||||
}
|
||||
|
||||
func errorToReplacement(dst []byte, r rune, err repertoireError) (n int, ok bool) {
|
||||
if len(dst) == 0 {
|
||||
return 0, false
|
||||
}
|
||||
dst[0] = err.Replacement()
|
||||
return 1, true
|
||||
}
|
||||
|
||||
// ErrInvalidUTF8 means that a transformer encountered invalid UTF-8.
|
||||
var ErrInvalidUTF8 = errors.New("encoding: invalid UTF-8")
|
||||
|
||||
// UTF8Validator is a transformer that returns ErrInvalidUTF8 on the first
|
||||
// input byte that is not valid UTF-8.
|
||||
var UTF8Validator transform.Transformer = utf8Validator{}
|
||||
|
||||
type utf8Validator struct{ transform.NopResetter }
|
||||
|
||||
func (utf8Validator) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
|
||||
n := len(src)
|
||||
if n > len(dst) {
|
||||
n = len(dst)
|
||||
}
|
||||
for i := 0; i < n; {
|
||||
if c := src[i]; c < utf8.RuneSelf {
|
||||
dst[i] = c
|
||||
i++
|
||||
continue
|
||||
}
|
||||
_, size := utf8.DecodeRune(src[i:])
|
||||
if size == 1 {
|
||||
// All valid runes of size 1 (those below utf8.RuneSelf) were
|
||||
// handled above. We have invalid UTF-8 or we haven't seen the
|
||||
// full character yet.
|
||||
err = ErrInvalidUTF8
|
||||
if !atEOF && !utf8.FullRune(src[i:]) {
|
||||
err = transform.ErrShortSrc
|
||||
}
|
||||
return i, i, err
|
||||
}
|
||||
if i+size > len(dst) {
|
||||
return i, i, transform.ErrShortDst
|
||||
}
|
||||
for ; size > 0; size-- {
|
||||
dst[i] = src[i]
|
||||
i++
|
||||
}
|
||||
}
|
||||
if len(src) > len(dst) {
|
||||
err = transform.ErrShortDst
|
||||
}
|
||||
return n, n, err
|
||||
}
|
vendor/golang.org/x/text/encoding/encoding_test.go (generated, vendored, normal file, 1128 lines)
File diff suppressed because it is too large
vendor/golang.org/x/text/encoding/example_test.go (generated, vendored, normal file, 42 lines)
@ -0,0 +1,42 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package encoding_test

import (
	"fmt"
	"io"
	"os"
	"strings"

	"golang.org/x/text/encoding"
	"golang.org/x/text/encoding/charmap"
	"golang.org/x/text/encoding/unicode"
	"golang.org/x/text/transform"
)

func ExampleDecodeWindows1252() {
	sr := strings.NewReader("Gar\xe7on !")
	tr := charmap.Windows1252.NewDecoder().Reader(sr)
	io.Copy(os.Stdout, tr)
	// Output: Garçon !
}

func ExampleUTF8Validator() {
	for i := 0; i < 2; i++ {
		var transformer transform.Transformer
		transformer = unicode.UTF16(unicode.BigEndian, unicode.IgnoreBOM).NewEncoder()
		if i == 1 {
			transformer = transform.Chain(encoding.UTF8Validator, transformer)
		}
		dst := make([]byte, 256)
		src := []byte("abc\xffxyz") // src is invalid UTF-8.
		nDst, nSrc, err := transformer.Transform(dst, src, true)
		fmt.Printf("i=%d: produced %q, consumed %q, error %v\n",
			i, dst[:nDst], src[:nSrc], err)
	}
	// Output:
	// i=0: produced "\x00a\x00b\x00c\xff\xfd\x00x\x00y\x00z", consumed "abc\xffxyz", error <nil>
	// i=1: produced "\x00a\x00b\x00c", consumed "abc", error encoding: invalid UTF-8
}
vendor/golang.org/x/text/encoding/htmlindex/gen.go (generated, vendored, normal file, 167 lines)
@ -0,0 +1,167 @@
|
||||
// Copyright 2015 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// +build ignore
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"log"
|
||||
"strings"
|
||||
|
||||
"golang.org/x/text/internal/gen"
|
||||
)
|
||||
|
||||
type group struct {
|
||||
Encodings []struct {
|
||||
Labels []string
|
||||
Name string
|
||||
}
|
||||
}
|
||||
|
||||
func main() {
|
||||
gen.Init()
|
||||
|
||||
r := gen.Open("http://www.w3.org/TR", "w3", "encoding/indexes/encodings.json")
|
||||
var groups []group
|
||||
if err := json.NewDecoder(r).Decode(&groups); err != nil {
|
||||
log.Fatalf("Error reading encodings.json: %v", err)
|
||||
}
|
||||
|
||||
w := &bytes.Buffer{}
|
||||
fmt.Fprintln(w, "type htmlEncoding byte")
|
||||
fmt.Fprintln(w, "const (")
|
||||
for i, g := range groups {
|
||||
for _, e := range g.Encodings {
|
||||
name := consts[e.Name]
|
||||
if name == "" {
|
||||
log.Fatalf("No const defined for %s.", e.Name)
|
||||
}
|
||||
if i == 0 {
|
||||
fmt.Fprintf(w, "%s htmlEncoding = iota\n", name)
|
||||
} else {
|
||||
fmt.Fprintf(w, "%s\n", name)
|
||||
}
|
||||
}
|
||||
}
|
||||
fmt.Fprintln(w, "numEncodings")
|
||||
fmt.Fprint(w, ")\n\n")
|
||||
|
||||
fmt.Fprintln(w, "var canonical = [numEncodings]string{")
|
||||
for _, g := range groups {
|
||||
for _, e := range g.Encodings {
|
||||
fmt.Fprintf(w, "%q,\n", e.Name)
|
||||
}
|
||||
}
|
||||
fmt.Fprint(w, "}\n\n")
|
||||
|
||||
fmt.Fprintln(w, "var nameMap = map[string]htmlEncoding{")
|
||||
for _, g := range groups {
|
||||
for _, e := range g.Encodings {
|
||||
for _, l := range e.Labels {
|
||||
fmt.Fprintf(w, "%q: %s,\n", l, consts[e.Name])
|
||||
}
|
||||
}
|
||||
}
|
||||
fmt.Fprint(w, "}\n\n")
|
||||
|
||||
var tags []string
|
||||
fmt.Fprintln(w, "var localeMap = []htmlEncoding{")
|
||||
for _, loc := range locales {
|
||||
tags = append(tags, loc.tag)
|
||||
fmt.Fprintf(w, "%s, // %s \n", consts[loc.name], loc.tag)
|
||||
}
|
||||
fmt.Fprint(w, "}\n\n")
|
||||
|
||||
fmt.Fprintf(w, "const locales = %q\n", strings.Join(tags, " "))
|
||||
|
||||
gen.WriteGoFile("tables.go", "htmlindex", w.Bytes())
|
||||
}
|
||||
|
||||
// consts maps canonical encoding name to internal constant.
|
||||
var consts = map[string]string{
|
||||
"utf-8": "utf8",
|
||||
"ibm866": "ibm866",
|
||||
"iso-8859-2": "iso8859_2",
|
||||
"iso-8859-3": "iso8859_3",
|
||||
"iso-8859-4": "iso8859_4",
|
||||
"iso-8859-5": "iso8859_5",
|
||||
"iso-8859-6": "iso8859_6",
|
||||
"iso-8859-7": "iso8859_7",
|
||||
"iso-8859-8": "iso8859_8",
|
||||
"iso-8859-8-i": "iso8859_8I",
|
||||
"iso-8859-10": "iso8859_10",
|
||||
"iso-8859-13": "iso8859_13",
|
||||
"iso-8859-14": "iso8859_14",
|
||||
"iso-8859-15": "iso8859_15",
|
||||
"iso-8859-16": "iso8859_16",
|
||||
"koi8-r": "koi8r",
|
||||
"koi8-u": "koi8u",
|
||||
"macintosh": "macintosh",
|
||||
"windows-874": "windows874",
|
||||
"windows-1250": "windows1250",
|
||||
"windows-1251": "windows1251",
|
||||
"windows-1252": "windows1252",
|
||||
"windows-1253": "windows1253",
|
||||
"windows-1254": "windows1254",
|
||||
"windows-1255": "windows1255",
|
||||
"windows-1256": "windows1256",
|
||||
"windows-1257": "windows1257",
|
||||
"windows-1258": "windows1258",
|
||||
"x-mac-cyrillic": "macintoshCyrillic",
|
||||
"gbk": "gbk",
|
||||
"gb18030": "gb18030",
|
||||
// "hz-gb-2312": "hzgb2312", // Was removed from WhatWG
|
||||
"big5": "big5",
|
||||
"euc-jp": "eucjp",
|
||||
"iso-2022-jp": "iso2022jp",
|
||||
"shift_jis": "shiftJIS",
|
||||
"euc-kr": "euckr",
|
||||
"replacement": "replacement",
|
||||
"utf-16be": "utf16be",
|
||||
"utf-16le": "utf16le",
|
||||
"x-user-defined": "xUserDefined",
|
||||
}
|
||||
|
||||
// locales is taken from
|
||||
// https://html.spec.whatwg.org/multipage/syntax.html#encoding-sniffing-algorithm.
|
||||
var locales = []struct{ tag, name string }{
|
||||
{"und", "windows-1252"}, // The default value.
|
||||
{"ar", "windows-1256"},
|
||||
{"ba", "windows-1251"},
|
||||
{"be", "windows-1251"},
|
||||
{"bg", "windows-1251"},
|
||||
{"cs", "windows-1250"},
|
||||
{"el", "iso-8859-7"},
|
||||
{"et", "windows-1257"},
|
||||
{"fa", "windows-1256"},
|
||||
{"he", "windows-1255"},
|
||||
{"hr", "windows-1250"},
|
||||
{"hu", "iso-8859-2"},
|
||||
{"ja", "shift_jis"},
|
||||
{"kk", "windows-1251"},
|
||||
{"ko", "euc-kr"},
|
||||
{"ku", "windows-1254"},
|
||||
{"ky", "windows-1251"},
|
||||
{"lt", "windows-1257"},
|
||||
{"lv", "windows-1257"},
|
||||
{"mk", "windows-1251"},
|
||||
{"pl", "iso-8859-2"},
|
||||
{"ru", "windows-1251"},
|
||||
{"sah", "windows-1251"},
|
||||
{"sk", "windows-1250"},
|
||||
{"sl", "iso-8859-2"},
|
||||
{"sr", "windows-1251"},
|
||||
{"tg", "windows-1251"},
|
||||
{"th", "windows-874"},
|
||||
{"tr", "windows-1254"},
|
||||
{"tt", "windows-1251"},
|
||||
{"uk", "windows-1251"},
|
||||
{"vi", "windows-1258"},
|
||||
{"zh-hans", "gb18030"},
|
||||
{"zh-hant", "big5"},
|
||||
}
|
vendor/golang.org/x/text/encoding/htmlindex/htmlindex.go (generated, vendored, normal file, 86 lines)
@ -0,0 +1,86 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

//go:generate go run gen.go

// Package htmlindex maps character set encoding names to Encodings as
// recommended by the W3C for use in HTML 5. See http://www.w3.org/TR/encoding.
package htmlindex

// TODO: perhaps have a "bare" version of the index (used by this package) that
// is not pre-loaded with all encodings. Global variables in encodings prevent
// the linker from being able to purge unneeded tables. This means that
// referencing all encodings, as this package does for the default index, links
// in all encodings unconditionally.
//
// This issue can be solved by either solving the linking issue (see
// https://github.com/golang/go/issues/6330) or refactoring the encoding tables
// (e.g. moving the tables to internal packages that do not use global
// variables).

// TODO: allow canonicalizing names

import (
	"errors"
	"strings"
	"sync"

	"golang.org/x/text/encoding"
	"golang.org/x/text/encoding/internal/identifier"
	"golang.org/x/text/language"
)

var (
	errInvalidName = errors.New("htmlindex: invalid encoding name")
	errUnknown     = errors.New("htmlindex: unknown Encoding")
	errUnsupported = errors.New("htmlindex: this encoding is not supported")
)

var (
	matcherOnce sync.Once
	matcher     language.Matcher
)

// LanguageDefault returns the canonical name of the default encoding for a
// given language.
func LanguageDefault(tag language.Tag) string {
	matcherOnce.Do(func() {
		tags := []language.Tag{}
		for _, t := range strings.Split(locales, " ") {
			tags = append(tags, language.MustParse(t))
		}
		matcher = language.NewMatcher(tags)
	})
	_, i, _ := matcher.Match(tag)
	return canonical[localeMap[i]] // Default is Windows-1252.
}

// Get returns an Encoding for one of the names listed in
// http://www.w3.org/TR/encoding using the Default Index. Matching is case-
// insensitive.
func Get(name string) (encoding.Encoding, error) {
	x, ok := nameMap[strings.ToLower(strings.TrimSpace(name))]
	if !ok {
		return nil, errInvalidName
	}
	return encodings[x], nil
}

// Name reports the canonical name of the given Encoding. It will return
// an error if e is not associated with a supported encoding scheme.
func Name(e encoding.Encoding) (string, error) {
	id, ok := e.(identifier.Interface)
	if !ok {
		return "", errUnknown
	}
	mib, _ := id.ID()
	if mib == 0 {
		return "", errUnknown
	}
	v, ok := mibMap[mib]
	if !ok {
		return "", errUnsupported
	}
	return canonical[v], nil
}
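Get, Name, and LanguageDefault cover the common lookups against this index. A short sketch of how they combine, assuming the generated tables included further down in this diff (the label and language tag are illustrative only):

package main

import (
	"fmt"
	"log"

	"golang.org/x/text/encoding/htmlindex"
	"golang.org/x/text/language"
)

func main() {
	// Look up an encoding by any of its WHATWG labels; matching is
	// case-insensitive and ignores surrounding whitespace.
	enc, err := htmlindex.Get("latin5")
	if err != nil {
		log.Fatal(err)
	}

	// Map the encoding back to its canonical name; "latin5" is an alias
	// of windows-1254 in the name table.
	name, err := htmlindex.Name(enc)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(name) // windows-1254

	// The default legacy encoding a browser would assume for Japanese.
	fmt.Println(htmlindex.LanguageDefault(language.MustParse("ja"))) // shift_jis
}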
vendor/golang.org/x/text/encoding/htmlindex/htmlindex_test.go (generated, vendored, normal file, 144 lines)
@ -0,0 +1,144 @@
|
||||
// Copyright 2015 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package htmlindex
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"golang.org/x/text/encoding"
|
||||
"golang.org/x/text/encoding/charmap"
|
||||
"golang.org/x/text/encoding/internal/identifier"
|
||||
"golang.org/x/text/encoding/unicode"
|
||||
"golang.org/x/text/language"
|
||||
)
|
||||
|
||||
func TestGet(t *testing.T) {
|
||||
for i, tc := range []struct {
|
||||
name string
|
||||
canonical string
|
||||
err error
|
||||
}{
|
||||
{"utf-8", "utf-8", nil},
|
||||
{" utf-8 ", "utf-8", nil},
|
||||
{" l5 ", "windows-1254", nil},
|
||||
{"latin5 ", "windows-1254", nil},
|
||||
{"latin 5", "", errInvalidName},
|
||||
{"latin-5", "", errInvalidName},
|
||||
} {
|
||||
enc, err := Get(tc.name)
|
||||
if err != tc.err {
|
||||
t.Errorf("%d: error was %v; want %v", i, err, tc.err)
|
||||
}
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
if got, err := Name(enc); got != tc.canonical {
|
||||
t.Errorf("%d: Name(Get(%q)) = %q; want %q (%v)", i, tc.name, got, tc.canonical, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestTables(t *testing.T) {
|
||||
for name, index := range nameMap {
|
||||
got, err := Get(name)
|
||||
if err != nil {
|
||||
t.Errorf("%s:err: expected non-nil error", name)
|
||||
}
|
||||
if want := encodings[index]; got != want {
|
||||
t.Errorf("%s:encoding: got %v; want %v", name, got, want)
|
||||
}
|
||||
mib, _ := got.(identifier.Interface).ID()
|
||||
if mibMap[mib] != index {
|
||||
t.Errorf("%s:mibMab: got %d; want %d", name, mibMap[mib], index)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestName(t *testing.T) {
|
||||
for i, tc := range []struct {
|
||||
desc string
|
||||
enc encoding.Encoding
|
||||
name string
|
||||
err error
|
||||
}{{
|
||||
"defined encoding",
|
||||
charmap.ISO8859_2,
|
||||
"iso-8859-2",
|
||||
nil,
|
||||
}, {
|
||||
"defined Unicode encoding",
|
||||
unicode.UTF16(unicode.BigEndian, unicode.IgnoreBOM),
|
||||
"utf-16be",
|
||||
nil,
|
||||
}, {
|
||||
"undefined Unicode encoding in HTML standard",
|
||||
unicode.UTF16(unicode.BigEndian, unicode.UseBOM),
|
||||
"",
|
||||
errUnsupported,
|
||||
}, {
|
||||
"undefined other encoding in HTML standard",
|
||||
charmap.CodePage437,
|
||||
"",
|
||||
errUnsupported,
|
||||
}, {
|
||||
"unknown encoding",
|
||||
encoding.Nop,
|
||||
"",
|
||||
errUnknown,
|
||||
}} {
|
||||
name, err := Name(tc.enc)
|
||||
if name != tc.name || err != tc.err {
|
||||
t.Errorf("%d:%s: got %q, %v; want %q, %v", i, tc.desc, name, err, tc.name, tc.err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestLanguageDefault(t *testing.T) {
|
||||
for _, tc := range []struct{ tag, want string }{
|
||||
{"und", "windows-1252"}, // The default value.
|
||||
{"ar", "windows-1256"},
|
||||
{"ba", "windows-1251"},
|
||||
{"be", "windows-1251"},
|
||||
{"bg", "windows-1251"},
|
||||
{"cs", "windows-1250"},
|
||||
{"el", "iso-8859-7"},
|
||||
{"et", "windows-1257"},
|
||||
{"fa", "windows-1256"},
|
||||
{"he", "windows-1255"},
|
||||
{"hr", "windows-1250"},
|
||||
{"hu", "iso-8859-2"},
|
||||
{"ja", "shift_jis"},
|
||||
{"kk", "windows-1251"},
|
||||
{"ko", "euc-kr"},
|
||||
{"ku", "windows-1254"},
|
||||
{"ky", "windows-1251"},
|
||||
{"lt", "windows-1257"},
|
||||
{"lv", "windows-1257"},
|
||||
{"mk", "windows-1251"},
|
||||
{"pl", "iso-8859-2"},
|
||||
{"ru", "windows-1251"},
|
||||
{"sah", "windows-1251"},
|
||||
{"sk", "windows-1250"},
|
||||
{"sl", "iso-8859-2"},
|
||||
{"sr", "windows-1251"},
|
||||
{"tg", "windows-1251"},
|
||||
{"th", "windows-874"},
|
||||
{"tr", "windows-1254"},
|
||||
{"tt", "windows-1251"},
|
||||
{"uk", "windows-1251"},
|
||||
{"vi", "windows-1258"},
|
||||
{"zh-hans", "gb18030"},
|
||||
{"zh-hant", "big5"},
|
||||
// Variants and close approximates of the above.
|
||||
{"ar_EG", "windows-1256"},
|
||||
{"bs", "windows-1250"}, // Bosnian Latin maps to Croatian.
|
||||
// Use default fallback in case of miss.
|
||||
{"nl", "windows-1252"},
|
||||
} {
|
||||
if got := LanguageDefault(language.MustParse(tc.tag)); got != tc.want {
|
||||
t.Errorf("LanguageDefault(%s) = %s; want %s", tc.tag, got, tc.want)
|
||||
}
|
||||
}
|
||||
}
|
vendor/golang.org/x/text/encoding/htmlindex/map.go (generated, vendored, normal file, 105 lines)
@ -0,0 +1,105 @@
|
||||
// Copyright 2015 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package htmlindex
|
||||
|
||||
import (
|
||||
"golang.org/x/text/encoding"
|
||||
"golang.org/x/text/encoding/charmap"
|
||||
"golang.org/x/text/encoding/internal/identifier"
|
||||
"golang.org/x/text/encoding/japanese"
|
||||
"golang.org/x/text/encoding/korean"
|
||||
"golang.org/x/text/encoding/simplifiedchinese"
|
||||
"golang.org/x/text/encoding/traditionalchinese"
|
||||
"golang.org/x/text/encoding/unicode"
|
||||
)
|
||||
|
||||
// mibMap maps a MIB identifier to an htmlEncoding index.
|
||||
var mibMap = map[identifier.MIB]htmlEncoding{
|
||||
identifier.UTF8: utf8,
|
||||
identifier.UTF16BE: utf16be,
|
||||
identifier.UTF16LE: utf16le,
|
||||
identifier.IBM866: ibm866,
|
||||
identifier.ISOLatin2: iso8859_2,
|
||||
identifier.ISOLatin3: iso8859_3,
|
||||
identifier.ISOLatin4: iso8859_4,
|
||||
identifier.ISOLatinCyrillic: iso8859_5,
|
||||
identifier.ISOLatinArabic: iso8859_6,
|
||||
identifier.ISOLatinGreek: iso8859_7,
|
||||
identifier.ISOLatinHebrew: iso8859_8,
|
||||
identifier.ISO88598I: iso8859_8I,
|
||||
identifier.ISOLatin6: iso8859_10,
|
||||
identifier.ISO885913: iso8859_13,
|
||||
identifier.ISO885914: iso8859_14,
|
||||
identifier.ISO885915: iso8859_15,
|
||||
identifier.ISO885916: iso8859_16,
|
||||
identifier.KOI8R: koi8r,
|
||||
identifier.KOI8U: koi8u,
|
||||
identifier.Macintosh: macintosh,
|
||||
identifier.MacintoshCyrillic: macintoshCyrillic,
|
||||
identifier.Windows874: windows874,
|
||||
identifier.Windows1250: windows1250,
|
||||
identifier.Windows1251: windows1251,
|
||||
identifier.Windows1252: windows1252,
|
||||
identifier.Windows1253: windows1253,
|
||||
identifier.Windows1254: windows1254,
|
||||
identifier.Windows1255: windows1255,
|
||||
identifier.Windows1256: windows1256,
|
||||
identifier.Windows1257: windows1257,
|
||||
identifier.Windows1258: windows1258,
|
||||
identifier.XUserDefined: xUserDefined,
|
||||
identifier.GBK: gbk,
|
||||
identifier.GB18030: gb18030,
|
||||
identifier.Big5: big5,
|
||||
identifier.EUCPkdFmtJapanese: eucjp,
|
||||
identifier.ISO2022JP: iso2022jp,
|
||||
identifier.ShiftJIS: shiftJIS,
|
||||
identifier.EUCKR: euckr,
|
||||
identifier.Replacement: replacement,
|
||||
}
|
||||
|
||||
// encodings maps the internal htmlEncoding to an Encoding.
|
||||
// TODO: consider using a reusable index in encoding/internal.
|
||||
var encodings = [numEncodings]encoding.Encoding{
|
||||
utf8: unicode.UTF8,
|
||||
ibm866: charmap.CodePage866,
|
||||
iso8859_2: charmap.ISO8859_2,
|
||||
iso8859_3: charmap.ISO8859_3,
|
||||
iso8859_4: charmap.ISO8859_4,
|
||||
iso8859_5: charmap.ISO8859_5,
|
||||
iso8859_6: charmap.ISO8859_6,
|
||||
iso8859_7: charmap.ISO8859_7,
|
||||
iso8859_8: charmap.ISO8859_8,
|
||||
iso8859_8I: charmap.ISO8859_8I,
|
||||
iso8859_10: charmap.ISO8859_10,
|
||||
iso8859_13: charmap.ISO8859_13,
|
||||
iso8859_14: charmap.ISO8859_14,
|
||||
iso8859_15: charmap.ISO8859_15,
|
||||
iso8859_16: charmap.ISO8859_16,
|
||||
koi8r: charmap.KOI8R,
|
||||
koi8u: charmap.KOI8U,
|
||||
macintosh: charmap.Macintosh,
|
||||
windows874: charmap.Windows874,
|
||||
windows1250: charmap.Windows1250,
|
||||
windows1251: charmap.Windows1251,
|
||||
windows1252: charmap.Windows1252,
|
||||
windows1253: charmap.Windows1253,
|
||||
windows1254: charmap.Windows1254,
|
||||
windows1255: charmap.Windows1255,
|
||||
windows1256: charmap.Windows1256,
|
||||
windows1257: charmap.Windows1257,
|
||||
windows1258: charmap.Windows1258,
|
||||
macintoshCyrillic: charmap.MacintoshCyrillic,
|
||||
gbk: simplifiedchinese.GBK,
|
||||
gb18030: simplifiedchinese.GB18030,
|
||||
big5: traditionalchinese.Big5,
|
||||
eucjp: japanese.EUCJP,
|
||||
iso2022jp: japanese.ISO2022JP,
|
||||
shiftJIS: japanese.ShiftJIS,
|
||||
euckr: korean.EUCKR,
|
||||
replacement: encoding.Replacement,
|
||||
utf16be: unicode.UTF16(unicode.BigEndian, unicode.IgnoreBOM),
|
||||
utf16le: unicode.UTF16(unicode.LittleEndian, unicode.IgnoreBOM),
|
||||
xUserDefined: charmap.XUserDefined,
|
||||
}
|
vendor/golang.org/x/text/encoding/htmlindex/tables.go (generated, vendored, normal file, 352 lines)
@ -0,0 +1,352 @@
|
||||
// This file was generated by go generate; DO NOT EDIT
|
||||
|
||||
package htmlindex
|
||||
|
||||
type htmlEncoding byte
|
||||
|
||||
const (
|
||||
utf8 htmlEncoding = iota
|
||||
ibm866
|
||||
iso8859_2
|
||||
iso8859_3
|
||||
iso8859_4
|
||||
iso8859_5
|
||||
iso8859_6
|
||||
iso8859_7
|
||||
iso8859_8
|
||||
iso8859_8I
|
||||
iso8859_10
|
||||
iso8859_13
|
||||
iso8859_14
|
||||
iso8859_15
|
||||
iso8859_16
|
||||
koi8r
|
||||
koi8u
|
||||
macintosh
|
||||
windows874
|
||||
windows1250
|
||||
windows1251
|
||||
windows1252
|
||||
windows1253
|
||||
windows1254
|
||||
windows1255
|
||||
windows1256
|
||||
windows1257
|
||||
windows1258
|
||||
macintoshCyrillic
|
||||
gbk
|
||||
gb18030
|
||||
big5
|
||||
eucjp
|
||||
iso2022jp
|
||||
shiftJIS
|
||||
euckr
|
||||
replacement
|
||||
utf16be
|
||||
utf16le
|
||||
xUserDefined
|
||||
numEncodings
|
||||
)
|
||||
|
||||
var canonical = [numEncodings]string{
|
||||
"utf-8",
|
||||
"ibm866",
|
||||
"iso-8859-2",
|
||||
"iso-8859-3",
|
||||
"iso-8859-4",
|
||||
"iso-8859-5",
|
||||
"iso-8859-6",
|
||||
"iso-8859-7",
|
||||
"iso-8859-8",
|
||||
"iso-8859-8-i",
|
||||
"iso-8859-10",
|
||||
"iso-8859-13",
|
||||
"iso-8859-14",
|
||||
"iso-8859-15",
|
||||
"iso-8859-16",
|
||||
"koi8-r",
|
||||
"koi8-u",
|
||||
"macintosh",
|
||||
"windows-874",
|
||||
"windows-1250",
|
||||
"windows-1251",
|
||||
"windows-1252",
|
||||
"windows-1253",
|
||||
"windows-1254",
|
||||
"windows-1255",
|
||||
"windows-1256",
|
||||
"windows-1257",
|
||||
"windows-1258",
|
||||
"x-mac-cyrillic",
|
||||
"gbk",
|
||||
"gb18030",
|
||||
"big5",
|
||||
"euc-jp",
|
||||
"iso-2022-jp",
|
||||
"shift_jis",
|
||||
"euc-kr",
|
||||
"replacement",
|
||||
"utf-16be",
|
||||
"utf-16le",
|
||||
"x-user-defined",
|
||||
}
|
||||
|
||||
var nameMap = map[string]htmlEncoding{
|
||||
"unicode-1-1-utf-8": utf8,
|
||||
"utf-8": utf8,
|
||||
"utf8": utf8,
|
||||
"866": ibm866,
|
||||
"cp866": ibm866,
|
||||
"csibm866": ibm866,
|
||||
"ibm866": ibm866,
|
||||
"csisolatin2": iso8859_2,
|
||||
"iso-8859-2": iso8859_2,
|
||||
"iso-ir-101": iso8859_2,
|
||||
"iso8859-2": iso8859_2,
|
||||
"iso88592": iso8859_2,
|
||||
"iso_8859-2": iso8859_2,
|
||||
"iso_8859-2:1987": iso8859_2,
|
||||
"l2": iso8859_2,
|
||||
"latin2": iso8859_2,
|
||||
"csisolatin3": iso8859_3,
|
||||
"iso-8859-3": iso8859_3,
|
||||
"iso-ir-109": iso8859_3,
|
||||
"iso8859-3": iso8859_3,
|
||||
"iso88593": iso8859_3,
|
||||
"iso_8859-3": iso8859_3,
|
||||
"iso_8859-3:1988": iso8859_3,
|
||||
"l3": iso8859_3,
|
||||
"latin3": iso8859_3,
|
||||
"csisolatin4": iso8859_4,
|
||||
"iso-8859-4": iso8859_4,
|
||||
"iso-ir-110": iso8859_4,
|
||||
"iso8859-4": iso8859_4,
|
||||
"iso88594": iso8859_4,
|
||||
"iso_8859-4": iso8859_4,
|
||||
"iso_8859-4:1988": iso8859_4,
|
||||
"l4": iso8859_4,
|
||||
"latin4": iso8859_4,
|
||||
"csisolatincyrillic": iso8859_5,
|
||||
"cyrillic": iso8859_5,
|
||||
"iso-8859-5": iso8859_5,
|
||||
"iso-ir-144": iso8859_5,
|
||||
"iso8859-5": iso8859_5,
|
||||
"iso88595": iso8859_5,
|
||||
"iso_8859-5": iso8859_5,
|
||||
"iso_8859-5:1988": iso8859_5,
|
||||
"arabic": iso8859_6,
|
||||
"asmo-708": iso8859_6,
|
||||
"csiso88596e": iso8859_6,
|
||||
"csiso88596i": iso8859_6,
|
||||
"csisolatinarabic": iso8859_6,
|
||||
"ecma-114": iso8859_6,
|
||||
"iso-8859-6": iso8859_6,
|
||||
"iso-8859-6-e": iso8859_6,
|
||||
"iso-8859-6-i": iso8859_6,
|
||||
"iso-ir-127": iso8859_6,
|
||||
"iso8859-6": iso8859_6,
|
||||
"iso88596": iso8859_6,
|
||||
"iso_8859-6": iso8859_6,
|
||||
"iso_8859-6:1987": iso8859_6,
|
||||
"csisolatingreek": iso8859_7,
|
||||
"ecma-118": iso8859_7,
|
||||
"elot_928": iso8859_7,
|
||||
"greek": iso8859_7,
|
||||
"greek8": iso8859_7,
|
||||
"iso-8859-7": iso8859_7,
|
||||
"iso-ir-126": iso8859_7,
|
||||
"iso8859-7": iso8859_7,
|
||||
"iso88597": iso8859_7,
|
||||
"iso_8859-7": iso8859_7,
|
||||
"iso_8859-7:1987": iso8859_7,
|
||||
"sun_eu_greek": iso8859_7,
|
||||
"csiso88598e": iso8859_8,
|
||||
"csisolatinhebrew": iso8859_8,
|
||||
"hebrew": iso8859_8,
|
||||
"iso-8859-8": iso8859_8,
|
||||
"iso-8859-8-e": iso8859_8,
|
||||
"iso-ir-138": iso8859_8,
|
||||
"iso8859-8": iso8859_8,
|
||||
"iso88598": iso8859_8,
|
||||
"iso_8859-8": iso8859_8,
|
||||
"iso_8859-8:1988": iso8859_8,
|
||||
"visual": iso8859_8,
|
||||
"csiso88598i": iso8859_8I,
|
||||
"iso-8859-8-i": iso8859_8I,
|
||||
"logical": iso8859_8I,
|
||||
"csisolatin6": iso8859_10,
|
||||
"iso-8859-10": iso8859_10,
|
||||
"iso-ir-157": iso8859_10,
|
||||
"iso8859-10": iso8859_10,
|
||||
"iso885910": iso8859_10,
|
||||
"l6": iso8859_10,
|
||||
"latin6": iso8859_10,
|
||||
"iso-8859-13": iso8859_13,
|
||||
"iso8859-13": iso8859_13,
|
||||
"iso885913": iso8859_13,
|
||||
"iso-8859-14": iso8859_14,
|
||||
"iso8859-14": iso8859_14,
|
||||
"iso885914": iso8859_14,
|
||||
"csisolatin9": iso8859_15,
|
||||
"iso-8859-15": iso8859_15,
|
||||
"iso8859-15": iso8859_15,
|
||||
"iso885915": iso8859_15,
|
||||
"iso_8859-15": iso8859_15,
|
||||
"l9": iso8859_15,
|
||||
"iso-8859-16": iso8859_16,
|
||||
"cskoi8r": koi8r,
|
||||
"koi": koi8r,
|
||||
"koi8": koi8r,
|
||||
"koi8-r": koi8r,
|
||||
"koi8_r": koi8r,
|
||||
"koi8-ru": koi8u,
|
||||
"koi8-u": koi8u,
|
||||
"csmacintosh": macintosh,
|
||||
"mac": macintosh,
|
||||
"macintosh": macintosh,
|
||||
"x-mac-roman": macintosh,
|
||||
"dos-874": windows874,
|
||||
"iso-8859-11": windows874,
|
||||
"iso8859-11": windows874,
|
||||
"iso885911": windows874,
|
||||
"tis-620": windows874,
|
||||
"windows-874": windows874,
|
||||
"cp1250": windows1250,
|
||||
"windows-1250": windows1250,
|
||||
"x-cp1250": windows1250,
|
||||
"cp1251": windows1251,
|
||||
"windows-1251": windows1251,
|
||||
"x-cp1251": windows1251,
|
||||
"ansi_x3.4-1968": windows1252,
|
||||
"ascii": windows1252,
|
||||
"cp1252": windows1252,
|
||||
"cp819": windows1252,
|
||||
"csisolatin1": windows1252,
|
||||
"ibm819": windows1252,
|
||||
"iso-8859-1": windows1252,
|
||||
"iso-ir-100": windows1252,
|
||||
"iso8859-1": windows1252,
|
||||
"iso88591": windows1252,
|
||||
"iso_8859-1": windows1252,
|
||||
"iso_8859-1:1987": windows1252,
|
||||
"l1": windows1252,
|
||||
"latin1": windows1252,
|
||||
"us-ascii": windows1252,
|
||||
"windows-1252": windows1252,
|
||||
"x-cp1252": windows1252,
|
||||
"cp1253": windows1253,
|
||||
"windows-1253": windows1253,
|
||||
"x-cp1253": windows1253,
|
||||
"cp1254": windows1254,
|
||||
"csisolatin5": windows1254,
|
||||
"iso-8859-9": windows1254,
|
||||
"iso-ir-148": windows1254,
|
||||
"iso8859-9": windows1254,
|
||||
"iso88599": windows1254,
|
||||
"iso_8859-9": windows1254,
|
||||
"iso_8859-9:1989": windows1254,
|
||||
"l5": windows1254,
|
||||
"latin5": windows1254,
|
||||
"windows-1254": windows1254,
|
||||
"x-cp1254": windows1254,
|
||||
"cp1255": windows1255,
|
||||
"windows-1255": windows1255,
|
||||
"x-cp1255": windows1255,
|
||||
"cp1256": windows1256,
|
||||
"windows-1256": windows1256,
|
||||
"x-cp1256": windows1256,
|
||||
"cp1257": windows1257,
|
||||
"windows-1257": windows1257,
|
||||
"x-cp1257": windows1257,
|
||||
"cp1258": windows1258,
|
||||
"windows-1258": windows1258,
|
||||
"x-cp1258": windows1258,
|
||||
"x-mac-cyrillic": macintoshCyrillic,
|
||||
"x-mac-ukrainian": macintoshCyrillic,
|
||||
"chinese": gbk,
|
||||
"csgb2312": gbk,
|
||||
"csiso58gb231280": gbk,
|
||||
"gb2312": gbk,
|
||||
"gb_2312": gbk,
|
||||
"gb_2312-80": gbk,
|
||||
"gbk": gbk,
|
||||
"iso-ir-58": gbk,
|
||||
"x-gbk": gbk,
|
||||
"gb18030": gb18030,
|
||||
"big5": big5,
|
||||
"big5-hkscs": big5,
|
||||
"cn-big5": big5,
|
||||
"csbig5": big5,
|
||||
"x-x-big5": big5,
|
||||
"cseucpkdfmtjapanese": eucjp,
|
||||
"euc-jp": eucjp,
|
||||
"x-euc-jp": eucjp,
|
||||
"csiso2022jp": iso2022jp,
|
||||
"iso-2022-jp": iso2022jp,
|
||||
"csshiftjis": shiftJIS,
|
||||
"ms932": shiftJIS,
|
||||
"ms_kanji": shiftJIS,
|
||||
"shift-jis": shiftJIS,
|
||||
"shift_jis": shiftJIS,
|
||||
"sjis": shiftJIS,
|
||||
"windows-31j": shiftJIS,
|
||||
"x-sjis": shiftJIS,
|
||||
"cseuckr": euckr,
|
||||
"csksc56011987": euckr,
|
||||
"euc-kr": euckr,
|
||||
"iso-ir-149": euckr,
|
||||
"korean": euckr,
|
||||
"ks_c_5601-1987": euckr,
|
||||
"ks_c_5601-1989": euckr,
|
||||
"ksc5601": euckr,
|
||||
"ksc_5601": euckr,
|
||||
"windows-949": euckr,
|
||||
"csiso2022kr": replacement,
|
||||
"hz-gb-2312": replacement,
|
||||
"iso-2022-cn": replacement,
|
||||
"iso-2022-cn-ext": replacement,
|
||||
"iso-2022-kr": replacement,
|
||||
"utf-16be": utf16be,
|
||||
"utf-16": utf16le,
|
||||
"utf-16le": utf16le,
|
||||
"x-user-defined": xUserDefined,
|
||||
}
|
||||
|
||||
var localeMap = []htmlEncoding{
|
||||
windows1252, // und
|
||||
windows1256, // ar
|
||||
windows1251, // ba
|
||||
windows1251, // be
|
||||
windows1251, // bg
|
||||
windows1250, // cs
|
||||
iso8859_7, // el
|
||||
windows1257, // et
|
||||
windows1256, // fa
|
||||
windows1255, // he
|
||||
windows1250, // hr
|
||||
iso8859_2, // hu
|
||||
shiftJIS, // ja
|
||||
windows1251, // kk
|
||||
euckr, // ko
|
||||
windows1254, // ku
|
||||
windows1251, // ky
|
||||
windows1257, // lt
|
||||
windows1257, // lv
|
||||
windows1251, // mk
|
||||
iso8859_2, // pl
|
||||
windows1251, // ru
|
||||
windows1251, // sah
|
||||
windows1250, // sk
|
||||
iso8859_2, // sl
|
||||
windows1251, // sr
|
||||
windows1251, // tg
|
||||
windows874, // th
|
||||
windows1254, // tr
|
||||
windows1251, // tt
|
||||
windows1251, // uk
|
||||
windows1258, // vi
|
||||
gb18030, // zh-hans
|
||||
big5, // zh-hant
|
||||
}
|
||||
|
||||
const locales = "und ar ba be bg cs el et fa he hr hu ja kk ko ku ky lt lv mk pl ru sah sk sl sr tg th tr tt uk vi zh-hans zh-hant"
|
26 vendor/golang.org/x/text/encoding/ianaindex/example_test.go (generated, vendored, new file)
@@ -0,0 +1,26 @@
|
||||
// Copyright 2015 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package ianaindex_test
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"golang.org/x/text/encoding/charmap"
|
||||
"golang.org/x/text/encoding/ianaindex"
|
||||
)
|
||||
|
||||
func ExampleIndex() {
|
||||
fmt.Println(ianaindex.MIME.Name(charmap.ISO8859_7))
|
||||
|
||||
fmt.Println(ianaindex.IANA.Name(charmap.ISO8859_7))
|
||||
|
||||
e, _ := ianaindex.IANA.Get("cp437")
|
||||
fmt.Println(ianaindex.IANA.Name(e))
|
||||
|
||||
// TODO: Output:
|
||||
// ISO-8859-7
|
||||
// ISO8859_7:1987
|
||||
// IBM437
|
||||
}
|
64 vendor/golang.org/x/text/encoding/ianaindex/ianaindex.go (generated, vendored, new file)
@@ -0,0 +1,64 @@
|
||||
// Copyright 2015 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// Package ianaindex maps names to Encodings as specified by the IANA registry.
|
||||
// This includes both the MIME and IANA names.
|
||||
//
|
||||
// See http://www.iana.org/assignments/character-sets/character-sets.xhtml for
|
||||
// more details.
|
||||
package ianaindex
|
||||
|
||||
import (
|
||||
"golang.org/x/text/encoding"
|
||||
)
|
||||
|
||||
// TODO: allow users to specify their own aliases?
|
||||
// TODO: allow users to specify their own indexes?
|
||||
// TODO: allow canonicalizing names
|
||||
|
||||
// NOTE: only use these top-level variables if we can get the linker to drop
|
||||
// the indexes when they are not used. Make them a function or perhaps only
|
||||
// support MIME otherwise.
|
||||
|
||||
var (
|
||||
// MIME is an index to map MIME names. It does not support aliases.
|
||||
MIME *Index
|
||||
|
||||
// IANA is an index that supports all names and aliases using IANA names as
|
||||
// the canonical identifier.
|
||||
IANA *Index
|
||||
)
|
||||
|
||||
// Index maps names registered by IANA to Encodings.
|
||||
type Index struct {
|
||||
}
|
||||
|
||||
// Get returns an Encoding for IANA-registered names. Matching is
|
||||
// case-insensitive.
|
||||
func (x *Index) Get(name string) (encoding.Encoding, error) {
|
||||
panic("TODO: implement")
|
||||
}
|
||||
|
||||
// Name reports the canonical name of the given Encoding. It will return an
|
||||
// error if e is not associated with a known encoding scheme.
|
||||
func (x *Index) Name(e encoding.Encoding) (string, error) {
|
||||
panic("TODO: implement")
|
||||
}
|
||||
|
||||
// TODO: the coverage of this index is rather spotty. Allowing users to set
|
||||
// encodings would allow:
|
||||
// - users to increase coverage
|
||||
// - allow a partially loaded set of encodings in case the user doesn't need
|
||||
// them all.
|
||||
// - write an OS-specific wrapper for supported encodings and set them.
|
||||
// The exact definition of Set depends a bit on if and how we want to let users
|
||||
// write their own Encoding implementations. Also, it is not possible yet to
|
||||
// only partially load the encodings without doing some refactoring. Until this
|
||||
// is solved, we might as well not support Set.
|
||||
// // Set sets the e to be used for the encoding scheme identified by name. Only
|
||||
// // canonical names may be used. An empty name assigns e to its internally
|
||||
// // associated encoding scheme.
|
||||
// func (x *Index) Set(name string, e encoding.Encoding) error {
|
||||
// panic("TODO: implement")
|
||||
// }
|
137 vendor/golang.org/x/text/encoding/internal/identifier/gen.go (generated, vendored, new file)
@@ -0,0 +1,137 @@
|
||||
// Copyright 2015 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// +build ignore
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/xml"
|
||||
"fmt"
|
||||
"io"
|
||||
"log"
|
||||
"strings"
|
||||
|
||||
"golang.org/x/text/internal/gen"
|
||||
)
|
||||
|
||||
type registry struct {
|
||||
XMLName xml.Name `xml:"registry"`
|
||||
Updated string `xml:"updated"`
|
||||
Registry []struct {
|
||||
ID string `xml:"id,attr"`
|
||||
Record []struct {
|
||||
Name string `xml:"name"`
|
||||
Xref []struct {
|
||||
Type string `xml:"type,attr"`
|
||||
Data string `xml:"data,attr"`
|
||||
} `xml:"xref"`
|
||||
Desc struct {
|
||||
Data string `xml:",innerxml"`
|
||||
// Any []struct {
|
||||
// Data string `xml:",chardata"`
|
||||
// } `xml:",any"`
|
||||
// Data string `xml:",chardata"`
|
||||
} `xml:"description,"`
|
||||
MIB string `xml:"value"`
|
||||
Alias []string `xml:"alias"`
|
||||
MIME string `xml:"preferred_alias"`
|
||||
} `xml:"record"`
|
||||
} `xml:"registry"`
|
||||
}
|
||||
|
||||
func main() {
|
||||
r := gen.OpenIANAFile("assignments/character-sets/character-sets.xml")
|
||||
reg := &registry{}
|
||||
if err := xml.NewDecoder(r).Decode(&reg); err != nil && err != io.EOF {
|
||||
log.Fatalf("Error decoding charset registry: %v", err)
|
||||
}
|
||||
if len(reg.Registry) == 0 || reg.Registry[0].ID != "character-sets-1" {
|
||||
log.Fatalf("Unexpected ID %s", reg.Registry[0].ID)
|
||||
}
|
||||
|
||||
w := &bytes.Buffer{}
|
||||
fmt.Fprintf(w, "const (\n")
|
||||
for _, rec := range reg.Registry[0].Record {
|
||||
constName := ""
|
||||
for _, a := range rec.Alias {
|
||||
if strings.HasPrefix(a, "cs") && strings.IndexByte(a, '-') == -1 {
|
||||
// Some of the constant definitions have comments in them. Strip those.
|
||||
constName = strings.Title(strings.SplitN(a[2:], "\n", 2)[0])
|
||||
}
|
||||
}
|
||||
if constName == "" {
|
||||
switch rec.MIB {
|
||||
case "2085":
|
||||
constName = "HZGB2312" // Not listed as alias for some reason.
|
||||
default:
|
||||
log.Fatalf("No cs alias defined for %s.", rec.MIB)
|
||||
}
|
||||
}
|
||||
if rec.MIME != "" {
|
||||
rec.MIME = fmt.Sprintf(" (MIME: %s)", rec.MIME)
|
||||
}
|
||||
fmt.Fprintf(w, "// %s is the MIB identifier with IANA name %s%s.\n//\n", constName, rec.Name, rec.MIME)
|
||||
if len(rec.Desc.Data) > 0 {
|
||||
fmt.Fprint(w, "// ")
|
||||
d := xml.NewDecoder(strings.NewReader(rec.Desc.Data))
|
||||
inElem := true
|
||||
attr := ""
|
||||
for {
|
||||
t, err := d.Token()
|
||||
if err != nil {
|
||||
if err != io.EOF {
|
||||
log.Fatal(err)
|
||||
}
|
||||
break
|
||||
}
|
||||
switch x := t.(type) {
|
||||
case xml.CharData:
|
||||
attr = "" // Don't need attribute info.
|
||||
a := bytes.Split([]byte(x), []byte("\n"))
|
||||
for i, b := range a {
|
||||
if b = bytes.TrimSpace(b); len(b) != 0 {
|
||||
if !inElem && i > 0 {
|
||||
fmt.Fprint(w, "\n// ")
|
||||
}
|
||||
inElem = false
|
||||
fmt.Fprintf(w, "%s ", string(b))
|
||||
}
|
||||
}
|
||||
case xml.StartElement:
|
||||
if x.Name.Local == "xref" {
|
||||
inElem = true
|
||||
use := false
|
||||
for _, a := range x.Attr {
|
||||
if a.Name.Local == "type" {
|
||||
use = use || a.Value != "person"
|
||||
}
|
||||
if a.Name.Local == "data" && use {
|
||||
attr = a.Value + " "
|
||||
}
|
||||
}
|
||||
}
|
||||
case xml.EndElement:
|
||||
inElem = false
|
||||
fmt.Fprint(w, attr)
|
||||
}
|
||||
}
|
||||
fmt.Fprint(w, "\n")
|
||||
}
|
||||
for _, x := range rec.Xref {
|
||||
switch x.Type {
|
||||
case "rfc":
|
||||
fmt.Fprintf(w, "// Reference: %s\n", strings.ToUpper(x.Data))
|
||||
case "uri":
|
||||
fmt.Fprintf(w, "// Reference: %s\n", x.Data)
|
||||
}
|
||||
}
|
||||
fmt.Fprintf(w, "%s MIB = %s\n", constName, rec.MIB)
|
||||
fmt.Fprintln(w)
|
||||
}
|
||||
fmt.Fprintln(w, ")")
|
||||
|
||||
gen.WriteGoFile("mib.go", "identifier", w.Bytes())
|
||||
}
|
81 vendor/golang.org/x/text/encoding/internal/identifier/identifier.go (generated, vendored, new file)
@@ -0,0 +1,81 @@
|
||||
// Copyright 2015 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
//go:generate go run gen.go
|
||||
|
||||
// Package identifier defines the contract between implementations of Encoding
|
||||
// and Index by defining identifiers that uniquely identify standardized coded
|
||||
// character sets (CCS) and character encoding schemes (CES), which we will
|
||||
// together refer to as encodings, for which Encoding implementations provide
|
||||
// converters to and from UTF-8. This package is typically only of concern to
|
||||
// implementers of Indexes and Encodings.
|
||||
//
|
||||
// One part of the identifier is the MIB code, which is defined by IANA and
|
||||
// uniquely identifies a CCS or CES. Each code is associated with data that
|
||||
// references authorities, official documentation as well as aliases and MIME
|
||||
// names.
|
||||
//
|
||||
// Not all CESs are covered by the IANA registry. The "other" string that is
|
||||
// returned by ID can be used to identify other character sets or versions of
|
||||
// existing ones.
|
||||
//
|
||||
// It is recommended that each package that provides a set of Encodings provide
|
||||
// the All and Common variables to reference all supported encodings and
|
||||
// a commonly used subset. This allows Index implementations to include all
|
||||
// available encodings without explicitly referencing or knowing about them.
|
||||
package identifier
|
||||
|
||||
// Note: this package is internal, but could be made public if there is a need
|
||||
// for writing third-party Indexes and Encodings.
|
||||
|
||||
// References:
|
||||
// - http://source.icu-project.org/repos/icu/icu/trunk/source/data/mappings/convrtrs.txt
|
||||
// - http://www.iana.org/assignments/character-sets/character-sets.xhtml
|
||||
// - http://www.iana.org/assignments/ianacharset-mib/ianacharset-mib
|
||||
// - http://www.ietf.org/rfc/rfc2978.txt
|
||||
// - http://www.unicode.org/reports/tr22/
|
||||
// - http://www.w3.org/TR/encoding/
|
||||
// - http://www.w3.org/TR/encoding/indexes/encodings.json
|
||||
// - https://encoding.spec.whatwg.org/
|
||||
// - https://tools.ietf.org/html/rfc6657#section-5
|
||||
|
||||
// Interface can be implemented by Encodings to define the CCS or CES for which
|
||||
// it implements conversions.
|
||||
type Interface interface {
|
||||
// ID returns an encoding identifier. Exactly one of the mib and other
|
||||
// values should be non-zero.
|
||||
//
|
||||
// In the usual case it is only necessary to indicate the MIB code. The
|
||||
// other string can be used to specify encodings for which there is no MIB,
|
||||
// such as "x-mac-dingbat".
|
||||
//
|
||||
// The other string may only contain the characters a-z, A-Z, 0-9, - and _.
|
||||
ID() (mib MIB, other string)
|
||||
|
||||
// NOTE: the restrictions on the encoding are to allow extending the syntax
|
||||
// with additional information such as versions, vendors and other variants.
|
||||
}
|
||||
|
||||
// A MIB identifies an encoding. It is derived from the IANA MIB codes and adds
|
||||
// some identifiers for some encodings that are not covered by the IANA
|
||||
// standard.
|
||||
//
|
||||
// See http://www.iana.org/assignments/ianacharset-mib.
|
||||
type MIB uint16
|
||||
|
||||
// These additional MIB types are not defined in IANA. They are added because
|
||||
// they are common and defined within the text repo.
|
||||
const (
|
||||
// Unofficial marks the start of encodings not registered by IANA.
|
||||
Unofficial MIB = 10000 + iota
|
||||
|
||||
// Replacement is the WhatWG replacement encoding.
|
||||
Replacement
|
||||
|
||||
// XUserDefined is the code for x-user-defined.
|
||||
XUserDefined
|
||||
|
||||
// MacintoshCyrillic is the code for x-mac-cyrillic.
|
||||
MacintoshCyrillic
|
||||
)
|
1621 vendor/golang.org/x/text/encoding/internal/identifier/mib.go (generated, vendored, new file); diff suppressed because it is too large
75 vendor/golang.org/x/text/encoding/internal/internal.go (generated, vendored, new file)
@@ -0,0 +1,75 @@
|
||||
// Copyright 2015 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// Package internal contains code that is shared among encoding implementations.
|
||||
package internal
|
||||
|
||||
import (
|
||||
"golang.org/x/text/encoding"
|
||||
"golang.org/x/text/encoding/internal/identifier"
|
||||
"golang.org/x/text/transform"
|
||||
)
|
||||
|
||||
// Encoding is an implementation of the Encoding interface that adds the String
|
||||
// and ID methods to an existing encoding.
|
||||
type Encoding struct {
|
||||
encoding.Encoding
|
||||
Name string
|
||||
MIB identifier.MIB
|
||||
}
|
||||
|
||||
// _ verifies that Encoding implements identifier.Interface.
|
||||
var _ identifier.Interface = (*Encoding)(nil)
|
||||
|
||||
func (e *Encoding) String() string {
|
||||
return e.Name
|
||||
}
|
||||
|
||||
func (e *Encoding) ID() (mib identifier.MIB, other string) {
|
||||
return e.MIB, ""
|
||||
}
|
||||
|
||||
// SimpleEncoding is an Encoding that combines two Transformers.
|
||||
type SimpleEncoding struct {
|
||||
Decoder transform.Transformer
|
||||
Encoder transform.Transformer
|
||||
}
|
||||
|
||||
func (e *SimpleEncoding) NewDecoder() *encoding.Decoder {
|
||||
return &encoding.Decoder{Transformer: e.Decoder}
|
||||
}
|
||||
|
||||
func (e *SimpleEncoding) NewEncoder() *encoding.Encoder {
|
||||
return &encoding.Encoder{Transformer: e.Encoder}
|
||||
}
|
||||
|
||||
// FuncEncoding is an Encoding that combines two functions returning a new
|
||||
// Transformer.
|
||||
type FuncEncoding struct {
|
||||
Decoder func() transform.Transformer
|
||||
Encoder func() transform.Transformer
|
||||
}
|
||||
|
||||
func (e FuncEncoding) NewDecoder() *encoding.Decoder {
|
||||
return &encoding.Decoder{Transformer: e.Decoder()}
|
||||
}
|
||||
|
||||
func (e FuncEncoding) NewEncoder() *encoding.Encoder {
|
||||
return &encoding.Encoder{Transformer: e.Encoder()}
|
||||
}
|
||||
|
||||
// A RepertoireError indicates a rune is not in the repertoire of a destination
|
||||
// encoding. It is associated with an encoding-specific suggested replacement
|
||||
// byte.
|
||||
type RepertoireError byte
|
||||
|
||||
// Error implements the error interface.
|
||||
func (r RepertoireError) Error() string {
|
||||
return "encoding: rune not supported by encoding."
|
||||
}
|
||||
|
||||
// Replacement returns the replacement string associated with this error.
|
||||
func (r RepertoireError) Replacement() byte { return byte(r) }
|
||||
|
||||
var ErrASCIIReplacement = RepertoireError(encoding.ASCIISub)
|
12 vendor/golang.org/x/text/encoding/japanese/all.go (generated, vendored, new file)
@@ -0,0 +1,12 @@
|
||||
// Copyright 2015 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package japanese
|
||||
|
||||
import (
|
||||
"golang.org/x/text/encoding"
|
||||
)
|
||||
|
||||
// All is a list of all defined encodings in this package.
|
||||
var All = []encoding.Encoding{EUCJP, ISO2022JP, ShiftJIS}
|
80 vendor/golang.org/x/text/encoding/japanese/all_test.go (generated, vendored, new file)
@@ -0,0 +1,80 @@
|
||||
// Copyright 2015 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package japanese
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"golang.org/x/text/encoding"
|
||||
"golang.org/x/text/encoding/internal"
|
||||
"golang.org/x/text/transform"
|
||||
)
|
||||
|
||||
func dec(e encoding.Encoding) (dir string, t transform.Transformer, err error) {
|
||||
return "Decode", e.NewDecoder(), nil
|
||||
}
|
||||
func enc(e encoding.Encoding) (dir string, t transform.Transformer, err error) {
|
||||
return "Encode", e.NewEncoder(), internal.ErrASCIIReplacement
|
||||
}
|
||||
|
||||
func TestNonRepertoire(t *testing.T) {
|
||||
testCases := []struct {
|
||||
init func(e encoding.Encoding) (string, transform.Transformer, error)
|
||||
e encoding.Encoding
|
||||
src, want string
|
||||
}{
|
||||
{dec, EUCJP, "\xfe\xfc", "\ufffd"},
|
||||
{dec, ISO2022JP, "\x1b$B\x7e\x7e", "\ufffd"},
|
||||
{dec, ShiftJIS, "\xef\xfc", "\ufffd"},
|
||||
|
||||
{enc, EUCJP, "갂", ""},
|
||||
{enc, EUCJP, "a갂", "a"},
|
||||
{enc, EUCJP, "丌갂", "\x8f\xb0\xa4"},
|
||||
|
||||
{enc, ISO2022JP, "갂", ""},
|
||||
{enc, ISO2022JP, "a갂", "a"},
|
||||
{enc, ISO2022JP, "朗갂", "\x1b$BzF\x1b(B"}, // switch back to ASCII mode at end
|
||||
|
||||
{enc, ShiftJIS, "갂", ""},
|
||||
{enc, ShiftJIS, "a갂", "a"},
|
||||
{enc, ShiftJIS, "\u2190갂", "\x81\xa9"},
|
||||
}
|
||||
for _, tc := range testCases {
|
||||
dir, tr, wantErr := tc.init(tc.e)
|
||||
|
||||
dst, _, err := transform.String(tr, tc.src)
|
||||
if err != wantErr {
|
||||
t.Errorf("%s %v(%q): got %v; want %v", dir, tc.e, tc.src, err, wantErr)
|
||||
}
|
||||
if got := string(dst); got != tc.want {
|
||||
t.Errorf("%s %v(%q):\ngot %q\nwant %q", dir, tc.e, tc.src, got, tc.want)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestCorrect(t *testing.T) {
|
||||
testCases := []struct {
|
||||
init func(e encoding.Encoding) (string, transform.Transformer, error)
|
||||
e encoding.Encoding
|
||||
src, want string
|
||||
}{
|
||||
{dec, ShiftJIS, "\x9f\xfc", "滌"},
|
||||
{dec, ShiftJIS, "\xfb\xfc", "髙"},
|
||||
{dec, ShiftJIS, "\xfa\xb1", "﨑"},
|
||||
{enc, ShiftJIS, "滌", "\x9f\xfc"},
|
||||
{enc, ShiftJIS, "﨑", "\xed\x95"},
|
||||
}
|
||||
for _, tc := range testCases {
|
||||
dir, tr, _ := tc.init(tc.e)
|
||||
|
||||
dst, _, err := transform.String(tr, tc.src)
|
||||
if err != nil {
|
||||
t.Errorf("%s %v(%q): got %v; want %v", dir, tc.e, tc.src, err, nil)
|
||||
}
|
||||
if got := string(dst); got != tc.want {
|
||||
t.Errorf("%s %v(%q):\ngot %q\nwant %q", dir, tc.e, tc.src, got, tc.want)
|
||||
}
|
||||
}
|
||||
}
|
211 vendor/golang.org/x/text/encoding/japanese/eucjp.go (generated, vendored, new file)
@@ -0,0 +1,211 @@
|
||||
// Copyright 2013 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package japanese
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"unicode/utf8"
|
||||
|
||||
"golang.org/x/text/encoding"
|
||||
"golang.org/x/text/encoding/internal"
|
||||
"golang.org/x/text/encoding/internal/identifier"
|
||||
"golang.org/x/text/transform"
|
||||
)
|
||||
|
||||
// EUCJP is the EUC-JP encoding.
|
||||
var EUCJP encoding.Encoding = &eucJP
|
||||
|
||||
var eucJP = internal.Encoding{
|
||||
&internal.SimpleEncoding{eucJPDecoder{}, eucJPEncoder{}},
|
||||
"EUC-JP",
|
||||
identifier.EUCPkdFmtJapanese,
|
||||
}
|
||||
|
||||
var errInvalidEUCJP = errors.New("japanese: invalid EUC-JP encoding")
|
||||
|
||||
type eucJPDecoder struct{ transform.NopResetter }
|
||||
|
||||
func (eucJPDecoder) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
|
||||
r, size := rune(0), 0
|
||||
loop:
|
||||
for ; nSrc < len(src); nSrc += size {
|
||||
switch c0 := src[nSrc]; {
|
||||
case c0 < utf8.RuneSelf:
|
||||
r, size = rune(c0), 1
|
||||
|
||||
case c0 == 0x8e:
|
||||
if nSrc+1 >= len(src) {
|
||||
err = transform.ErrShortSrc
|
||||
break loop
|
||||
}
|
||||
c1 := src[nSrc+1]
|
||||
if c1 < 0xa1 || 0xdf < c1 {
|
||||
err = errInvalidEUCJP
|
||||
break loop
|
||||
}
|
||||
r, size = rune(c1)+(0xff61-0xa1), 2
|
||||
|
||||
case c0 == 0x8f:
|
||||
if nSrc+2 >= len(src) {
|
||||
err = transform.ErrShortSrc
|
||||
break loop
|
||||
}
|
||||
c1 := src[nSrc+1]
|
||||
if c1 < 0xa1 || 0xfe < c1 {
|
||||
err = errInvalidEUCJP
|
||||
break loop
|
||||
}
|
||||
c2 := src[nSrc+2]
|
||||
if c2 < 0xa1 || 0xfe < c2 {
|
||||
err = errInvalidEUCJP
|
||||
break loop
|
||||
}
|
||||
r, size = '\ufffd', 3
|
||||
if i := int(c1-0xa1)*94 + int(c2-0xa1); i < len(jis0212Decode) {
|
||||
r = rune(jis0212Decode[i])
|
||||
if r == 0 {
|
||||
r = '\ufffd'
|
||||
}
|
||||
}
|
||||
|
||||
case 0xa1 <= c0 && c0 <= 0xfe:
|
||||
if nSrc+1 >= len(src) {
|
||||
err = transform.ErrShortSrc
|
||||
break loop
|
||||
}
|
||||
c1 := src[nSrc+1]
|
||||
if c1 < 0xa1 || 0xfe < c1 {
|
||||
err = errInvalidEUCJP
|
||||
break loop
|
||||
}
|
||||
r, size = '\ufffd', 2
|
||||
if i := int(c0-0xa1)*94 + int(c1-0xa1); i < len(jis0208Decode) {
|
||||
r = rune(jis0208Decode[i])
|
||||
if r == 0 {
|
||||
r = '\ufffd'
|
||||
}
|
||||
}
|
||||
|
||||
default:
|
||||
err = errInvalidEUCJP
|
||||
break loop
|
||||
}
|
||||
|
||||
if nDst+utf8.RuneLen(r) > len(dst) {
|
||||
err = transform.ErrShortDst
|
||||
break loop
|
||||
}
|
||||
nDst += utf8.EncodeRune(dst[nDst:], r)
|
||||
}
|
||||
if atEOF && err == transform.ErrShortSrc {
|
||||
err = errInvalidEUCJP
|
||||
}
|
||||
return nDst, nSrc, err
|
||||
}
|
||||
|
||||
type eucJPEncoder struct{ transform.NopResetter }
|
||||
|
||||
func (eucJPEncoder) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
|
||||
r, size := rune(0), 0
|
||||
for ; nSrc < len(src); nSrc += size {
|
||||
r = rune(src[nSrc])
|
||||
|
||||
// Decode a 1-byte rune.
|
||||
if r < utf8.RuneSelf {
|
||||
size = 1
|
||||
|
||||
} else {
|
||||
// Decode a multi-byte rune.
|
||||
r, size = utf8.DecodeRune(src[nSrc:])
|
||||
if size == 1 {
|
||||
// All valid runes of size 1 (those below utf8.RuneSelf) were
|
||||
// handled above. We have invalid UTF-8 or we haven't seen the
|
||||
// full character yet.
|
||||
if !atEOF && !utf8.FullRune(src[nSrc:]) {
|
||||
err = transform.ErrShortSrc
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
// func init checks that the switch covers all tables.
|
||||
switch {
|
||||
case encode0Low <= r && r < encode0High:
|
||||
if r = rune(encode0[r-encode0Low]); r != 0 {
|
||||
goto write2or3
|
||||
}
|
||||
case encode1Low <= r && r < encode1High:
|
||||
if r = rune(encode1[r-encode1Low]); r != 0 {
|
||||
goto write2or3
|
||||
}
|
||||
case encode2Low <= r && r < encode2High:
|
||||
if r = rune(encode2[r-encode2Low]); r != 0 {
|
||||
goto write2or3
|
||||
}
|
||||
case encode3Low <= r && r < encode3High:
|
||||
if r = rune(encode3[r-encode3Low]); r != 0 {
|
||||
goto write2or3
|
||||
}
|
||||
case encode4Low <= r && r < encode4High:
|
||||
if r = rune(encode4[r-encode4Low]); r != 0 {
|
||||
goto write2or3
|
||||
}
|
||||
case encode5Low <= r && r < encode5High:
|
||||
if 0xff61 <= r && r < 0xffa0 {
|
||||
goto write2
|
||||
}
|
||||
if r = rune(encode5[r-encode5Low]); r != 0 {
|
||||
goto write2or3
|
||||
}
|
||||
}
|
||||
err = internal.ErrASCIIReplacement
|
||||
break
|
||||
}
|
||||
|
||||
if nDst >= len(dst) {
|
||||
err = transform.ErrShortDst
|
||||
break
|
||||
}
|
||||
dst[nDst] = uint8(r)
|
||||
nDst++
|
||||
continue
|
||||
|
||||
write2or3:
|
||||
if r>>tableShift == jis0208 {
|
||||
if nDst+2 > len(dst) {
|
||||
err = transform.ErrShortDst
|
||||
break
|
||||
}
|
||||
} else {
|
||||
if nDst+3 > len(dst) {
|
||||
err = transform.ErrShortDst
|
||||
break
|
||||
}
|
||||
dst[nDst] = 0x8f
|
||||
nDst++
|
||||
}
|
||||
dst[nDst+0] = 0xa1 + uint8(r>>codeShift)&codeMask
|
||||
dst[nDst+1] = 0xa1 + uint8(r)&codeMask
|
||||
nDst += 2
|
||||
continue
|
||||
|
||||
write2:
|
||||
if nDst+2 > len(dst) {
|
||||
err = transform.ErrShortDst
|
||||
break
|
||||
}
|
||||
dst[nDst+0] = 0x8e
|
||||
dst[nDst+1] = uint8(r - (0xff61 - 0xa1))
|
||||
nDst += 2
|
||||
continue
|
||||
}
|
||||
return nDst, nSrc, err
|
||||
}
|
||||
|
||||
func init() {
|
||||
// Check that the hard-coded encode switch covers all tables.
|
||||
if numEncodeTables != 6 {
|
||||
panic("bad numEncodeTables")
|
||||
}
|
||||
}
|
296 vendor/golang.org/x/text/encoding/japanese/iso2022jp.go (generated, vendored, new file)
@@ -0,0 +1,296 @@
|
||||
// Copyright 2013 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package japanese
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"unicode/utf8"
|
||||
|
||||
"golang.org/x/text/encoding"
|
||||
"golang.org/x/text/encoding/internal"
|
||||
"golang.org/x/text/encoding/internal/identifier"
|
||||
"golang.org/x/text/transform"
|
||||
)
|
||||
|
||||
// ISO2022JP is the ISO-2022-JP encoding.
|
||||
var ISO2022JP encoding.Encoding = &iso2022JP
|
||||
|
||||
var iso2022JP = internal.Encoding{
|
||||
internal.FuncEncoding{iso2022JPNewDecoder, iso2022JPNewEncoder},
|
||||
"ISO-2022-JP",
|
||||
identifier.ISO2022JP,
|
||||
}
|
||||
|
||||
func iso2022JPNewDecoder() transform.Transformer {
|
||||
return new(iso2022JPDecoder)
|
||||
}
|
||||
|
||||
func iso2022JPNewEncoder() transform.Transformer {
|
||||
return new(iso2022JPEncoder)
|
||||
}
|
||||
|
||||
var errInvalidISO2022JP = errors.New("japanese: invalid ISO-2022-JP encoding")
|
||||
|
||||
const (
|
||||
asciiState = iota
|
||||
katakanaState
|
||||
jis0208State
|
||||
jis0212State
|
||||
)
|
||||
|
||||
const asciiEsc = 0x1b
|
||||
|
||||
type iso2022JPDecoder int
|
||||
|
||||
func (d *iso2022JPDecoder) Reset() {
|
||||
*d = asciiState
|
||||
}
|
||||
|
||||
func (d *iso2022JPDecoder) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
|
||||
r, size := rune(0), 0
|
||||
loop:
|
||||
for ; nSrc < len(src); nSrc += size {
|
||||
c0 := src[nSrc]
|
||||
if c0 >= utf8.RuneSelf {
|
||||
err = errInvalidISO2022JP
|
||||
break loop
|
||||
}
|
||||
|
||||
if c0 == asciiEsc {
|
||||
if nSrc+2 >= len(src) {
|
||||
err = transform.ErrShortSrc
|
||||
break loop
|
||||
}
|
||||
size = 3
|
||||
c1 := src[nSrc+1]
|
||||
c2 := src[nSrc+2]
|
||||
switch {
|
||||
case c1 == '$' && (c2 == '@' || c2 == 'B'):
|
||||
*d = jis0208State
|
||||
continue
|
||||
case c1 == '$' && c2 == '(':
|
||||
if nSrc+3 >= len(src) {
|
||||
err = transform.ErrShortSrc
|
||||
break loop
|
||||
}
|
||||
size = 4
|
||||
if src[nSrc+3] == 'D' {
|
||||
*d = jis0212State
|
||||
continue
|
||||
}
|
||||
case c1 == '(' && (c2 == 'B' || c2 == 'J'):
|
||||
*d = asciiState
|
||||
continue
|
||||
case c1 == '(' && c2 == 'I':
|
||||
*d = katakanaState
|
||||
continue
|
||||
}
|
||||
err = errInvalidISO2022JP
|
||||
break loop
|
||||
}
|
||||
|
||||
switch *d {
|
||||
case asciiState:
|
||||
r, size = rune(c0), 1
|
||||
|
||||
case katakanaState:
|
||||
if c0 < 0x21 || 0x60 <= c0 {
|
||||
err = errInvalidISO2022JP
|
||||
break loop
|
||||
}
|
||||
r, size = rune(c0)+(0xff61-0x21), 1
|
||||
|
||||
default:
|
||||
if c0 == 0x0a {
|
||||
*d = asciiState
|
||||
r, size = rune(c0), 1
|
||||
break
|
||||
}
|
||||
if nSrc+1 >= len(src) {
|
||||
err = transform.ErrShortSrc
|
||||
break loop
|
||||
}
|
||||
size = 2
|
||||
c1 := src[nSrc+1]
|
||||
i := int(c0-0x21)*94 + int(c1-0x21)
|
||||
if *d == jis0208State && i < len(jis0208Decode) {
|
||||
r = rune(jis0208Decode[i])
|
||||
} else if *d == jis0212State && i < len(jis0212Decode) {
|
||||
r = rune(jis0212Decode[i])
|
||||
} else {
|
||||
r = '\ufffd'
|
||||
break
|
||||
}
|
||||
if r == 0 {
|
||||
r = '\ufffd'
|
||||
}
|
||||
}
|
||||
|
||||
if nDst+utf8.RuneLen(r) > len(dst) {
|
||||
err = transform.ErrShortDst
|
||||
break loop
|
||||
}
|
||||
nDst += utf8.EncodeRune(dst[nDst:], r)
|
||||
}
|
||||
if atEOF && err == transform.ErrShortSrc {
|
||||
err = errInvalidISO2022JP
|
||||
}
|
||||
return nDst, nSrc, err
|
||||
}
|
||||
|
||||
type iso2022JPEncoder int
|
||||
|
||||
func (e *iso2022JPEncoder) Reset() {
|
||||
*e = asciiState
|
||||
}
|
||||
|
||||
func (e *iso2022JPEncoder) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
|
||||
r, size := rune(0), 0
|
||||
for ; nSrc < len(src); nSrc += size {
|
||||
r = rune(src[nSrc])
|
||||
|
||||
// Decode a 1-byte rune.
|
||||
if r < utf8.RuneSelf {
|
||||
size = 1
|
||||
|
||||
} else {
|
||||
// Decode a multi-byte rune.
|
||||
r, size = utf8.DecodeRune(src[nSrc:])
|
||||
if size == 1 {
|
||||
// All valid runes of size 1 (those below utf8.RuneSelf) were
|
||||
// handled above. We have invalid UTF-8 or we haven't seen the
|
||||
// full character yet.
|
||||
if !atEOF && !utf8.FullRune(src[nSrc:]) {
|
||||
err = transform.ErrShortSrc
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
// func init checks that the switch covers all tables.
|
||||
//
|
||||
// http://encoding.spec.whatwg.org/#iso-2022-jp says that "the index jis0212
|
||||
// is not used by the iso-2022-jp encoder due to lack of widespread support".
|
||||
//
|
||||
// TODO: do we have to special-case U+00A5 and U+203E, as per
|
||||
// http://encoding.spec.whatwg.org/#iso-2022-jp
|
||||
// Doing so would mean that "\u00a5" would not be preserved
|
||||
// after an encode-decode round trip.
|
||||
switch {
|
||||
case encode0Low <= r && r < encode0High:
|
||||
if r = rune(encode0[r-encode0Low]); r>>tableShift == jis0208 {
|
||||
goto writeJIS
|
||||
}
|
||||
case encode1Low <= r && r < encode1High:
|
||||
if r = rune(encode1[r-encode1Low]); r>>tableShift == jis0208 {
|
||||
goto writeJIS
|
||||
}
|
||||
case encode2Low <= r && r < encode2High:
|
||||
if r = rune(encode2[r-encode2Low]); r>>tableShift == jis0208 {
|
||||
goto writeJIS
|
||||
}
|
||||
case encode3Low <= r && r < encode3High:
|
||||
if r = rune(encode3[r-encode3Low]); r>>tableShift == jis0208 {
|
||||
goto writeJIS
|
||||
}
|
||||
case encode4Low <= r && r < encode4High:
|
||||
if r = rune(encode4[r-encode4Low]); r>>tableShift == jis0208 {
|
||||
goto writeJIS
|
||||
}
|
||||
case encode5Low <= r && r < encode5High:
|
||||
if 0xff61 <= r && r < 0xffa0 {
|
||||
goto writeKatakana
|
||||
}
|
||||
if r = rune(encode5[r-encode5Low]); r>>tableShift == jis0208 {
|
||||
goto writeJIS
|
||||
}
|
||||
}
|
||||
|
||||
// Switch back to ASCII state in case of error so that an ASCII
|
||||
// replacement character can be written in the correct state.
|
||||
if *e != asciiState {
|
||||
if nDst+3 > len(dst) {
|
||||
err = transform.ErrShortDst
|
||||
break
|
||||
}
|
||||
*e = asciiState
|
||||
dst[nDst+0] = asciiEsc
|
||||
dst[nDst+1] = '('
|
||||
dst[nDst+2] = 'B'
|
||||
nDst += 3
|
||||
}
|
||||
err = internal.ErrASCIIReplacement
|
||||
break
|
||||
}
|
||||
|
||||
if *e != asciiState {
|
||||
if nDst+4 > len(dst) {
|
||||
err = transform.ErrShortDst
|
||||
break
|
||||
}
|
||||
*e = asciiState
|
||||
dst[nDst+0] = asciiEsc
|
||||
dst[nDst+1] = '('
|
||||
dst[nDst+2] = 'B'
|
||||
nDst += 3
|
||||
} else if nDst >= len(dst) {
|
||||
err = transform.ErrShortDst
|
||||
break
|
||||
}
|
||||
dst[nDst] = uint8(r)
|
||||
nDst++
|
||||
continue
|
||||
|
||||
writeJIS:
|
||||
if *e != jis0208State {
|
||||
if nDst+5 > len(dst) {
|
||||
err = transform.ErrShortDst
|
||||
break
|
||||
}
|
||||
*e = jis0208State
|
||||
dst[nDst+0] = asciiEsc
|
||||
dst[nDst+1] = '$'
|
||||
dst[nDst+2] = 'B'
|
||||
nDst += 3
|
||||
} else if nDst+2 > len(dst) {
|
||||
err = transform.ErrShortDst
|
||||
break
|
||||
}
|
||||
dst[nDst+0] = 0x21 + uint8(r>>codeShift)&codeMask
|
||||
dst[nDst+1] = 0x21 + uint8(r)&codeMask
|
||||
nDst += 2
|
||||
continue
|
||||
|
||||
writeKatakana:
|
||||
if *e != katakanaState {
|
||||
if nDst+4 > len(dst) {
|
||||
err = transform.ErrShortDst
|
||||
break
|
||||
}
|
||||
*e = katakanaState
|
||||
dst[nDst+0] = asciiEsc
|
||||
dst[nDst+1] = '('
|
||||
dst[nDst+2] = 'I'
|
||||
nDst += 3
|
||||
} else if nDst >= len(dst) {
|
||||
err = transform.ErrShortDst
|
||||
break
|
||||
}
|
||||
dst[nDst] = uint8(r - (0xff61 - 0x21))
|
||||
nDst++
|
||||
continue
|
||||
}
|
||||
if atEOF && err == nil && *e != asciiState {
|
||||
if nDst+3 > len(dst) {
|
||||
err = transform.ErrShortDst
|
||||
} else {
|
||||
*e = asciiState
|
||||
dst[nDst+0] = asciiEsc
|
||||
dst[nDst+1] = '('
|
||||
dst[nDst+2] = 'B'
|
||||
nDst += 3
|
||||
}
|
||||
}
|
||||
return nDst, nSrc, err
|
||||
}
|
161 vendor/golang.org/x/text/encoding/japanese/maketables.go (generated, vendored, new file)
@@ -0,0 +1,161 @@
|
||||
// Copyright 2013 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// +build ignore
|
||||
|
||||
package main
|
||||
|
||||
// This program generates tables.go:
|
||||
// go run maketables.go | gofmt > tables.go
|
||||
|
||||
// TODO: Emoji extensions?
|
||||
// http://www.unicode.org/faq/emoji_dingbats.html
|
||||
// http://www.unicode.org/Public/UNIDATA/EmojiSources.txt
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"fmt"
|
||||
"log"
|
||||
"net/http"
|
||||
"sort"
|
||||
"strings"
|
||||
)
|
||||
|
||||
type entry struct {
|
||||
jisCode, table int
|
||||
}
|
||||
|
||||
func main() {
|
||||
fmt.Printf("// generated by go run maketables.go; DO NOT EDIT\n\n")
|
||||
fmt.Printf("// Package japanese provides Japanese encodings such as EUC-JP and Shift JIS.\n")
|
||||
fmt.Printf(`package japanese // import "golang.org/x/text/encoding/japanese"` + "\n\n")
|
||||
|
||||
reverse := [65536]entry{}
|
||||
for i := range reverse {
|
||||
reverse[i].table = -1
|
||||
}
|
||||
|
||||
tables := []struct {
|
||||
url string
|
||||
name string
|
||||
}{
|
||||
{"http://encoding.spec.whatwg.org/index-jis0208.txt", "0208"},
|
||||
{"http://encoding.spec.whatwg.org/index-jis0212.txt", "0212"},
|
||||
}
|
||||
for i, table := range tables {
|
||||
res, err := http.Get(table.url)
|
||||
if err != nil {
|
||||
log.Fatalf("%q: Get: %v", table.url, err)
|
||||
}
|
||||
defer res.Body.Close()
|
||||
|
||||
mapping := [65536]uint16{}
|
||||
|
||||
scanner := bufio.NewScanner(res.Body)
|
||||
for scanner.Scan() {
|
||||
s := strings.TrimSpace(scanner.Text())
|
||||
if s == "" || s[0] == '#' {
|
||||
continue
|
||||
}
|
||||
x, y := 0, uint16(0)
|
||||
if _, err := fmt.Sscanf(s, "%d 0x%x", &x, &y); err != nil {
|
||||
log.Fatalf("%q: could not parse %q", table.url, s)
|
||||
}
|
||||
if x < 0 || 120*94 <= x {
|
||||
log.Fatalf("%q: JIS code %d is out of range", table.url, x)
|
||||
}
|
||||
mapping[x] = y
|
||||
if reverse[y].table == -1 {
|
||||
reverse[y] = entry{jisCode: x, table: i}
|
||||
}
|
||||
}
|
||||
if err := scanner.Err(); err != nil {
|
||||
log.Fatalf("%q: scanner error: %v", table.url, err)
|
||||
}
|
||||
|
||||
fmt.Printf("// jis%sDecode is the decoding table from JIS %s code to Unicode.\n// It is defined at %s\n",
|
||||
table.name, table.name, table.url)
|
||||
fmt.Printf("var jis%sDecode = [...]uint16{\n", table.name)
|
||||
for i, m := range mapping {
|
||||
if m != 0 {
|
||||
fmt.Printf("\t%d: 0x%04X,\n", i, m)
|
||||
}
|
||||
}
|
||||
fmt.Printf("}\n\n")
|
||||
}
|
||||
|
||||
// Any run of at least separation continuous zero entries in the reverse map will
|
||||
// be a separate encode table.
|
||||
const separation = 1024
|
||||
|
||||
intervals := []interval(nil)
|
||||
low, high := -1, -1
|
||||
for i, v := range reverse {
|
||||
if v.table == -1 {
|
||||
continue
|
||||
}
|
||||
if low < 0 {
|
||||
low = i
|
||||
} else if i-high >= separation {
|
||||
if high >= 0 {
|
||||
intervals = append(intervals, interval{low, high})
|
||||
}
|
||||
low = i
|
||||
}
|
||||
high = i + 1
|
||||
}
|
||||
if high >= 0 {
|
||||
intervals = append(intervals, interval{low, high})
|
||||
}
|
||||
sort.Sort(byDecreasingLength(intervals))
|
||||
|
||||
fmt.Printf("const (\n")
|
||||
fmt.Printf("\tjis0208 = 1\n")
|
||||
fmt.Printf("\tjis0212 = 2\n")
|
||||
fmt.Printf("\tcodeMask = 0x7f\n")
|
||||
fmt.Printf("\tcodeShift = 7\n")
|
||||
fmt.Printf("\ttableShift = 14\n")
|
||||
fmt.Printf(")\n\n")
|
||||
|
||||
fmt.Printf("const numEncodeTables = %d\n\n", len(intervals))
|
||||
fmt.Printf("// encodeX are the encoding tables from Unicode to JIS code,\n")
|
||||
fmt.Printf("// sorted by decreasing length.\n")
|
||||
for i, v := range intervals {
|
||||
fmt.Printf("// encode%d: %5d entries for runes in [%5d, %5d).\n", i, v.len(), v.low, v.high)
|
||||
}
|
||||
fmt.Printf("//\n")
|
||||
fmt.Printf("// The high two bits of the value record whether the JIS code comes from the\n")
|
||||
fmt.Printf("// JIS0208 table (high bits == 1) or the JIS0212 table (high bits == 2).\n")
|
||||
fmt.Printf("// The low 14 bits are two 7-bit unsigned integers j1 and j2 that form the\n")
|
||||
fmt.Printf("// JIS code (94*j1 + j2) within that table.\n")
|
||||
fmt.Printf("\n")
|
||||
|
||||
for i, v := range intervals {
|
||||
fmt.Printf("const encode%dLow, encode%dHigh = %d, %d\n\n", i, i, v.low, v.high)
|
||||
fmt.Printf("var encode%d = [...]uint16{\n", i)
|
||||
for j := v.low; j < v.high; j++ {
|
||||
x := reverse[j]
|
||||
if x.table == -1 {
|
||||
continue
|
||||
}
|
||||
fmt.Printf("\t%d - %d: jis%s<<14 | 0x%02X<<7 | 0x%02X,\n",
|
||||
j, v.low, tables[x.table].name, x.jisCode/94, x.jisCode%94)
|
||||
}
|
||||
fmt.Printf("}\n\n")
|
||||
}
|
||||
}
|
||||
|
||||
// interval is a half-open interval [low, high).
|
||||
type interval struct {
|
||||
low, high int
|
||||
}
|
||||
|
||||
func (i interval) len() int { return i.high - i.low }
|
||||
|
||||
// byDecreasingLength sorts intervals by decreasing length.
|
||||
type byDecreasingLength []interval
|
||||
|
||||
func (b byDecreasingLength) Len() int { return len(b) }
|
||||
func (b byDecreasingLength) Less(i, j int) bool { return b[i].len() > b[j].len() }
|
||||
func (b byDecreasingLength) Swap(i, j int) { b[i], b[j] = b[j], b[i] }
|
189 vendor/golang.org/x/text/encoding/japanese/shiftjis.go (generated, vendored, new file)
@@ -0,0 +1,189 @@
|
||||
// Copyright 2013 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package japanese
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"unicode/utf8"
|
||||
|
||||
"golang.org/x/text/encoding"
|
||||
"golang.org/x/text/encoding/internal"
|
||||
"golang.org/x/text/encoding/internal/identifier"
|
||||
"golang.org/x/text/transform"
|
||||
)
|
||||
|
||||
// ShiftJIS is the Shift JIS encoding, also known as Code Page 932 and
|
||||
// Windows-31J.
|
||||
var ShiftJIS encoding.Encoding = &shiftJIS
|
||||
|
||||
var shiftJIS = internal.Encoding{
|
||||
&internal.SimpleEncoding{shiftJISDecoder{}, shiftJISEncoder{}},
|
||||
"Shift JIS",
|
||||
identifier.ShiftJIS,
|
||||
}
|
||||
|
||||
var errInvalidShiftJIS = errors.New("japanese: invalid Shift JIS encoding")
|
||||
|
||||
type shiftJISDecoder struct{ transform.NopResetter }
|
||||
|
||||
func (shiftJISDecoder) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
|
||||
r, size := rune(0), 0
|
||||
loop:
|
||||
for ; nSrc < len(src); nSrc += size {
|
||||
switch c0 := src[nSrc]; {
|
||||
case c0 < utf8.RuneSelf:
|
||||
r, size = rune(c0), 1
|
||||
|
||||
case 0xa1 <= c0 && c0 < 0xe0:
|
||||
r, size = rune(c0)+(0xff61-0xa1), 1
|
||||
|
||||
case (0x81 <= c0 && c0 < 0xa0) || (0xe0 <= c0 && c0 < 0xfd):
|
||||
if c0 <= 0x9f {
|
||||
c0 -= 0x70
|
||||
} else {
|
||||
c0 -= 0xb0
|
||||
}
|
||||
c0 = 2*c0 - 0x21
|
||||
|
||||
if nSrc+1 >= len(src) {
|
||||
err = transform.ErrShortSrc
|
||||
break loop
|
||||
}
|
||||
c1 := src[nSrc+1]
|
||||
switch {
|
||||
case c1 < 0x40:
|
||||
err = errInvalidShiftJIS
|
||||
break loop
|
||||
case c1 < 0x7f:
|
||||
c0--
|
||||
c1 -= 0x40
|
||||
case c1 == 0x7f:
|
||||
err = errInvalidShiftJIS
|
||||
break loop
|
||||
case c1 < 0x9f:
|
||||
c0--
|
||||
c1 -= 0x41
|
||||
case c1 < 0xfd:
|
||||
c1 -= 0x9f
|
||||
default:
|
||||
err = errInvalidShiftJIS
|
||||
break loop
|
||||
}
|
||||
r, size = '\ufffd', 2
|
||||
if i := int(c0)*94 + int(c1); i < len(jis0208Decode) {
|
||||
r = rune(jis0208Decode[i])
|
||||
if r == 0 {
|
||||
r = '\ufffd'
|
||||
}
|
||||
}
|
||||
|
||||
default:
|
||||
err = errInvalidShiftJIS
|
||||
break loop
|
||||
}
|
||||
|
||||
if nDst+utf8.RuneLen(r) > len(dst) {
|
||||
err = transform.ErrShortDst
|
||||
break loop
|
||||
}
|
||||
nDst += utf8.EncodeRune(dst[nDst:], r)
|
||||
}
|
||||
if atEOF && err == transform.ErrShortSrc {
|
||||
err = errInvalidShiftJIS
|
||||
}
|
||||
return nDst, nSrc, err
|
||||
}
|
||||
|
||||
type shiftJISEncoder struct{ transform.NopResetter }
|
||||
|
||||
func (shiftJISEncoder) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
|
||||
r, size := rune(0), 0
|
||||
loop:
|
||||
for ; nSrc < len(src); nSrc += size {
|
||||
r = rune(src[nSrc])
|
||||
|
||||
// Decode a 1-byte rune.
|
||||
if r < utf8.RuneSelf {
|
||||
size = 1
|
||||
|
||||
} else {
|
||||
// Decode a multi-byte rune.
|
||||
r, size = utf8.DecodeRune(src[nSrc:])
|
||||
if size == 1 {
|
||||
// All valid runes of size 1 (those below utf8.RuneSelf) were
|
||||
// handled above. We have invalid UTF-8 or we haven't seen the
|
||||
// full character yet.
|
||||
if !atEOF && !utf8.FullRune(src[nSrc:]) {
|
||||
err = transform.ErrShortSrc
|
||||
break loop
|
||||
}
|
||||
}
|
||||
|
||||
// func init checks that the switch covers all tables.
|
||||
switch {
|
||||
case encode0Low <= r && r < encode0High:
|
||||
if r = rune(encode0[r-encode0Low]); r>>tableShift == jis0208 {
|
||||
goto write2
|
||||
}
|
||||
case encode1Low <= r && r < encode1High:
|
||||
if r = rune(encode1[r-encode1Low]); r>>tableShift == jis0208 {
|
||||
goto write2
|
||||
}
|
||||
case encode2Low <= r && r < encode2High:
|
||||
if r = rune(encode2[r-encode2Low]); r>>tableShift == jis0208 {
|
||||
goto write2
|
||||
}
|
||||
case encode3Low <= r && r < encode3High:
|
||||
if r = rune(encode3[r-encode3Low]); r>>tableShift == jis0208 {
|
||||
goto write2
|
||||
}
|
||||
case encode4Low <= r && r < encode4High:
|
||||
if r = rune(encode4[r-encode4Low]); r>>tableShift == jis0208 {
|
||||
goto write2
|
||||
}
|
||||
case encode5Low <= r && r < encode5High:
|
||||
if 0xff61 <= r && r < 0xffa0 {
|
||||
r -= 0xff61 - 0xa1
|
||||
goto write1
|
||||
}
|
||||
if r = rune(encode5[r-encode5Low]); r>>tableShift == jis0208 {
|
||||
goto write2
|
||||
}
|
||||
}
|
||||
err = internal.ErrASCIIReplacement
|
||||
break
|
||||
}
|
||||
|
||||
write1:
|
||||
if nDst >= len(dst) {
|
||||
err = transform.ErrShortDst
|
||||
break
|
||||
}
|
||||
dst[nDst] = uint8(r)
|
||||
nDst++
|
||||
continue
|
||||
|
||||
write2:
|
||||
j1 := uint8(r>>codeShift) & codeMask
|
||||
j2 := uint8(r) & codeMask
|
||||
if nDst+2 > len(dst) {
|
||||
err = transform.ErrShortDst
|
||||
break loop
|
||||
}
|
||||
if j1 <= 61 {
|
||||
dst[nDst+0] = 129 + j1/2
|
||||
} else {
|
||||
dst[nDst+0] = 193 + j1/2
|
||||
}
|
||||
if j1&1 == 0 {
|
||||
dst[nDst+1] = j2 + j2/63 + 64
|
||||
} else {
|
||||
dst[nDst+1] = j2 + 159
|
||||
}
|
||||
nDst += 2
|
||||
continue
|
||||
}
|
||||
return nDst, nSrc, err
|
||||
}
|
26971 vendor/golang.org/x/text/encoding/japanese/tables.go (generated, vendored, new file); diff suppressed because it is too large
47 vendor/golang.org/x/text/encoding/korean/all_test.go (generated, vendored, new file)
@@ -0,0 +1,47 @@
|
||||
// Copyright 2015 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package korean
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"golang.org/x/text/encoding"
|
||||
"golang.org/x/text/encoding/internal"
|
||||
"golang.org/x/text/transform"
|
||||
)
|
||||
|
||||
func dec(e encoding.Encoding) (dir string, t transform.Transformer, err error) {
|
||||
return "Decode", e.NewDecoder(), nil
|
||||
}
|
||||
func enc(e encoding.Encoding) (dir string, t transform.Transformer, err error) {
|
||||
return "Encode", e.NewEncoder(), internal.ErrASCIIReplacement
|
||||
}
|
||||
|
||||
func TestNonRepertoire(t *testing.T) {
|
||||
testCases := []struct {
|
||||
init func(e encoding.Encoding) (string, transform.Transformer, error)
|
||||
e encoding.Encoding
|
||||
src, want string
|
||||
}{
|
||||
{dec, EUCKR, "\xfe\xfe", "\ufffd"},
|
||||
// {dec, EUCKR, "א", "\ufffd"}, // TODO: why is this different?
|
||||
|
||||
{enc, EUCKR, "א", ""},
|
||||
{enc, EUCKR, "aא", "a"},
|
||||
{enc, EUCKR, "\uac00א", "\xb0\xa1"},
|
||||
// TODO: should we also handle Jamo?
|
||||
}
|
||||
for _, tc := range testCases {
|
||||
dir, tr, wantErr := tc.init(tc.e)
|
||||
|
||||
dst, _, err := transform.String(tr, tc.src)
|
||||
if err != wantErr {
|
||||
t.Errorf("%s %v(%q): got %v; want %v", dir, tc.e, tc.src, err, wantErr)
|
||||
}
|
||||
if got := string(dst); got != tc.want {
|
||||
t.Errorf("%s %v(%q):\ngot %q\nwant %q", dir, tc.e, tc.src, got, tc.want)
|
||||
}
|
||||
}
|
||||
}
|
Some files were not shown because too many files have changed in this diff.