README.md in embulk-input-sqlserver-0.10.0 vs README.md in embulk-input-sqlserver-0.10.1

- removed (present only in 0.10.0)
+ added (present only in 0.10.1)

@@ -7,11 +7,12 @@
 * **Plugin type**: input
 * **Resume supported**: yes

 ## Configuration

-- **driver_path**: path to the jar file of Microsoft SQL Server JDBC driver. If not set, open-source driver (jTDS driver) is used (string)
+- **driver_path**: path to the jar file of the Microsoft SQL Server JDBC driver. If not set, the bundled JDBC driver (Microsoft SQL Server JDBC driver 7.2.2) will be used. (string)
+- **driver_type**: the current version of embulk-input-sqlserver uses the Microsoft SQL Server JDBC driver by default, while version 0.10.0 or older uses the jTDS driver by default. You can still use the jTDS driver by setting this option to "jtds". (string, default: "mssql-jdbc")
 - **host**: database host name (string, required if url is not set)
 - **port**: database port number (integer, default: 1433)
 - **integratedSecutiry**: whether to use integrated authentication or not. The `sqljdbc_auth.dll` must be located on the Java library path if using integrated authentication. (boolean, default: false)
 ```
 rem C:\drivers\sqljdbc_auth.dll
@@ -36,17 +37,18 @@
 - **connect_timeout**: timeout for the driver to connect. 0 means the SQL Server default (15 seconds). (integer (seconds), default: 300)
 - **application_name**: application name used to identify a connection in profiling and logging tools. (string, default: "embulk-input-sqlserver")
 - **socket_timeout**: timeout for executing the query. 0 means no timeout. (integer (seconds), default: 1800)
 - **options**: extra JDBC properties (hash, default: {})
 - **incremental**: if true, enables incremental loading. See the next section for details. (boolean, default: false)
-- **incremental_columns**: column names for incremental loading (array of strings, default: use primary keys)
+- **incremental_columns**: column names for incremental loading (array of strings, default: use primary keys). Columns of integer types, string types and `datetime2` are supported.
 - **last_record**: values of the last record for incremental loading (array of objects, default: load all records)
 - **default_timezone**: If the sql type of a column is `date`/`time`/`datetime` and the embulk type is `string`, column values are formatted in this default_timezone. You can overwrite the timezone for each column using the column_options option. (string, default: `UTC`)
 - **default_column_options**: advanced: column_options for each JDBC type as default. Key-value pairs where the key is a JDBC type (e.g. 'DATE', 'BIGINT') and the value is the same as column_options's value.
 - **column_options**: advanced: key-value pairs where the key is a column name and the value is options for the column.
 - **value_type**: embulk gets values from the database as this value_type. Typically, the value_type determines the `getXXX` method of `java.sql.PreparedStatement`. (string, default: depends on the sql type of the column. Available values are: `long`, `double`, `float`, `decimal`, `boolean`, `string`, `json`, `date`, `time`, `timestamp`)
+ NOTE: the default value_type for DATE, TIME and DATETIME2 is `string`, because the jTDS driver, the default JDBC driver for older versions of embulk-input-sqlserver, returns Types.VARCHAR as the JDBC type for these types.
 - **type**: Column values are converted to this embulk type. Available values are: `boolean`, `long`, `double`, `string`, `json`, `timestamp`. By default, the embulk type is determined according to the sql type of the column (or value_type if specified).
 - **timestamp_format**: If the sql type of the column is `date`/`time`/`datetime` and the embulk type is `string`, column values are formatted by this timestamp_format. And if the embulk type is `timestamp`, this timestamp_format may be used in the output plugin. For example, the stdout plugin uses the timestamp_format, but the *csv formatter plugin doesn't*. (string, default: `%Y-%m-%d` for `date`, `%H:%M:%S` for `time`, `%Y-%m-%d %H:%M:%S` for `timestamp`)
 - **timezone**: If the sql type of the column is `date`/`time`/`datetime` and the embulk type is `string`, column values are formatted in this timezone.
@@ -69,16 +71,16 @@
 ORDER BY updated_at, id
 ```

 When bulk data loading finishes successfully, it outputs the `last_record: ` parameter as config-diff so that the next execution uses it.

-At the next execution, when `last_record: ` is also set, this plugin generates additional WHERE conditions to load records larger than the last record. For example, if `last_record: ["2017-01-01 00:32:12", 5291]` is set,
+At the next execution, when `last_record: ` is also set, this plugin generates additional WHERE conditions to load records larger than the last record. For example, if `last_record: ["2017-01-01 00:32:12.4876590", 5291]` is set,

 ```
 SELECT * FROM (
   ...original query is here...
 )
-WHERE updated_at > '2017-01-01 00:32:12' OR (updated_at = '2017-01-01 00:32:12' AND id > 5291)
+WHERE updated_at > '2017-01-01 00:32:12.4876590' OR (updated_at = '2017-01-01 00:32:12.4876590' AND id > 5291)
 ORDER BY updated_at, id
 ```

 Then, it updates `last_record: ` so that the next execution uses the updated last_record.
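To illustrate the new `driver_type` option described in the first hunk, here is a minimal sketch of an Embulk config that keeps the pre-0.10.1 behavior by selecting the jTDS driver. The connection values (`host`, `user`, `password`, `database`, `table`) are placeholders, not values from the README.

```
in:
  type: sqlserver
  host: localhost          # placeholder
  port: 1433
  user: sa                 # placeholder
  password: ""             # placeholder
  database: my_database    # placeholder
  table: my_table          # placeholder
  driver_type: jtds        # omit (or set to "mssql-jdbc") to use the bundled Microsoft JDBC driver
out:
  type: stdout
```

Omitting `driver_type` on 0.10.1 or later selects the bundled Microsoft SQL Server JDBC driver 7.2.2; no separate `driver_path` is needed in that case.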
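And to illustrate the incremental-loading change (full fractional-second precision for `datetime2` values in `last_record`), a sketch of an incremental config, assuming a hypothetical table with a `datetime2` column `updated_at` and an integer primary key `id`:

```
in:
  type: sqlserver
  host: localhost          # placeholder
  port: 1433
  user: sa                 # placeholder
  password: ""             # placeholder
  database: my_database    # placeholder
  table: my_table          # placeholder
  incremental: true
  incremental_columns: [updated_at, id]   # hypothetical column names
  # last_record is normally taken from the previous run's config-diff;
  # with 0.10.1 a datetime2 value keeps its fractional seconds, as in the WHERE clause above.
  last_record: ["2017-01-01 00:32:12.4876590", 5291]
out:
  type: stdout
```

On the first run you would omit `last_record:` and let the plugin emit it as config-diff for the next execution.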