Druid logs an ERROR for a SQL Server bulk-load (BULK INSERT) statement, even though the statement executes correctly.

91zkwejq · asked 4 months ago · in Druid

Druid version: 1.0.19

The exception is as follows:

103706 ERROR [2018-09-03 15:04:15] merge sql error, dbType sqlserver, sql :
BULK INSERT aaa FROM 'C:/xxx/xxx/xxx/xxx/aaa.csv' WITH ( FIRSTROW=2, FIELDTERMINATOR=',', ROWTERMINATOR='\n', KEEPNULLS);
com.alibaba.druid.sql.parser.ParserException: syntax error, error in :'BULK INSERT aaa FROM 'C',expect IDENTIFIER, actual IDENTIFIER BULK
at com.alibaba.druid.sql.parser.SQLParser.printError(SQLParser.java:232)
at com.alibaba.druid.sql.parser.SQLStatementParser.parseStatementList(SQLStatementParser.java:407)
at com.alibaba.druid.sql.parser.SQLStatementParser.parseStatementList(SQLStatementParser.java:145)
at com.alibaba.druid.sql.parser.SQLStatementParser.parseStatementList(SQLStatementParser.java:140)
at com.alibaba.druid.sql.visitor.ParameterizedOutputVisitorUtils.parameterize(ParameterizedOutputVisitorUtils.java:53)
at com.alibaba.druid.filter.stat.StatFilter.mergeSql(StatFilter.java:145)
at com.alibaba.druid.filter.stat.StatFilter.createSqlStat(StatFilter.java:630)
at com.alibaba.druid.filter.stat.StatFilter.internalBeforeStatementExecute(StatFilter.java:397)
at com.alibaba.druid.filter.stat.StatFilter.statementExecuteBefore(StatFilter.java:345)
at com.alibaba.druid.filter.FilterEventAdapter.statement_execute(FilterEventAdapter.java:185)
at com.alibaba.druid.filter.FilterChainImpl.statement_execute(FilterChainImpl.java:2487)
at com.alibaba.druid.proxy.jdbc.StatementProxyImpl.execute(StatementProxyImpl.java:137)
at com.alibaba.druid.pool.DruidPooledStatement.execute(DruidPooledStatement.java:421)
at org.springframework.jdbc.core.JdbcTemplate$1ExecuteStatementCallback.doInStatement(JdbcTemplate.java:435)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:408)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:443)
at com.hexin.sync.task.download.FIleIntoDBThread.run(FIleIntoDBThread.java:98)
at java.lang.Thread.run(Unknown Source)
136792 ERROR [2018-09-03 15:04:48] slow sql 33084 millis.
BULK INSERT aaa FROM 'C:/xxx/xxx/xxx/xxx/aaa.csv' WITH ( FIRSTROW=2, FIELDTERMINATOR=',', ROWTERMINATOR='\n', KEEPNULLS);
[]

The official SQL Server documentation gives the BULK INSERT syntax as:

BULK INSERT
[ database_name . [ schema_name ] . | schema_name . ] [ table_name | view_name ]
FROM 'data_file'
[ WITH
(
[ [ , ] BATCHSIZE = batch_size ]
[ [ , ] CHECK_CONSTRAINTS ]
[ [ , ] CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' } ]
[ [ , ] DATAFILETYPE =
{ 'char' | 'native'| 'widechar' | 'widenative' } ]
[ [ , ] DATASOURCE = 'data_source_name' ]
[ [ , ] ERRORFILE = 'file_name' ]
[ [ , ] ERRORFILE_DATA_SOURCE = 'data_source_name' ]
[ [ , ] FIRSTROW = first_row ]
[ [ , ] FIRE_TRIGGERS ]
[ [ , ] FORMATFILE_DATASOURCE = 'data_source_name' ]
[ [ , ] KEEPIDENTITY ]
[ [ , ] KEEPNULLS ]
[ [ , ] KILOBYTES_PER_BATCH = kilobytes_per_batch ]
[ [ , ] LASTROW = last_row ]
[ [ , ] MAXERRORS = max_errors ]
[ [ , ] ORDER ( { column [ ASC | DESC ] } [ ,...n ] ) ]
[ [ , ] ROWS_PER_BATCH = rows_per_batch ]
[ [ , ] ROWTERMINATOR = 'row_terminator' ]
[ [ , ] TABLOCK ]

-- input file format options
[ [ , ] FORMAT = 'CSV' ]
[ [ , ] FIELDQUOTE = 'quote_characters']
[ [ , ] FORMATFILE = 'format_file_path' ]
[ [ , ] FIELDTERMINATOR = 'field_terminator' ]
[ [ , ] ROWTERMINATOR = 'row_terminator' ]
)]
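The stack trace shows where the error comes from: before executing the statement, Druid's StatFilter tries to parse and parameterize the SQL for its statistics ("merge sql") feature, and Druid's SQL Server parser (at least in 1.0.19) does not recognize the BULK INSERT keyword. The statement itself is handed to the JDBC driver unchanged, which is why it still executes. The parse failure can be reproduced in isolation with the same call the filter makes (a sketch, assuming a druid 1.0.x jar on the classpath; the table and file path are placeholders):

```java
import com.alibaba.druid.sql.parser.ParserException;
import com.alibaba.druid.sql.visitor.ParameterizedOutputVisitorUtils;

public class BulkInsertParseDemo {
    public static void main(String[] args) {
        // Placeholder statement in the same shape as the failing one
        String sql = "BULK INSERT aaa FROM 'C:/data/aaa.csv' WITH (FIRSTROW=2)";
        try {
            // The same call StatFilter.mergeSql makes before execution
            ParameterizedOutputVisitorUtils.parameterize(sql, "sqlserver");
        } catch (ParserException e) {
            // Only the stat/merge step fails; the real statement still
            // reaches the driver untouched and runs normally.
            System.out.println("parse failed: " + e.getMessage());
        }
    }
}
```

So the ERROR line is noise from the monitoring filter, not a sign that the import failed.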


nzkunb0c1#

PostgreSQL's bulk-load command COPY fails with the same error.

The exception is as follows:

66498 ERROR [2018-09-05 16:55:08] merge sql error, dbType postgresql, sql :
COPY aaa FROM '/xxx/xxx/xxx/xxx/xxx/aaa.csv' WITH csv header;
com.alibaba.druid.sql.parser.ParserException: syntax error, error in :'COPY aaa FROM '/xxx/xxx',expect IDENTIFIER, actual IDENTIFIER COPY
at com.alibaba.druid.sql.parser.SQLParser.printError(SQLParser.java:232)
at com.alibaba.druid.sql.parser.SQLStatementParser.parseStatementList(SQLStatementParser.java:407)
at com.alibaba.druid.sql.parser.SQLStatementParser.parseStatementList(SQLStatementParser.java:145)
at com.alibaba.druid.sql.parser.SQLStatementParser.parseStatementList(SQLStatementParser.java:140)
at com.alibaba.druid.sql.visitor.ParameterizedOutputVisitorUtils.parameterize(ParameterizedOutputVisitorUtils.java:53)
at com.alibaba.druid.filter.stat.StatFilter.mergeSql(StatFilter.java:145)
at com.alibaba.druid.filter.stat.StatFilter.createSqlStat(StatFilter.java:630)
at com.alibaba.druid.filter.stat.StatFilter.internalBeforeStatementExecute(StatFilter.java:397)
at com.alibaba.druid.filter.stat.StatFilter.statementExecuteBefore(StatFilter.java:345)
at com.alibaba.druid.filter.FilterEventAdapter.statement_execute(FilterEventAdapter.java:185)
at com.alibaba.druid.filter.FilterChainImpl.statement_execute(FilterChainImpl.java:2487)
at com.alibaba.druid.proxy.jdbc.StatementProxyImpl.execute(StatementProxyImpl.java:137)
at com.alibaba.druid.pool.DruidPooledStatement.execute(DruidPooledStatement.java:421)
at org.springframework.jdbc.core.JdbcTemplate$1ExecuteStatementCallback.doInStatement(JdbcTemplate.java:435)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:408)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:443)
at com.hexin.sync.task.download.FIleIntoDBThread.run(FIleIntoDBThread.java:98)
at java.lang.Thread.run(Thread.java:745)
69776 INFO [2018-09-05 16:55:11] File /xxx/xxx/xxx/xxx/xxx/aaa.csv imported successfully, took 3281 ms

The official COPY command documentation:

COPY table_name [ ( column_name [, ...] ) ]
FROM { 'filename' | PROGRAM 'command' | STDIN }
[ [ WITH ] ( option [, ...] ) ]

COPY { table_name [ ( column_name [, ...] ) ] | ( query ) }
TO { 'filename' | PROGRAM 'command' | STDOUT }
[ [ WITH ] ( option [, ...] ) ]

where option can be one of:
FORMAT format_name
OIDS [ boolean ]
FREEZE [ boolean ]
DELIMITER 'delimiter_character'
NULL 'null_string'
HEADER [ boolean ]
QUOTE 'quote_character'
ESCAPE 'escape_character'
FORCE_QUOTE { ( column_name [, ...] ) | * }
FORCE_NOT_NULL ( column_name [, ...] )
FORCE_NULL ( column_name [, ...] )
ENCODING 'encoding_name'
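Since in both cases the error is raised by the stat filter's SQL merging step rather than by execution, one workaround is to keep the stat filter but switch off SQL merging, so statements Druid's parser cannot handle are no longer fed to it. A minimal configuration sketch, assuming a programmatically built DruidDataSource and using Druid's documented `druid.stat.mergeSql` connection property (the JDBC URL is a placeholder):

```java
import com.alibaba.druid.pool.DruidDataSource;

public class DataSourceConfig {
    public static DruidDataSource build() throws Exception {
        DruidDataSource ds = new DruidDataSource();
        ds.setUrl("jdbc:sqlserver://localhost;databaseName=test"); // placeholder URL
        ds.setFilters("stat");                       // keep execution statistics
        // Stop StatFilter from parsing SQL for merging, which is the
        // step that throws ParserException on BULK INSERT / COPY.
        ds.setConnectionProperties("druid.stat.mergeSql=false");
        return ds;
    }
}
```

The trade-off is that identical statements with different literal values are no longer merged into one row in the stat view.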
