
DB2 data recovery: import vs. load

When I first started learning DB2, I thought export to and import from were its only statements for moving data in and out.
Then, after a database crash, I discovered a better option for recovery: restore.
Yesterday, while restoring a database, I kept hitting "transaction log full" errors. After asking colleagues at the company, I learned that besides those two statements, DB2 can also use load to bring data back in.
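
For reference, these commands look roughly like this in the DB2 CLP (the database name, table name, file paths, and backup timestamp below are placeholders for illustration, not values from this article):

 db2 => export to /tmp/mytable.del of del select * from mytable @
 db2 => import from /tmp/mytable.del of del insert into mytable @
 db2 => restore database mydb from /backup/dir taken at 20131104101144 @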

Both Import and Load can move data into a DB2 server, but the two differ in important ways.
Import actually performs SQL INSERT operations. Like a regular INSERT, an Import fires triggers, enforces all constraints, and goes through the database buffer pool. It is comparable to a conventional-path load with Oracle's SQL*Loader.

Load gets data files into tables much faster. Load does not go through the DB2 SQL engine, so it fires no triggers and uses no buffer pool, and table constraints must be enforced separately. Load outperforms Import because it is a low-level data utility: it works directly on the data pages on disk, in three phases called LOAD, BUILD, and DELETE. Load is comparable to a direct-path load with Oracle's SQL*Loader.
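
A practical consequence of that last point: after a Load into a table that has check or foreign key constraints, DB2 leaves the table in set integrity pending state, and the loaded rows have to be validated explicitly with SET INTEGRITY. A minimal sketch, assuming a hypothetical table myschema.orders with such constraints:

 db2 => load from orders.del of del insert into myschema.orders @
 db2 => set integrity for myschema.orders immediate checked @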

The following example illustrates the difference:

db2 => connect @
 Database Connection Information
 Database server = DB2/LINUX 9.7.0
 SQL authorization ID = FAVORINFO
 Local database alias = DBTEST
 db2 => create table test(id int not null primary key,name varchar(20)) @
 DB20000I The SQL command completed successfully.
 db2 => commit @
 DB20000I The SQL command completed successfully.
 db2 => create trigger tri_test
 db2 (cont.) => no cascade
 db2 (cont.) => before insert on test
 db2 (cont.) => referencing new as n
 db2 (cont.) => for each row
 db2 (cont.) => begin atomic
 db2 (cont.) => set n.name=n.name||' [import test]';
 db2 (cont.) => end @
 DB20000I The SQL command completed successfully.
 db2 => commit @
 DB20000I The SQL command completed successfully.
 db2 => insert into test values(1,'a') @
 DB20000I The SQL command completed successfully.
 db2 => commit @ 
 DB20000I The SQL command completed successfully.
 db2 => select * from test @
 ID NAME
 ----------- --------------------
 1 a [import test]
 1 record(s) selected.
 db2 => ! cat test.txt @
 1,"aa"
 2,"bb"
 3,"cc"
 4,"dd"
 5,"ee"
 6,"ff"
 db2 => import from test.txt of del insert into test @
 SQL3109N The utility is beginning to load data from file "test.txt".
 SQL3148W A row from the input file was not inserted into the table. SQLCODE "-803" was returned.
 SQL0803N One or more values in the INSERT statement, UPDATE statement, or foreign key update caused by a DELETE statement are not valid because the
 primary key, unique constraint or unique index identified by "1" constrains table "FAVORINFO.TEST" from having duplicate values for the index key.
 SQLSTATE=23505
 SQL3185W The previous error occurred while processing data from row "1" of the input file.
 SQL3110N The utility has completed processing. "6" rows were read from the input file.
 SQL3221W ...Begin COMMIT WORK. Input Record Count = "6".
 SQL3222W ...COMMIT of any database changes was successful.
 SQL3149N "6" rows were processed from the input file. "5" rows were successfully inserted into the table. "1" rows were rejected.
 Number of rows read = 6
 Number of rows skipped = 0
 Number of rows inserted = 5
 Number of rows updated = 0
 Number of rows rejected = 1
 Number of rows committed = 6
 db2 => select * from test @
 ID NAME
 ----------- --------------------
 1 a [import test]
 2 bb [import test]
 3 cc [import test]
 4 dd [import test]
 5 ee [import test]
 6 ff [import test]
 6 record(s) selected.

You can see that record 1 in the file was rejected because of the primary key conflict (the utility reports Number of rows rejected = 1), while every row that did get imported also fired the trigger.
Now let's see how the Load utility behaves:

 db2 => delete from test where id>1 @
 DB20000I The SQL command completed successfully.
 db2 => commit @ 
 DB20000I The SQL command completed successfully.
 db2 => select * from test @
 ID NAME
 ----------- --------------------
 1 a [import test]
 1 record(s) selected.

First, delete the newly imported records, keeping only the row with ID = 1.

 db2 => ! cat test.txt
 db2 (cont.) => @
 1,"aa"
 2,"bb"
 3,"cc"
 4,"dd"
 5,"ee"
 6,"ff"
 db2 => load from test.txt of del insert into test @
 SQL3501W The table space(s) in which the table resides will not be placed in backup pending state since forward recovery is disabled for the database.
 SQL3109N The utility is beginning to load data from file "/home/favorinfo/test.txt".
 SQL3500W The utility is beginning the "LOAD" phase at time "2013-11-04 10:11:44.784072".
 SQL3519W Begin Load Consistency Point. Input record count = "0".
 SQL3520W Load Consistency Point was successful.
 SQL3110N The utility has completed processing. "6" rows were read from the input file.
 SQL3519W Begin Load Consistency Point. Input record count = "6".
 SQL3520W Load Consistency Point was successful.
 SQL3515W The utility has finished the "LOAD" phase at time "2013-11-04 10:11:44.808464".
 SQL3500W The utility is beginning the "BUILD" phase at time "2013-11-04 10:11:44.809206".
 SQL3213I The indexing mode is "REBUILD".
 SQL3515W The utility has finished the "BUILD" phase at time "2013-11-04 10:11:44.914248". 
 SQL3500W The utility is beginning the "DELETE" phase at time "2013-11-04 10:11:44.952664".
 SQL3509W The utility has deleted "1" rows from the table.
 SQL3515W The utility has finished the "DELETE" phase at time "2013-11-04 10:11:44.979506".
 Number of rows read = 6
 Number of rows skipped = 0
 Number of rows loaded = 6
 Number of rows rejected = 0
 Number of rows deleted = 1
 Number of rows committed = 6
 db2 => select * from test @
 ID NAME
 ----------- --------------------
 1 a [import test]
 2 bb
 3 cc
 4 dd
 5 ee
 6 ff
 6 record(s) selected.

Clearly the loaded rows did not fire the trigger, and no rows were rejected; instead, one record was deleted. This is because Load first loads every qualifying record into the table and then, in its DELETE phase, removes the rows that violate the unique or primary key constraint.
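
If you would rather keep the rows that the DELETE phase removes, Load accepts a FOR EXCEPTION clause that copies them into an exception table. A hedged sketch, where test_exc is a hypothetical exception table with the same columns as test (CREATE TABLE ... LIKE copies the columns but not the primary key, which is what an exception table needs):

 db2 => create table test_exc like test @
 db2 => load from test.txt of del insert into test for exception test_exc @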

When using the load command, first figure out whether the target database is local. If it is not, you need load client from ...; otherwise load from ... is enough.
Whenever possible, prefer load for bulk imports, because it is far more efficient than import.
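
For example, if the database is catalogued on a remote server and the input file sits on the client machine, use load client; if the file is visible on the database server itself, a plain load is enough (paths, schema, and table names below are placeholders):

 db2 => load client from /home/user/data.del of del insert into myschema.mytable @
 db2 => load from /db2data/data.del of del insert into myschema.mytable @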

In real life, however, there are all sorts of reasons why only import can be used.
If you simply run import from c:/data.ixf of ixf insert into table, you will find it very slow; once the imported data reaches a few thousand rows the lag becomes obvious, averaging roughly one minute per 100 rows.

A better solution is the compound file type modifier (x ranges from 1 to 100), which sends up to x rows to the server in a single compound SQL block instead of one INSERT at a time.

import from [fileAddress] of [type] modified by compound=100 insert into [table]

With this, importing 100 rows takes about one second on average, roughly a 100-fold improvement.
You can also have import commit automatically every hundred or every thousand rows or so, depending on the data volume; keeping each commit batch below about 30,000 rows is usually appropriate.
For example:

import from ./data.ixf of ixf modified by compound=100 commitcount 1000 insert into table
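
If you also want a record of any rows that import rejects, the MESSAGES clause writes the utility's warnings to a file; a variant of the command above, with msgs.txt as a placeholder file name:

import from ./data.ixf of ixf modified by compound=100 commitcount 1000 messages msgs.txt insert into table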