I'm updating every row in a 200MB table. I'm trying to work out why it's
running so slowly, and I've found that it increases the size of the log file
by 625MB and then, I believe, empties it but leaves it at the same size. I've
got the recovery mode set to simple, which I thought was not meant to use the
log file. Can someone tell me what's going on and whether there's some way of
speeding this up?
Thanks
Michael|||Hi Michael,
Simple recovery mode still uses the log file during the transaction, but
then truncates it (at the next checkpoint) once the transaction completes.
The log file is used during the transaction so that the transaction can be
rolled back if necessary (e.g. if it hits an error or if you cancel the
update halfway through). To limit transaction log growth during large
transactions, many people split the work into batches, as sketched below.
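As a rough illustration only (the table dbo.BigTable, column col, the new
value, and the batch size are all hypothetical placeholders, and UPDATE TOP
requires SQL Server 2005 or later), a batched version might look like:
WHILE 1 = 1
BEGIN
    -- Update at most 10000 rows per statement; each batch is its own
    -- implicit transaction, so in simple recovery the log space it uses
    -- can be reused after the next checkpoint instead of accumulating.
    UPDATE TOP (10000) dbo.BigTable
    SET col = 'NewValue'
    WHERE col <> 'NewValue' -- skip rows already updated
    IF @@ROWCOUNT = 0 BREAK -- nothing left to do
END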
Please refer to the following search for many threads related to this topic:
http://groups.google.com/groups?as_...=2005&safe=off
Adam Machanic
SQL Server MVP
http://www.datamanipulation.net
|||Every transaction is logged, even if you use the simple recovery mode. That
way the transaction can be rolled back.
To speed up the update you need to split it into smaller chunks. If the
individual updates are small enough, they will fit into the existing
transaction log without making it grow; the file growth is probably why the
update is running so slowly.
How do you split it up? It depends. If you are setting a column to a
specific value, then you can use something like:
SET ROWCOUNT 1000
AGAIN:
UPDATE tableName SET columnName = <newColumnValue>
    WHERE columnName != <newColumnValue>
IF @@ROWCOUNT > 0 GOTO AGAIN
SET ROWCOUNT 0
(Sometimes it's better to use > or < instead of !=, so the predicate can use
an index range seek, which speeds things up even more.)
There are many ways to do this, but I'd need a better description of what
you're trying to do and DDL for the table.
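For example, one common alternative (a sketch only; it assumes a hypothetical
table with an integer primary key column named id, and 'NewValue' is a
placeholder) walks the table in key ranges, so each batch can do an index
range seek, the same idea as the > / < suggestion above:
DECLARE @start INT, @maxid INT, @batch INT
SET @batch = 10000
SET @start = 0
SELECT @maxid = MAX(id) FROM tableName
WHILE @start < @maxid
BEGIN
    -- each batch touches one contiguous key range
    UPDATE tableName
    SET columnName = 'NewValue'
    WHERE id > @start AND id <= @start + @batch
    SET @start = @start + @batch
END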
"Michael C" <mculley@.NOSPAMoptushome.com.au> wrote in message
news:evIv9OVzFHA.3152@.TK2MSFTNGP10.phx.gbl...
> I'm updating every row in a 200mb table. I'm trying to work out why it
> running so slowly and have found that it increases the size of the log
> file by 625mb an then empties it, I believe, but leaves it at the same
> size. I've got the recovery mode set to simple which I thought was not
> meant to use the log file. Can someone tell me what's going on and if
> there's some method of speeding this up.
> Thanks
> Michael
|||Hi,
It's better to update or delete records in smaller chunks rather than all at
once. Limit each batch to 1000 rows, or whatever size you like, so that you
can control the T-LOG file size.
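A minimal sketch of that idea for deletes (dbo.BigTable and the date filter
are hypothetical; adjust to your own schema), using the same SET ROWCOUNT
technique as the earlier reply:
SET ROWCOUNT 1000 -- cap every following statement at 1000 rows
WHILE 1 = 1
BEGIN
    DELETE FROM dbo.BigTable WHERE SomeDate < '20050101'
    IF @@ROWCOUNT = 0 BREAK -- no matching rows left
END
SET ROWCOUNT 0 -- remove the cap again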
Refer to the following thread:
http://www.sql-server-performance.c...?TOPIC_ID=10706
;-)
Regards