Subject: Large transaction log causing write performance hit

Hi,
I would really appreciate any help on this. Can a very
large transaction log have a negative impact on write
performance? We were having a problem with VERY slow
writes (30 seconds each) to a database, while reads were
very fast (0 ms). The app runs on a Windows 2000/SQL
2000 active/passive cluster. Each server has 2 GB of RAM,
and disk space is absolutely not an issue. The only other
application running on the server is McAfee (with scanning
disabled for the data/log directories).
We noticed that the log file for the affected database was
EXTREMELY large (3 GB), even though there is plenty of free
disk space for it. Not a whole lot of updates occur in
this database. The database was in full recovery mode, and
we usually rely on nightly backups because the
updates/inserts/deletes are so sparse.
After encountering this problem, I failed over the node
(restarting the SQL services), and writes became fast again,
but only temporarily (it lasted a couple of hours), which
led me to believe it could be a memory issue.
Anyway, after that, I made the following changes:
1. Backed up the database, then the log, shrank the log
(now 0.99 MB), and temporarily set the recovery model to
simple.
2. Reduced the max SQL Server memory setting to 1 GB.
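In case it helps, the rough T-SQL equivalent of those steps
(with MyDB, MyDB_log, and the paths standing in for our real
names) was:

BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB_full.bak'
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn'
DBCC SHRINKFILE (MyDB_log, 1)           -- shrink the log file to about 1 MB
ALTER DATABASE MyDB SET RECOVERY SIMPLE

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory', 1024   -- value is in MB (1 GB)
RECONFIGURE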
Writes are again very fast, and I have seen no problems for
a day and a half under a normal load. It would really help
to know whether the large transaction log could actually
have been the problem, or whether the memory setting change
did the trick, and why. Then we can rest assured that the
issue will not crop up again later!
Thanks,
Chris

The log could be a problem. Obviously all the uncommitted transactions are
sitting in there, and writing in sequence could be tying things up.
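To see whether that is what's going on, you can check log usage and open
transactions directly; a quick sketch (MyDB is a placeholder for your
database name):

DBCC SQLPERF (LOGSPACE)   -- log file size and percent in use, per database
DBCC OPENTRAN ('MyDB')    -- oldest active transaction blocking log truncation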
Thinking along those lines, you should settle on a good recovery model, or
consider moving the transaction log to a separate disk with nothing else on
it. I wish I could cite a better source than Transcender study questions,
but the recommendation I've seen is to put the transaction log on a mirrored
set; you can find discussions of this with some Google searches.
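If you do move the log, the usual approach on SQL 2000 for a user database
is detach/attach; a rough sketch, assuming a short outage is acceptable and
L: is the new dedicated (ideally mirrored) drive:

EXEC sp_detach_db 'MyDB'
-- move MyDB_log.ldf from the old drive to L:\Logs\, then reattach:
EXEC sp_attach_db 'MyDB',
    'D:\Data\MyDB.mdf',
    'L:\Logs\MyDB_log.ldf'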
A good recovery model depends on whether you need up-to-the-minute restores
or can stand to lose everything since the last full/differential backup.
Frequent transaction log backups or truncations will keep the log at a
manageable size.
If you need up-to-the-minute restores, consider setting your database to the
full recovery model and scheduling frequent transaction log backups to disk.
You can then restore the additional logs on top of the last full database
restore.
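Something like this, where MyDB and the paths are placeholders and the log
backup would be a scheduled job (say, every 15 minutes):

ALTER DATABASE MyDB SET RECOVERY FULL

BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn'   -- run on a schedule

-- to recover: restore the last full backup first, leaving the database
-- ready for more logs, then apply each log backup in sequence
RESTORE DATABASE MyDB FROM DISK = 'D:\Backups\MyDB_full.bak' WITH NORECOVERY
RESTORE LOG MyDB FROM DISK = 'D:\Backups\MyDB_log.trn' WITH RECOVERY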
If you can stand to lose everything since the last full backup, then
consider the simple recovery model (select into/bulkcopy true, trunc. log on
chkpt. true), which automatically empties the transaction log at every
checkpoint. Then make sure your full and differential backups happen at
appropriate intervals. Good luck.
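In SQL 2000 syntax that is a one-liner, with the sp_dboption calls being the
older SQL 7-style equivalent (MyDB is again a placeholder):

ALTER DATABASE MyDB SET RECOVERY SIMPLE
-- or, the SQL 7-era way via database options:
EXEC sp_dboption 'MyDB', 'trunc. log on chkpt.', 'true'
EXEC sp_dboption 'MyDB', 'select into/bulkcopy', 'true'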
--
*******************************************************************
Andy S.
MCSE NT/2000, MCDBA SQL 7/2000
andymcdba1@NOMORESPAM.yahoo.com
Please remove NOMORESPAM before replying.
Always keep your antivirus and Microsoft software
up to date with the latest definitions and product updates.
Be suspicious of every email attachment, I will never send
or post anything other than the text of a http:// link nor
post the link directly to a file for downloading.
This posting is provided "as is" with no warranties
and confers no rights.
*******************************************************************
"Christian Bouche" <cbouche@.gohealthcast.com> wrote in message
news:2a1801c3fc91$500f3fe0$a101280a@.phx.gbl...
> Hi,
> I really would appreciate any help on this. Can a very
> large transaction log have a negative impact on write
> performance' We were having a problem with VERY slow
> writes (30 seconds each) to a database. Reads were very
> fast (0 ms). The app is running on a Windows 2000/SQL
> 2000 Active/Passive cluster. Each server has 2 G of ram
> and disk space is absolutely not an issue. The only other
> applications that run on the server are Mcaffee (but this
> is disabled from the data/log directories).
> We noticed that the log file for the affected database was
> EXTREMELY large (3G). TONS of leftover disk space for the
> log. There's not a whole lot of updates that occur in
> this database. The database was in Full recovery mode.
> We usually rely on nightly backups due to the sparse
> update/inserts/deletes.
> After encountering this problem, I failed over the node
> (restarted the SQL services, and writes became fast again,
> but this was only temporary (lasted a couple hours), which
> led me to believe that it could be a memory issue.
> Anyway, after that, I made the following changes: 1.
> Backed up the database, then the log, shrank the log
> (now .99MB) and temporarily put set the recovery mode to
> simple. 2. Reduced the max SQL server memory setting to 1
> G.
> Writes are again very fast. I have seen no problems for a
> day and a half under a normal load. It would really help
> to know if the large tlog could have actually been the
> problem? Or the if the memory settings may have done
> the trick and why? Then we can rest assured that the
> issue will not crop
> up again later!
> Thanks,
> Chris
Thank you so much for your help.
Chris