I have a table that's about 3 GB; using this table and a few others, I'm building another table. The problem is that while the new table is being created, my transaction log inflates so much that I'm running out of disk space. What can I do to prevent this, or to keep the transaction log size under control?|||You need to either try SELECT * INTO new_table FROM old_table, or BCP out from the old table and BULK INSERT/BCP into the new one. When doing BCP IN/BULK INSERT, make sure to specify a batch size, something like 10,000. Also use WITH (TABLOCK) while doing the BULK INSERT.|||Hi there, one can also choose the bulk-logged recovery model for the database; BCPs and SELECT INTOs are minimally logged. Your log file would not grow rapidly in this case.|||Would this work? Can the FROM be a view?
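A sketch of the minimally logged approaches suggested in the replies above. The database name MyDb and the table names dbo.old_table / dbo.new_table are placeholders, and the data file path assumes an export was already done with bcp:

```sql
-- Switch to the bulk-logged recovery model so that SELECT INTO,
-- BCP, and BULK INSERT are minimally logged. Take a log backup
-- before and after to keep the log backup chain intact.
ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;

-- Option 1: SELECT INTO creates and fills the new table in one
-- minimally logged operation.
SELECT *
INTO dbo.new_table
FROM dbo.old_table;

-- Option 2: load a previously exported data file in batches.
-- BATCHSIZE commits every 10,000 rows, so the log space can be
-- reused between batches; TABLOCK enables bulk-load optimizations.
BULK INSERT dbo.new_table
FROM 'C:\export\old_table.dat'
WITH (BATCHSIZE = 10000, TABLOCK);

-- Switch back to full recovery once the load is done.
ALTER DATABASE MyDb SET RECOVERY FULL;
```

Note that while in bulk-logged recovery you lose point-in-time restore for log backups that contain minimally logged operations, so bracket the load with log backups.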
Truncate table LATEST_VERSION_SERVICES
BULK INSERT LOG.[LATEST_VERSION_SERVICES]
FROM v_tbl_latest_version_services|||No; BULK INSERT reads from a data file, not a view. But you can BCP OUT from a view and then BULK INSERT FROM 'the_file_that_you_got_from_BCP_OUT_step'.
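The BCP-out-then-BULK-INSERT workaround described in that reply might look like this; the server name MYSERVER and the export path are placeholders, and the view/table names are taken from the question:

```sql
-- Step 1 (from a command prompt, not T-SQL): export the view's
-- rows to a native-format data file. -n = native format,
-- -T = trusted (Windows) connection.
--
--   bcp LOG.dbo.v_tbl_latest_version_services out C:\export\latest_services.dat -n -S MYSERVER -T

-- Step 2 (in SQL Server): empty the target and load the file
-- in batches so the log stays small.
TRUNCATE TABLE LOG.[LATEST_VERSION_SERVICES];

BULK INSERT LOG.[LATEST_VERSION_SERVICES]
FROM 'C:\export\latest_services.dat'
WITH (DATAFILETYPE = 'native', BATCHSIZE = 10000, TABLOCK);
```

DATAFILETYPE = 'native' must match the -n format used by bcp; if you export with -c (character format) instead, use DATAFILETYPE = 'char'.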