Monday, February 20, 2012

Large insert, how to switch off transactions?

Hello, I have a database table with 8 million rows on my development PC and
about 8 GB of free disk space.
There are 3000 duplicate records within the table.
I ran a 'distinct' query and put the results into a temp table, and I am pulling
them back in again, but after 2 hours I ran out of transaction log space. I
still have my records in the #p temp table but can't do an sp_rename on it
(probably because it is temporary).
Is there any way to run an insert like this without writing out a transaction log?
Thanks, Kevin Munro.
My sql is :
select distinct propertyid,pointid,containerid,value
into #p from propertieslink
truncate table propertieslink
insert propertieslink (propertyid,pointid,containerid,value) select
propertyid,pointid,containerid,value from #p|||Do not do it using SELECT ... INTO ..., it consumes a lot of resources. Try:
select propertyid, pointid, containerid, value
into #p from propertieslink where 0 = 1
insert into #p
select distinct propertyid, pointid, containerid, value
from propertieslink
truncate table propertieslink
insert propertieslink (propertyid,pointid,containerid,value) select
propertyid,pointid,containerid,value from #p
go
-- or
select propertyid,pointid,containerid,value
into #p from propertieslink where 0 = 1
insert into #p
select propertyid, pointid, containerid, value
from propertieslink
group by
propertyid, pointid, containerid, value
having
count(*) > 1
delete
a
from
propertieslink as a
inner join
#p as b
on a.propertyid = b.propertyid
and a.pointid = b.pointid
and a.containerid = b.containerid
and a.value = b.value
insert propertieslink (propertyid,pointid,containerid,value) select
propertyid,pointid,containerid,value from #p
go
AMB
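As an aside on the original question: SQL Server cannot switch logging off entirely, but under the SIMPLE or BULK_LOGGED recovery model a SELECT ... INTO a permanent table is minimally logged, and a permanent table can then be renamed with sp_rename. A sketch under those assumptions (the database name mydb is illustrative, not from the thread):

```sql
-- Minimally logged dedup: build a clean permanent copy, then swap it in.
-- BULK_LOGGED keeps the log small for the SELECT ... INTO step.
alter database mydb set recovery bulk_logged
go
select distinct propertyid, pointid, containerid, value
into propertieslink_clean
from propertieslink
go
-- Swap the tables; sp_rename works here because the copy is permanent.
drop table propertieslink
exec sp_rename 'propertieslink_clean', 'propertieslink'
go
-- Restore the original recovery model afterwards.
alter database mydb set recovery full
```

Note that any indexes, constraints, and permissions on the original table would need to be recreated on the renamed copy.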
"Kevin Munro" wrote:

> Hello, I've a database with 8 million rows on my development pc and I've got
> about 8 GB free disk space.
> I have 3000 duplicate records within the table.
> I ran a 'distinct' query and put them into a temp table and I am pulling
> them back in again but after 2 hours I have ran out of transaction space. I
> still have my records in the p# temp table but can't do an sp_rename on it
> (probably because it is temporary).
> Any way to run an insert like this but not write a transaction log out?
> Thanks, Kevin Munro.
> My sql is :
> select distinct propertyid,pointid,containerid,value
> into #p from propertieslink
> truncate table propertieslink
> insert propertieslink (propertyid,pointid,containerid,value) select
> propertyid,pointid,containerid,value from #p
>
>|||If you are just trying to get rid of the 3000 duplicates, there are easier
ways to do it.
A clever query with HAVING COUNT(*) would do the trick.
Regards
Mike
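For reference, on SQL Server 2005 or later the same idea can be done in place with ROW_NUMBER(), removing the extra copies directly without truncating or reloading the table. A sketch using the thread's table and column names:

```sql
-- Number the rows within each duplicate group, then delete every
-- copy after the first. Deleting through the CTE affects the
-- underlying propertieslink rows.
with numbered as (
    select row_number() over (
        partition by propertyid, pointid, containerid, value
        order by propertyid) as rn
    from propertieslink
)
delete from numbered where rn > 1
```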
"Alejandro Mesa" wrote:
> Do not do it using SELECT ... INTO ..., it consumes a lot of resources. Try:
> select propertyid, pointid, containerid, value
> into #p from propertieslink where 0 = 1
> insert into #p
> select distinct propertyid, pointid, containerid, value
> from propertieslink
> truncate table propertieslink
> insert propertieslink (propertyid,pointid,containerid,value) select
> propertyid,pointid,containerid,value from #p
> go
> -- or
> select propertyid,pointid,containerid,value
> into #p from propertieslink where 0 = 1
> insert into #p
> select propertyid, pointid, containerid, value
> from propertieslink
> group by
> propertyid, pointid, containerid, value
> having
> count(*) > 1
> delete
> a
> from
> propertieslink as a
> inner join
> #p as b
> on a.propertyid = b.propertyid
> and a.pointid = b.pointid
> and a.containerid = b.containerid
> and a.value = b.value
> insert propertieslink (propertyid,pointid,containerid,value) select
> propertyid,pointid,containerid,value from #p
> go
> AMB
>
> "Kevin Munro" wrote:
>|||It was included as a possible solution (second one). Your comment is welcome
anyway.
Thanks,
Alejandro Mesa
"Mike Epprecht (SQL MVP)" wrote:
> If you are just trying to get rid of the 3000 duplicates, there are easier
> ways to do it.
> A clever query with HAVING COUNT(*) would do the trick.
> Regards
> Mike
> "Alejandro Mesa" wrote:
|||Thanks for these suggestions. The approach I thought of would have been OK on a
smaller table, and I'll use the HAVING COUNT query.
I got around it by dropping the indexes, after which it took only 5 minutes. I am
now reapplying the indexes one at a time, which somewhat transactionalises the
work, I think.
Kevin.
"Alejandro Mesa" <AlejandroMesa@.discussions.microsoft.com> wrote in message
news:590BB8B7-640D-4530-948A-D868A5B42477@.microsoft.com...
> It was included as a possible solution (second one). Your comment is
> welcome
> anyway.
> Thanks,
> Alejandro Mesa
>
> "Mike Epprecht (SQL MVP)" wrote:
>
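Kevin's workaround is a standard one for large loads: dropping nonclustered indexes first means each inserted row is not individually logged against every index, and rebuilding afterwards is a single faster operation. A minimal sketch (the index name and key columns are assumed for illustration, not from the thread):

```sql
-- Drop nonclustered indexes before the bulk insert...
drop index propertieslink.ix_propertieslink_point
go
-- ... perform the large insert here ...
go
-- ... then rebuild each index afterwards, one at a time.
create index ix_propertieslink_point
    on propertieslink (pointid, propertyid)
```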
