Friday, March 30, 2012
launch an app from a trigger?
Is there a way to launch an application (c:\hello.exe)
from a trigger when a new record is inserted into the
table or
a record is updated in the table?
If so, what is the correct syntax ?
THANKS!|||Understand that launching an application that requires input from a user can be extremely risky ... This is because the app is invoked in the same process as SQL Server, and the credentials for the user to interact with the desktop may not be there ... Hence such an operation can stop the SQL Server service ... Hence avoid such operations ...
If, on the contrary, you would like to send a popup using the network messenger service, then use the xp_cmdshell command ... But it is not recommended in production code ...
-- HTH,
Vinod Kumar
MCSE, DBA, MCAD
http://www.extremeexperts.com/
"amy" <achen@.comverge.com> wrote in message news:0d7f01c3576d$ed56db10$a401280a@.phx.gbl...
> Is there a way to launch an application(c:\hello.exe)
> from a trigger when a new record is inserted into the
> table or
> a record is updated in the table?
> If so, what is the correct syntax ?
> THANKS!|||Amy,
You can use the xp_cmdshell extended procedure to start an application from
the command prompt. For the exact syntax, please refer to Books Online.
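For illustration only (the table name is hypothetical, and as the rest of this thread points out, shelling out from a trigger is risky), such a trigger would look roughly like this:

```sql
-- Sketch only: this ties the insert/update transaction to an external process
CREATE TRIGGER trg_mytable_launch
ON mytable                -- hypothetical table
FOR INSERT, UPDATE
AS
    EXEC master..xp_cmdshell 'c:\hello.exe', no_output
GO
```

The no_output option suppresses the command's result rows, but it does not stop xp_cmdshell from blocking until the launched process exits.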
--
Dejan Sarka, SQL Server MVP
FAQ from Neil & others at: http://www.sqlserverfaq.com
Please reply only to the newsgroups.
PASS - the definitive, global community
for SQL Server professionals - http://www.sqlpass.org
"amy" <achen@.comverge.com> wrote in message
news:0d7f01c3576d$ed56db10$a401280a@.phx.gbl...
> Is there a way to launch an application(c:\hello.exe)
> from a trigger when a new record is inserted into the
> table or
> a record is updated in the table?
> If so, what is the correct syntax ?
> THANKS!|||I would investigate having the .exe be part of a "job"
under DTS Local Packages and having the trigger start
that job - not really sure if this is possible. If it
did work, it would get around the issues raised here
about having the .exe hang things up, etc.
>--Original Message--
>Amy,
>What are you trying to do? Of course you can launch an
>application by using
>master..xp_cmdshell, as in
>exec master..xp_cmdshell "dtsrun /?"
>But for every insert and update? Well, no, I wouldn't do
>that. Also, if this
>hello.exe has any GUI, as in
>exec master..xp_cmdshell "notepad",
>master..xp_cmdshell would hang, leading to timeout
>problems.
>--
>Dinesh.
>SQL Server FAQ at
>http://www.tkdinesh.com
>"amy" <achen@.comverge.com> wrote in message
>news:0d7f01c3576d$ed56db10$a401280a@.phx.gbl...
>> Is there a way to launch an application(c:\hello.exe)
>> from a trigger when a new record is inserted into the
>> table or
>> a record is updated in the table?
>> If so, what is the correct syntax ?
>> THANKS!
>
>.
>|||Steve,
Since you replied to my post...
It may work, or maybe we can find a workaround, but I still wouldn't do it. In a
heavy OLTP system, triggers are always an overhead for DML, and a trigger
which calls an exe, GUI or no GUI, would be a recipe for disaster.
--
Dinesh.
SQL Server FAQ at
http://www.tkdinesh.com
"Steve Z" <szlamany@.antarescomputing.com> wrote in message
news:0c5801c35772$9a1f0bc0$a601280a@.phx.gbl...
> I would investigate having the .exe be part of a "job"
> under DTS Local Packages and having the trigger start
> that job - not really sure if this is possible. If it
> did work, it would get around the issues raised here
> about having the .exe hang things up, etc.
|||I agree that other paths might serve the same purpose,
but I'm not sure what the need really is. It would make
more sense to me to have a "scheduled" job run regularly
(every x minutes) and sweep up handling all the "rows"
inserted since the last sweep. Or code the .exe in C and
make it an extended stored procedure - run right from
inside the trigger. Or simply code the need in T-SQL and
get rid of the .exe requirement...
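A minimal sketch of that sweep pattern (the control table and the column and table names here are invented for illustration):

```sql
-- Hypothetical high-water-mark sweep, run by a scheduled job every x minutes
DECLARE @last datetime, @now datetime
SELECT @now = GETDATE()
SELECT @last = last_swept FROM sweep_control       -- single-row control table

SELECT order_id, cust_no, sales, qty               -- process rows added since last sweep
FROM sales
WHERE inserted_at > @last AND inserted_at <= @now

UPDATE sweep_control SET last_swept = @now         -- advance the high-water mark
```

Capturing @now up front keeps rows that arrive mid-sweep from being skipped on the next pass.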
>--Original Message--
>Steve,
>Since you replied to my post...
>It may work or may be we can find a workaround but I
still wont do that.In a
>heavy OLTP system, triggers are always a overhead for
the DML and a trigger
>which calls a exe, gui or no gui , would be a recipe for
disaster.
>--
>Dinesh.
>SQL Server FAQ at
>http://www.tkdinesh.com
>"Steve Z" <szlamany@.antarescomputing.com> wrote in
message
>news:0c5801c35772$9a1f0bc0$a601280a@.phx.gbl...
>> I would investigate having the .exe be part of a "job"
>> under DTS Local Packages and having the trigger start
>> that job - not really sure if this is possible. If it
>> did work, it would get around the issues raised here
>> about having the .exe hang things up, etc.
>
>.
>|||Steve,
Well, if you are asking which of the requirements to take, my suggestion would be to use the T-SQL method and a jobs framework ... simply because you stay closer to the data, and getting out of the SQL Server process to get the job done is not advisable ...
Now a trigger can prove costlier, as it will fire with every insert operation ... but it would keep the data more live ...
-- HTH,
Vinod Kumar
MCSE, DBA, MCAD
http://www.extremeexperts.com/
"Steve Z" <szlamany@.antarescomputing.com> wrote in message news:0eb201c35779$6045c3b0$a401280a@.phx.gbl...
> I agree that other paths might serve the same purpose,
> but I'm not sure what the need really is. It would make
> more sense to me to have a "scheduled" job run regularly
> (every x minutes) and sweep up handling all the "rows"
> inserted since the last sweep. Or code the .exe in C and
> make it an extended stored procedure - run right from
> inside the trigger. Or simply code the need in T-Sql and
> get rid of the .exe requirement...
Wednesday, March 28, 2012
Latest updated date of a Stored Procedure
Tx
Gk|||http://www.dbforums.com/archives/t317417.html
Monday, March 26, 2012
LastProcessed property on AMO processable objects
The LastProcessed property is updated at the end of a successful process.
During the processing the LastProcessed date will reflect the end of the previous processing (as will the data). When the processing completes successfully the transaction is committed, the new data is made available for querying and the LastProcessed property is updated.
|||Thanks. That's what I figured, I just wanted to make sure.
Friday, March 23, 2012
last updated date on stored procs
I've change quite a few stored procs and wondered if there's any way of
getting a last updated date. I only want to upload the changed procs to the
live server and don't want to transfer all of them.
Cheers
James|||There is no easy way to do this because there is no 'last changed date' stored
with the sp... There is a schema_ver field which changes when the sp is
altered... You would have to keep a copy yourself in a table and compare it
to the field in sysobjects... You could also add local variables to each SP
with a last-changed date that must be incremented by the changer... but that
is all a pain in the butt... There are programs (like Red Gate) which can
compare differences and let you know... But the bottom line is that you get
no help from SQL...
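A rough sketch of the keep-a-copy-and-compare idea (the snapshot table name is invented): capture schema_ver once, then list the procedures whose version has changed since:

```sql
-- One-time snapshot into a hypothetical table sp_versions
SELECT name, schema_ver INTO sp_versions
FROM sysobjects WHERE type = 'P'

-- Later: procedures altered since the snapshot was taken
SELECT o.name
FROM sysobjects o
JOIN sp_versions v ON v.name = o.name
WHERE o.type = 'P' AND o.schema_ver <> v.schema_ver
```

Refresh the snapshot after each deployment so the comparison always runs against the last released state.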
"James Brett" <james.brett@.unified.co.uk> wrote in message
news:%233A4OlYiDHA.2748@.TK2MSFTNGP11.phx.gbl...
> Hi
> I've change quite a few stored procs and wondered if there's any way of
> getting a last updated date. I only want to upload the changed procs to
the
> live server and don't want to transfer all of them.
> Cheers
> James
>|||Not if you are altering procedures. Only the creation date
is available. SQL Server doesn't track the modification
dates. You could do something like use a third party tool to
compare the differences between development and production
to find the stored procedures that are different between the
two environments. But you can easily have modified objects
in dev that really aren't intended to be moved to the live
server so you have to be careful using this approach.
Red Gate has tools to compare databases:
http://www.red-gate.com/
-Sue
On Fri, 3 Oct 2003 09:41:20 +0100, "James Brett"
<james.brett@.unified.co.uk> wrote:
>Hi
>I've change quite a few stored procs and wondered if there's any way of
>getting a last updated date. I only want to upload the changed procs to the
>live server and don't want to transfer all of them.
>Cheers
>James
>|||Hey Wayne,
FYI...Didn't mean to step on your response or anything. It
will be nice when the servers are back in shape and we can
see other replies posted with less latency before posting
our own.
The joys of Swen...
-Sue
On Fri, 3 Oct 2003 08:18:21 -0400, "Wayne Snyder"
<wsnyder@.computeredservices.com> wrote:
>There is no easy way to do this because there no 'last changed date' stored
>with the sp... There is a schema_ver field which changes when the sp is
>altered... You would have to keep a copy yourself in a table and compare it
>to the field in sysobjects...You could also add local variables to each SP
>with a last changed date that must be incremented by the changer...but that
>is all a pain in the butt... There are programs like(Red Gate ) which can
>compare differences and let you know... But the bottom line is that you get
>no help from SQL...
>"James Brett" <james.brett@.unified.co.uk> wrote in message
>news:%233A4OlYiDHA.2748@.TK2MSFTNGP11.phx.gbl...
>> Hi
>> I've change quite a few stored procs and wondered if there's any way of
>> getting a last updated date. I only want to upload the changed procs to
>the
>> live server and don't want to transfer all of them.
>> Cheers
>> James
>>
>
last updated date
I'm trying to retrieve data of previous day from my 'sales' table. Sales
table has:
order#, order date, cust#, sales, qty
I have a metadates table: one colummn
date
my script is
select sum(sales), sum(qty) from sales where order_date=(select top 1 date
from metadates order by date desc)
But I get a timeout every time.
Also, considering the Christmas holiday: when I come back on Dec 28, the job,
which runs every day, should pick up the sales of Dec 23rd instead of Dec 27.
Please help,
Thanks so much,
Sarah|||Perhaps this will run faster:
declare @prev_workday as datetime
select @prev_workday = (select top 1 date from metadates order by date desc)
select sum(sales), sum(qty) from sales where order_date = @prev_workday
So, if Dec 24 - 27 are not business days, then do not include them in the
metadates table.
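Separately, the timeout is more likely caused by a table scan on the sales table than by the subquery itself; if order_date isn't already indexed, an index along these lines (the name is invented) should help:

```sql
-- Lets the order_date = @prev_workday predicate use a seek instead of a scan
CREATE INDEX IX_sales_order_date ON sales (order_date)
```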
"SG" <sguo@.coopervision.ca> wrote in message
news:OcxHvbCAGHA.272@.TK2MSFTNGP09.phx.gbl...
> Hi,
> I'm trying to retrieve data of previous day from my 'sales' table. Sales
> table has:
> order#, order date, cust#, sales, qty
> I have a metadates table: one colummn
> date
> my script is
> select sum(sales), sum(qty) from sales where order_date=(select top 1 date
> from metadates order by date desc)
> But I got time out all the time.
> I'm considering the Christmas holiday, when I come back on Dec 28, the job
> runs every day, should pick up the sales on Dec 23rd instead of Dec 27.
> Please help,
> Thanks so much,
> Sarah
>|||"SG" <sguo@.coopervision.ca> wrote in message
news:OcxHvbCAGHA.272@.TK2MSFTNGP09.phx.gbl...
> Hi,
> I'm trying to retrieve data of previous day from my 'sales' table. Sales
> table has:
> order#, order date, cust#, sales, qty
> I have a metadates table: one colummn
> date
> my script is
> select sum(sales), sum(qty) from sales where order_date=(select top 1 date
> from metadates order by date desc)
> But I got time out all the time.
> I'm considering the Christmas holiday, when I come back on Dec 28, the job
> runs every day, should pick up the sales on Dec 23rd instead of Dec 27.
> Please help,
> Thanks so much,
> Sarah
This may give you a better idea of what the date table should look like.
http://www.aspfaq.com/show.asp?id=2519
last update of database
DB - SQL2000 or SQL7
OS - W2K or NT
Is there a sp or any way to check when a database is last accessed by or
updated by user ?
TIA|||No, SQL Server doesn't track this information, unless you want to use a
third party tool to review the transaction logs. You can put code into your
own procedures, or triggers on tables if you for some reason allow direct
access to tables, to track whatever components of the transaction you like.
--
Aaron Bertrand
SQL Server MVP
http://www.aspfaq.com/
"pohkeat" <pohkeat@.hotmail.com> wrote in message
news:e7DI$JeNEHA.556@.tk2msftngp13.phx.gbl...
> DB - SQL2000 or SQL7
> OS - W2K or NT
>
> Is there a sp or any way to check when a database is last accessed by or
> updated by user ?
> TIA
>
>
Last Update date/time
Does SQL Server 2000 store somewhere the date/time that a table was last
modified? How about when the data in the table was last updated?
How can I view this information if so?
Thanks in advance,
Mike|||> Does SQL Server 2000 store somewhere the date/time that a table was last
> modified? How about when the data in the table was last updated?
No and no.
Your options:
- auditing software, e.g. Entegra from www.lumigent.com
- setting up your own triggers
- profiler
- reading the log after the fact
--
This is my signature. It is a general reminder.
Please post DDL, sample data and desired results.
See http://www.aspfaq.com/5006 for info.|||You could use a TIMESTAMP attribute, but this has been turned into a table
modification version number instead of the old DATE/TIME value once
supplied. It can tell you whether a row has been modified before or after other
rows, though.
Sincerely,
Anthony Thomas
"Aaron [SQL Server MVP]" <ten.xoc@.dnartreb.noraa> wrote in message
news:eWRWXgtNFHA.3716@.TK2MSFTNGP14.phx.gbl...
> Does SQL Server 2000 store somewhere the date/time that a table was last
> modified? How about when the data in the table was last updated?
No and no.
Your options:
- auditing software, e.g. Entegra from www.lumigent.com
- setting up your own triggers
- profiler
- reading the log after the fact
--
This is my signature. It is a general reminder.
Please post DDL, sample data and desired results.
See http://www.aspfaq.com/5006 for info.
Last time table was updated?
Hi Mitch,
Without there being a timestamp or datetime column on the actual table, there's no way for you to dynamically determine the date/time a row was inserted.
You can, however query the system catalog to determine the date the table was created.
As you mention that the tables themselves are created by an application, you'd obviously be hesitant to make any changes to the existing schema; you could, however, create generic triggers on each of the tables in question that would in turn insert a flag into a control table to indicate the current "active" table:
CREATE TRIGGER tr_TableHere_ins
ON TableSourceA
FOR INSERT
AS
INSERT INTO ControlTable (InsertDate, TableSource)
VALUES(GetDate(), 'TableSourceA' )
Or Similar...
Cheers,
Rob
Last time a table was updated - SQL 2005
How can one determine the last time when records have been inserted,
updated or deleted for a table?
Thanks.|||Hi
Tony has posted this sometime ago
In SQL Server 2005 you can use the sys.dm_db_index_usage_stats data
management view, look at the column last_user_update, you can also see when
the table was last accessed (last_user_seek and last_user_scan - do a MAX on
them).
select *
from sys.dm_db_index_usage_stats
where database_id = db_id( 'readpasttest' )
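Building on that query, a per-table "last written" time can be taken with the MAX the reply mentions (bearing in mind these counters reset when the service restarts):

```sql
-- Last modification time per table in the current database (SQL 2005),
-- valid only since the last service restart
SELECT object_name(object_id) AS table_name,
       MAX(last_user_update) AS last_update
FROM sys.dm_db_index_usage_stats
WHERE database_id = db_id()
GROUP BY object_id
```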
<Wing9897@.hotmail.com> wrote in message
news:1190639143.805215.112750@.d55g2000hsg.googlegroups.com...
> How can one determine the last time when records have been inserted,
> updated or deleted for a table?
> Thanks.
>|||Wing,
The sys.dm_db_index_usage_stats is very helpful in an ongoing operational
period, e.g. answering questions such as whether the table had been updated
in the last hour.
However, if you have tables that are only updated rarely such as every few
weeks or month, this is probably not the answer for you. These counters are
initialized to empty whenever SQL Server service is restarted. Also, a
database detach or Auto-close removes the rows for that database.
RLF
<Wing9897@.hotmail.com> wrote in message
news:1190639143.805215.112750@.d55g2000hsg.googlegroups.com...
> How can one determine the last time when records have been inserted,
> updated or deleted for a table?
> Thanks.
>
Wednesday, March 21, 2012
Last synchronisation date for a (merge)-publication
Is there any table where I can find the last updated (last synced) date
and time for a subscription?
I am trying to warn a user if his/her subscription did not synchronise
since an xxx number of days but I can't locate the date/time for the
last successfull synchronisation.
Thx,
Ferry
you can query the history tables in the distribution database.
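As a rough starting point (the exact table and column names vary by version, so verify them against your distribution database before relying on this):

```sql
-- Sketch: last merge session end time per agent, queried from the
-- distribution database (names approximate; check your version)
SELECT a.name, MAX(s.end_time) AS last_sync
FROM distribution.dbo.MSmerge_agents a
JOIN distribution.dbo.MSmerge_sessions s ON s.agent_id = a.id
GROUP BY a.name
```

Comparing last_sync against GETDATE() minus your threshold would then flag the stale subscriptions.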
Hilary Cotter
Director of Text Mining and Database Strategy
RelevantNOISE.Com - Dedicated to mining blogs for business intelligence.
This posting is my own and doesn't necessarily represent RelevantNoise's
positions, strategies or opinions.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
Looking for a FAQ on Indexing Services/SQL FTS
http://www.indexserverfaq.com
"Ferry" <ferryswi A@.T ferryswi D.O.T tmfweb D.O.T nl> wrote in message
news:mn.a56e7d63bfca7a9f.23024@.TferryswiD.O.TtmfwebD.O.Tnl...
> Hi!
> Is there any table where I can find the last updated (last synced) date
> and time for a subscription?
> I am trying to warn a user if his/her subscription did not synchronise
> since an xxx number of days but I can't locate the date/time for the last
> successfull synchronisation.
> Thx,
> Ferry
>
Monday, March 12, 2012
Last Date Used
I am trying to generate a report listing data base and tables within the
data base when last accessed, updated or modified. Does anyone know of
a tool that performs this function?
Ernie
A Plan without Action is a DayDream
Action without a Plan is a Nightmare
*** Sent via Developersdex http://www.codecomments.com ***
Hi,
SQL Server will not store this information. To get it you need to
enable Profiler, store the output into a table, and then do the
analysis manually to produce the report.
Thanks
Hari
SQL Server MVP
"Ernie" <nospan@.devdex.com> wrote in message
news:OUTJ%23INUFHA.584@.TK2MSFTNGP15.phx.gbl...
> I am trying to generate a report listing data base and tables within the
> data base when last accessed, updated or modified. Does anyone know of
> a tool that performs this function?
> Ernie
> A Plan without Action is a DayDream
> Action without a Plan is a Nightmare
> *** Sent via Developersdex http://www.codecomments.com ***
Last Date Stored Proc Updated??
I see the Created date on the list of stored procs, but really want a
Date Last Updated. After changing code for 3 hours, I tend to forget
which procs I've worked on, and which need to be move to production.
any simple way to keep track of the last procs played with?
thanks in advance...
john@.ViridianTech.com|||Do you mean you've used ALTER PROC and you want to see when it was
ALTERed? If so, then this isn't currently possible in MSSQL (although
it is in SQL 2005).
In any case, you should hopefully be using some sort of source control
system to store your object creation scripts, and a deployment process
which would help to track your changes. You might also want to consider
a database comparison tool, which can quickly show you the differences
between databases.
Simon|||Actually, we're not using any source control on the stored procs, like
we do on the project source code. How would you do that? Is there
something in MSSQL for that? The .NET IDE for VB makes it easy to
integrate with SourceSafe. What do you use for stored procs, views and
table creation scripts? Your advice would be much appreciated.
john|||<John@.ViridianTech.com> wrote in message
news:1120484863.856415.206040@.o13g2000cwo.googlegroups.com...
> Actually, we're not using any source control on the stored procs, like
> we do on the project source code. How would you do that? is there
> something in MSSQL for that? the .NWT IDE for VB make it easy to
> integrate with Source Safe, what do you use for stored procs, views and
> table creation scripts? You advice would be much appreciated.
> john
Personally, I simply check code out of VSS and work with it in Query
Analyzer - there's no source control integration in the MSSQL tools
themselves. I believe Visual Studio has some sort of support for SQL code
and VSS, although I don't use VS often myself, so I may be wrong about that.
Even using just QA and VSS, a few scripts can make things easier - the
Customize menu in QA allows you to pass a few useful parameters to batch
files or other programs, so it's not too difficult to script checking in and
out of VSS (SQL 2005 has source control integration in the Management
Studio).
Erland has an interesting toolset for working with SQL source code and VSS,
written in Perl, which might be worth looking at if you want to develop your
own solution or just need some ideas about managing and deploying SQL code:
http://www.abaris.se/abaperls/index.html
Simon|||(John@.ViridianTech.com) writes:
> Actually, we're not using any source control on the stored procs, like
> we do on the project source code. How would you do that?
You just do it!
> is there something in MSSQL for that? the .NWT IDE for VB make it easy
> to integrate with Source Safe,
Hrmpf! Nothing in Visual Studio is easy. (I understand less and less of
it for each new version they come out with.) And in our shop, you may
use the SourceSafe integration for the VB code, but if you mess up,
our build people will tell you to stop doing it.
The absolutely best tool to work with SourceSafe is the VSS Explorer.
> what do you use for stored procs, views and
> table creation scripts? You advice would be much appreciated.
Actually, we don't even use QA for editing, but use Textpad instead,
simply because it's a better editor.
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||If you're not going to have access to a source control tool or
methodology any time soon, here's a workaround; but there's nothing
foo-proof about this, so you still have to be careful:
Just drop and create your stored procedures instead of altering them.
This modifies the crdate in sysobjects, which is reflected in
Enterprise Manager.
drop procedure proc_mytest
go
create procedure proc_mytest
as
    -- ... procedure body goes here ...
go
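With the drop-and-create convention in place, the recently recreated procedures can then be listed straight from sysobjects:

```sql
-- Procedures by (re)creation date, newest first
SELECT name, crdate
FROM sysobjects
WHERE type = 'P'
ORDER BY crdate DESC
```

Note that dropping and recreating also drops any permissions granted on the procedure, so those have to be re-granted each time.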
hth,
victor dileo|||vjdileo (vic_technews@.yahoo.com) writes:
> If you're not going to have access to a source control tool or
> methodology any time soon, here's a workaround; but there's nothing
> foo-proof about this, so you still have to be careful:
> Just drop and create your stored procedures instead of altering them.
> This modifies the crdate in sysobjects, which is reflected in
> Enterprise Manager.
And there is actually a way of detecting that a procedure has
been altered: sysobjects.schema_ver is incremented by 16 each
time you alter the procedure.
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Guys, there really is a great way of working with your SQL code in just
the same way as you do for your application code and it's called DB
Ghost (www.dbghost.com). It's the ONLY SQL Server tool on the market
that can build a brand new database from a set of object creation
scripts taking care of all dependencies. This database is then used by
DB Ghost to compare and upgrade any target database.
The upshot of our approach is that you have ALL database objects as
'create' scripts under source control and just modify those. You work
in a familiar manner and the source control system becomes your friend
rather than a necessary evil. Most other approaches to having SQL in
source control involve having to do two things for each update:
1. Update the create script.
2. Write an ALTER script.
DB Ghost does away with the second of these not just for stored
procedures but for EVERY database object.
Honestly, you may think this approach cannot work but it does and our
customers use phrases like 'religious experience' when they talk about
it. You just have to take the time to 'get' it...|||We just do a "generate SQL script" for all procs (a separate file for each)
and add them to a SourceSafe project and use that through Visual
Studio. Pretty easy and works great. If you make the project a database
project in Visual Studio it will let you execute them from there as
well.
Wednesday, March 7, 2012
Large table -- split into smaller ones?
I have to import about 26 million rows, each row about 1Kb in size into
our sql server. This data will be READONLY and might be updated every
quarter.
I am yet to import it and do some performance testing.
what should be the best approach to handling this much data...
should I
* have one large table?
OR
*Horizontally partition data into multiple tables?
thanks for your time
GK|||It's generally best to implement a single table unless you have a
compelling reason to do otherwise. A single large table with
appropriate indexes often performs quite well without the additional
administrative complexity of partitioning.
The main advantages of horizontal partitioning are related to admin
tasks like data loads, bulk deletes and index creates. For example,
you'll need about 30GB of free space to build a clustered index on your
table but only a fraction of that amount if partitioned. On the other
hand, if you can load data sorted in clustered index sequence, you can
load with the index in place and forego the index create entirely.
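The "load sorted with the index in place" idea means pre-sorting the data file on the clustered key and declaring that order to the bulk load; with BULK INSERT the hint looks roughly like this (the file, table and column names are placeholders):

```sql
-- Data file pre-sorted on the clustered key (order_id here), so SQL Server
-- can skip the sort and append pages in index order
BULK INSERT dbo.big_table
FROM 'c:\data\big_table.dat'
WITH (ORDER (order_id ASC), TABLOCK)
```

The equivalent bcp switch is -h "ORDER(order_id ASC)"; if the data is not actually in that order, the hint is ignored and the load pays for a sort anyway.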
--
Hope this helps.
Dan Guzman
SQL Server MVP
--
SQL FAQ links (courtesy Neil Pike):
http://www.ntfaq.com/Articles/Index.cfm?DepartmentID=800
http://www.sqlserverfaq.com
http://www.mssqlserver.com/faq
--
"GK" <ksgkrishremovethis@.yahoo.co.uk> wrote in message
news:edyKf7gqDHA.2496@.TK2MSFTNGP09.phx.gbl...
> Hi All,
> I have to import about 26 million rows, each row about 1Kb in size
into
> our sql server. This data will be READONLY and might be updated every
> quarter.
> I am yet to import it and do some performance testing.
> what should be the best approach to handling this much data...
> should I
> * have one large table?
> OR
> *Horizontally partition data into multiple tables?
> thanks for your time
> GK
>
>|||Thanks Dan,
I understand your point about administrative overheads, but I guess
they are much reduced if I use Partitioned views in SQL server 2000. Would
you advise using that?
As far as loading the data in clustered index sequene is concerned, are you
advising me to, say load data sorted on the clustered index column(s)?
Excuse me if this question is stupid but how do I load with index in place?
Also I didn't understand why it takes so much space to create an index for a
large table and less space if the table were to be split up. Does index
creation need that much space just while building the index, and does the
finally created index take less space?
Thank you for your efforts.
GK
"Dan Guzman" <danguzman@.nospam-earthlink.net> wrote in message
news:OmoikLmqDHA.2216@.TK2MSFTNGP12.phx.gbl...
> It's generally best to implement a single table unless you have a
> compelling reason to do otherwise. A single large table with
> appropriate indexes often performs quite well without the additional
> administrative complexity of partitioning.
> The main advantages of horizontal partitioning are related to admin
> tasks like data loads, bulk deletes and index creates. For example,
> you'll need about 30GB of free space to build a clustered index on your
> table but only a fraction of that amount if partitioned. On the other
> hand, if you can load data sorted in clustered index sequence, you can
> load with the index in place and forego the index create entirely.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> --
> SQL FAQ links (courtesy Neil Pike):
> http://www.ntfaq.com/Articles/Index.cfm?DepartmentID=800
> http://www.sqlserverfaq.com
> http://www.mssqlserver.com/faq
> --
> "GK" <ksgkrishremovethis@.yahoo.co.uk> wrote in message
> news:edyKf7gqDHA.2496@.TK2MSFTNGP09.phx.gbl...
> > Hi All,
> > I have to import about 26 million rows, each row about 1Kb in size
> into
> > our sql server. This data will be READONLY and might be updated every
> > quarter.
> > I am yet to import it and do some performance testing.
> > what should be the best approach to handling this much data...
> > should I
> > * have one large table?
> > OR
> > *Horizontally partition data into multiple tables?
> >
> > thanks for your time
> > GK
> >
> >
> >
>|||Just found this good msdn article on partioning data.
http://msdn.microsoft.com/library/default.asp?URL=/library/techart/PartitionsInDW.htm
and it says... (although they are talking about a data warehouse; mine will
not be a data warehouse exactly, but it won't be an OLTP system either. It'll be a
read-only table with quarterly updates)
<quote>
Query Speed
Query speed should not be considered a reason to partition the data
warehouse relational database. Query performance is similar for partitioned
and non-partitioned fact tables. When the partitioned database is properly
designed, the relational engine will include in a query plan only the
partition(s) necessary to resolve that query. For example, if the database
is partitioned by month and a query is conditioned on January 2000, the
query plan will include only the partition for January 2000. The resulting
query will perform well against the partitioned table, about the same as
against a properly indexed combined table with a clustered index on the
partitioning key.
</quote>
|||> I understand your point about administrative overheads, but I guess
> they are much reduced if I use Partitioned views in SQL server 2000.
Would
> you advise using that?
It depends on the problem you are trying to solve. The article you
referenced in your other post does a pretty good job of pointing out the
pros and cons. If your primary concern is query performance, then PVs
aren't the answer. Personally, I wouldn't change my design to accommodate
partitioning unless the additional complexity is more than offset by the
benefits of load speed and maintenance. IMHO, your quarterly data load
probably doesn't justify the use of PVs.
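For reference, a local partitioned view in SQL Server 2000 looks roughly like the sketch below. The table and column names are invented for illustration; the CHECK constraint on the partitioning column is what lets the optimizer skip members that can't satisfy a query.

```sql
-- One member table per quarter; identical structure in each.
CREATE TABLE dbo.Facts_2003Q3
(
    QuarterID int NOT NULL CHECK (QuarterID = 200303),
    FactID    int NOT NULL,
    Payload   varchar(900) NOT NULL,
    CONSTRAINT PK_Facts_2003Q3 PRIMARY KEY (QuarterID, FactID)
)

CREATE TABLE dbo.Facts_2003Q4
(
    QuarterID int NOT NULL CHECK (QuarterID = 200304),
    FactID    int NOT NULL,
    Payload   varchar(900) NOT NULL,
    CONSTRAINT PK_Facts_2003Q4 PRIMARY KEY (QuarterID, FactID)
)
GO

-- The view presents the member tables as a single logical table.
CREATE VIEW dbo.Facts
AS
SELECT QuarterID, FactID, Payload FROM dbo.Facts_2003Q3
UNION ALL
SELECT QuarterID, FactID, Payload FROM dbo.Facts_2003Q4
```

A query with `WHERE QuarterID = 200304` would then touch only the Q4 member table.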
> As far as loading the data in clustered index sequence is concerned,
are you
> advising me to, say load data sorted on the clustered index column(s)?
> Excuse me if this question is stupid but how do I load with index in
place?
Just create the table and its clustered index. If you then load (bulk
insert) data in sequence by the clustered index, load performance will
be good and you won't need to create the clustered index afterward.
Non-clustered indexes can be created after the load.
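A rough sketch of what that looks like. The file path, table, and column names are just examples; it assumes the data file is already sorted by the clustered key, which is what the ORDER hint declares to the engine.

```sql
-- Create the table with its clustered index up front.
CREATE TABLE dbo.BigTable
(
    RowID    int          NOT NULL,
    LoadDate datetime     NOT NULL,
    Payload  varchar(900) NOT NULL,
    CONSTRAINT PK_BigTable PRIMARY KEY CLUSTERED (LoadDate, RowID)
)

-- ORDER tells BULK INSERT the input already matches the clustered index,
-- so rows stream into place with no sort and no index rebuild afterward.
BULK INSERT dbo.BigTable
FROM 'c:\data\bigtable.dat'
WITH (ORDER (LoadDate ASC, RowID ASC), TABLOCK)

-- Non-clustered indexes are cheaper to build after the load completes.
CREATE NONCLUSTERED INDEX IX_BigTable_RowID ON dbo.BigTable (RowID)
```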
> Also I didn't understand why it takes so much space to create index
for a
> large table and less space if the table were to be split up. Does
index
> creation need that much space just while building the index and the
> finally created index takes less space?
The space requirement to build the clustered index is a consideration
with large tables. You need free space of about 120% of the original
table size. This is because the entire table is rebuilt during the
create so space for old and new data is needed plus some sort work
space. After the create, space for the old data is released and the end
result is that slightly more space than the original heap is allocated
to accommodate the non-leaf nodes of the clustered index.
With multiple smaller tables, you can build each clustered index
separately. Consequently, you'll only need free space to accommodate
120% of the largest table in the lot.
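To put rough numbers on it using the figures from this thread (26 million rows at about 1KB each; the 120% is a rule of thumb, and the four-way quarterly split is only an example):

```sql
-- Back-of-envelope: ~26 GB of data, so a clustered index build on one big
-- table needs ~30 GB free, but per-quarter tables need only a quarter of that.
SELECT
    26000000.0 * 1024 / 1073741824           AS table_gb,            -- ~24.8 GB
    26000000.0 * 1024 * 1.2 / 1073741824     AS free_gb_one_table,   -- ~29.8 GB
    26000000.0 * 1024 * 1.2 / 4 / 1073741824 AS free_gb_per_quarter  -- ~7.4 GB
```

The decimal literal (`26000000.0`) avoids integer overflow in the intermediate multiplication.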
--
Hope this helps.
Dan Guzman
SQL Server MVP
|||That was very helpful.
Thanks Dan.
Monday, February 20, 2012
Large Export of Data from one DB of one Structure to Another
Hi guys,
Hopefully this is the right place to ask.
Basically we have two large databases, one of which is updated from the other monthly.
For explanation purposes:
DB1 = Source DB
DB2 = Destination DB
The problem I require a solution to is: how do I insert rows from a table in DB1 into DB2, and recover and store the identity of each new row against the ID of the existing row? This is so that I can maintain constraints when it comes to inserting rows into the next table, and the next, and so on.
This process of storing the IDs as lookups will need to be done for almost every table, of which there are 20.
The best idea we have at the minute is to create a table with two columns for each table (drop it and recreate it after each table has been exported) that contains the two IDs, new and old.
This will require using a cursor for each row in the existing table, inserting it into the new table, using SCOPE_IDENTITY() to get the new ID, and then inserting the two values into the temp table.
This feels to me like it will be very slow, particularly when I bear in mind how much data we have.
Does anyone have any better ideas? (Sorry if the explanation isn't great; it's difficult to get across.)
Thanks
Ed
--db2 (destination)
CREATE TABLE MOVETO
(
iID uniqueidentifier,
iName nchar(50)
)
CREATE TABLE MOVETO_TABLETWO
(
iRowid int identity(1,1),
cAddress nvarchar(100)
)
--db1 (source)
CREATE TABLE MOVEFROM
(
TableTwo int identity(1,1),
iID uniqueidentifier,
cAddress nvarchar(100)
)
CREATE PROCEDURE MOVEDATA
AS
BEGIN
CREATE TABLE #GETIDENTITY
(
iIdentity int identity(1,1),
oldidentity uniqueidentifier,
newidentity uniqueidentifier DEFAULT NEWID()
)
--get old and new identities
INSERT INTO #GETIDENTITY (oldidentity)
SELECT iID
FROM MOVEFROM
WHERE ? -- filter left as a placeholder in the original post
--now hook up new identities to insert into MOVETO
--(MOVEFROM has no name column, so only iID is populated here)
INSERT INTO db2.dbo.MOVETO (iID)
SELECT G.newidentity
FROM #GETIDENTITY G INNER JOIN MOVEFROM MF
ON G.oldidentity = MF.iID
--INSERT INTO MOVETO_TABLETWO
INSERT INTO db2.dbo.MOVETO_TABLETWO (cAddress)
SELECT MF.cAddress
FROM #GETIDENTITY G INNER JOIN MOVEFROM MF
ON G.oldidentity = MF.iID
DROP TABLE #GETIDENTITY
END
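One set-based alternative worth considering, if the servers are on SQL Server 2008 or later: MERGE can reference source columns in its OUTPUT clause (plain INSERT cannot), so the old key can be captured alongside the new identity without a cursor. A sketch against the tables above; the `1 = 0` join predicate is a deliberate trick so every source row falls into the NOT MATCHED branch.

```sql
-- Capture old GUID -> new identity pairs in one set-based statement.
DECLARE @map TABLE (old_id uniqueidentifier, new_id int)

MERGE INTO db2.dbo.MOVETO_TABLETWO AS dst
USING (SELECT iID, cAddress FROM db1.dbo.MOVEFROM) AS src
    ON 1 = 0                              -- never matches: every row is inserted
WHEN NOT MATCHED THEN
    INSERT (cAddress) VALUES (src.cAddress)
OUTPUT src.iID, inserted.iRowid INTO @map;  -- old key next to the new identity

-- @map now drives the inserts into the dependent tables,
-- preserving the relationships without row-by-row processing.
```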