
Wednesday, March 28, 2012

Latest order date

Hi,
I have an order table which contains the following fields: 1) orderid (this
is the order number), 2) clientid, 3) orderdate.
I need a script so that I can find those clientids which have not placed an
order for at least 90 days.
Can you tell me how to program it?

Read the documentation. You want to select clientids which are NOT IN a
subquery that selects all clients that have ordered in the last 90 days. You
could also use a join where the max order date is less than 90 days ago.
RR
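
As a minimal sketch of the join alternative mentioned above - the Clients and Orders table names are assumptions here, since the original post does not name its tables:

SELECT c.ClientID
FROM Clients AS c
LEFT JOIN Orders AS o
       ON o.ClientID = c.ClientID
      AND o.OrderDate >= DATEADD(day, -90, GETDATE())
WHERE o.ClientID IS NULL   -- no order placed in the last 90 days

A LEFT JOIN with an IS NULL check also avoids the trap NOT IN falls into when the subquery can return NULLs.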
"qjlee" <qjlee@.discussions.microsoft.com> wrote in message
news:B84E46A2-9291-4D17-A625-3DBADD9746B1@.microsoft.com...
> Hi,
> I have an order table which contains the following fielde: 1). orderid
(this
> is the order number 2). clientid 3). orderdate.
> I need to script so that I can find out those clientids which do not have
> place an order for at least 90 days.
> Can you tell me how to program it?
>
|||Something like this should give you the client and their last order date.
declare @DaysSinceOrder as numeric
set @DaysSinceOrder = 90

select
    clientid,
    max(orderdate)
from "YourTableHere"
group by clientid
having max(orderdate) < getdate() - @DaysSinceOrder
"qjlee" wrote:

> Hi,
> I have an order table which contains the following fielde: 1). orderid (th
is
> is the order number 2). clientid 3). orderdate.
> I need to script so that I can find out those clientids which do not have
> place an order for at least 90 days.
> Can you tell me how to program it?
>
|||Please post DDL, sample data and expected results
(http://www.aspfaq.com/etiquette.asp?id=5006 )
Since you have a ClientID column in your Orders table, I am guessing that
you have a Clients table somewhere. Here is a complete guess:
SELECT ClientID, ClientName
FROM Clients
WHERE ClientID NOT IN (SELECT ClientID
                       FROM Orders
                       WHERE DATEDIFF(d, OrderDate, CURRENT_TIMESTAMP) <= 90)
"qjlee" <qjlee@.discussions.microsoft.com> wrote in message
news:B84E46A2-9291-4D17-A625-3DBADD9746B1@.microsoft.com...
> Hi,
> I have an order table which contains the following fielde: 1). orderid
> (this
> is the order number 2). clientid 3). orderdate.
> I need to script so that I can find out those clientids which do not have
> place an order for at least 90 days.
> Can you tell me how to program it?
>
>

Monday, March 19, 2012

Last order date


I have a fact table that contains, amongst other facts, the orderdate. There is 1 line per order.
I would like to show, for each "sold from BP", the last order date, like I do in the example below for the "Transaction amount".
Can anyone give me a hint on how the MDX syntax should be?

select NON EMPTY {[Measures].[Transaction amount (EURO)]} ON COLUMNS,
NON EMPTY {[sold from BP.BP(ID)].[All sold from BP(ID)]} ON ROWS
from [finalizedtransactionscube]
where [Transaction type.Transaction type].[PCI+PCC]


Hopefully this gives you an idea on how to proceed with this:

Code Snippet

with member [measures].[x] as
    MAX(
        EXISTS(
            [Ship Date].[Date].[Date].Members,
            { Product.SubCategory.CurrentMember },
            'Internet Sales'
        ),
        [Ship Date].[Date].CurrentMember.MemberValue
    )
select
    [x] on 0,
    TAIL(Product.SubCategory.SubCategory.Members, 10) on 1
from [Adventure Works]

I use TAIL on Axis(1) to just limit the data returned. The calculated member at the top is where all the work is performed.

What this says is build a set of ship date members associated with the current product subcategory based on the relationship between these two dimensions as defined in the Internet Sales measure group. Take the max value for ship date from this set and return that as the measure's value. The MAX() function was the tricky part.

Good luck,
Bryan

Last Name Only

I am fairly new to Crystal Reports and need some help with a report design. I am trying to extract just the Last Name from a field that contains both First and Last Name in one string.
Any help would be greatly appreciated.
Thanks,
Hansel

E.g. create a formula and write this:
// sp holds the full name; replace the test value with your name field
stringVar sp;
sp := "Test1 test2";
// return everything after the first space, i.e. the last name
mid(sp, instr(sp, " ", 1) + 1, len(sp))
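
If you would rather do the split on the SQL side instead of in the report formula, here is a rough T-SQL sketch - the table and column names are made up. It takes everything after the last space (the formula above takes everything after the first space; for plain "First Last" values the result is the same):

-- Hypothetical names: a Members table with a FullName column.
-- Appending a space guards against values that contain no space at all.
SELECT FullName,
       LTRIM(RIGHT(FullName, CHARINDEX(' ', REVERSE(FullName) + ' ') - 1)) AS LastName
FROM Members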

Wednesday, March 7, 2012

Large Textboxes and paging in PDF files

I've got a textbox bound to a text sql server field that contains about four
paragraphs of information. When I export to pdf the page breaks don't work
correctly. It starts a new page unless the entire contents of the field will
fit on the existing page. I would like it to fit what it can on the existing
page and then flow over to the next page as one would expect.
Another thing I noticed is that if I shrink the textbox to about 3.5 inches
then it works as I would like except that it's in a narrow column. Anything
wider than that and the page break issue pops up again. Thanks.

I am currently struggling with exactly the same issue. The reason that
making the textbox narrower seems to fix the page break issue is, I believe,
because the resultant narrow column of text is longer than one page can
accommodate, therefore RS knows it has to break the field anyway and so it
just renders it without throwing a new page first. If the textbox can fit
completely on one page then RS bumps it over to the next page. This would be
fine as long as you can turn it off from a property but it doesn't seem like
you can.
Has anyone out there got a way of fixing this or working around it?
Cs.
"MR" wrote:
> I've got a textbox bound to a text sql server field that contains about four
> paragraphs of information. When I export to pdf the page breaks don't work
> correctly. It starts a new page unless the entire contents of the field will
> fit on the existing page. I would like it to fit what it can on the existing
> page and then flow over to the next page as one would expect.
> Another thing I noticed is that if I shrink the textbox to about 3.5 inches
> then it works as I would like except that it's in a narrow column. Anything
> wider than that and the page break issue pops up again. Thanks.

Large text Insert

Hi,

I have a large text which contains special characters, and I need to insert it into the "ErvTxt" field, which is a varchar field.

I get an error:

Error: Line 1: Incorrect syntax near '1'.
Unclosed quotation mark before the character string ', 19 )'.
At line 4467

SQL statement: INSERT INTO tbl1 (ZPERS, Status, ErvTxt, VENTS, DNR, ANAME, Fall, Zust, GER, Empf, BET, SSK, GA, JAHR, PZ, LAUFNR, RVLAUFNR, EDAT, ETIME, VDAT, UTLAUFNR, Bew, AINR, ErvNr, PrintStatus, FristNr, HDAT, HTIME, RVExportiert, CrDat, CrPers, GATT, AZ, NR) VALUES ( 'STS', -1, '0123\'1\'00\'0138-33/02(1)\'J\'F\'452\'R020311\'KL01\'ZB\'01\'C\'00001\'03\'i\'000000000\'430200001\'20030116\'0905\'\'000000460\'20030116\'1100\'\n0242\'3\'15\'00\'00000001000\'\'\'-\'00000000000100\'\'\'\'\'\'\'\'00100?¥00000000000100?¥00000000000100?¥0000000000010000000000015699200000\'0000000363?¥00000000001000?¥00000000001000 rrrrrrrrrrrr ssss\'\'RI20030\'101012003010\'1200301160\'0\'\'\'\'\'\'\'\'\'\'\'\'\'\'\'\'\'\'\'\'\n', '', 0, '0138-33/02(1)', 'J', 'F', 97, 'R020311', 'KL01', 'ZB', 1, 3, 'i', 0, 430200001, '2003-01-16 12:00:00', '1899-12-30 09:05:00', NULL, 460, 0, 2927, 1744, 0, 0, '2003-01-16 12:00:00', '1899-12-30 11:00:00', NULL, '2003-01-16 04:04:04', 'STS', 'C', '00001', 19 )

Can anyone tell me what's wrong?

Thanks.

You need to escape the single quotes inside the varchar/string value. In T-SQL that means doubling them (''), not prefixing them with a backslash as the statement above does.
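
A minimal sketch of the difference, using a made-up fragment of the statement rather than the full column list:

-- Fails: T-SQL does not treat \' as an escaped quote, so the string literal ends early
-- INSERT INTO tbl1 (ZPERS, Status, ErvTxt) VALUES ('STS', -1, '0123\'1\'00')

-- Works: double each embedded single quote instead
INSERT INTO tbl1 (ZPERS, Status, ErvTxt) VALUES ('STS', -1, '0123''1''00')

If the statement is built in application code, using a parameterized command instead of string concatenation sidesteps the escaping problem entirely.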

Large table search performance

I have a large table with 300,000 rows, and it will grow in the future.
The table contains member profile details and it has many columns (about 70).
The stored procedure that does the search uses many of those columns.
That is why the WHERE clause is very long and the whole operation takes too much time; sometimes I get a timeout exception from SQL Server.
I'd be happy to get some ideas on how to do this search.
I am thinking of maybe dividing the table into many tables by State.


Something interesting is that the operation takes a long time for the first search, the second and third searches are then much faster, and after a few minutes of not doing anything the first search is very slow again.

70 columns in a table is probably too many. It is hard to say. But there are a few options to consider:

1. You can partition the table vertically. For example, split the table into two - one with the most frequently accessed columns and another with the infrequent ones. This will make each table smaller in terms of row size, and queries will be faster than accessing a single wide table. It does complicate data modifications, since you have to issue two DML statements instead of one, but whether that matters depends on your application and the nature of the table (see the sketch after option 2 below).

2. You can use partitioning (horizontal). But it depends on the choice of your partitioning key - it should be part of every search criterion, or at least one of the primary search conditions. You also get some manageability benefit.
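
A rough sketch of the vertical split from option 1 - all table and column names here are invented, since the real schema is not shown in the thread:

-- Narrow "search" table: the key plus the columns used in the WHERE clause
CREATE TABLE MemberSearch
(
    MemberID  int         NOT NULL PRIMARY KEY,
    State     char(2)     NOT NULL,
    City      varchar(50) NULL,
    BirthDate datetime    NULL
    -- ... the rest of the frequently searched columns
);

-- Wide "detail" table: everything else, 1:1 with MemberSearch
CREATE TABLE MemberDetail
(
    MemberID int          NOT NULL PRIMARY KEY
             REFERENCES MemberSearch (MemberID),
    AboutMe  varchar(max) NULL,
    PhotoUrl varchar(400) NULL
    -- ... the remaining, rarely searched columns
);

The search procedure then filters only on MemberSearch and joins to MemberDetail for the rows it actually returns.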

Anyway, the approach depends on your needs. You can find whitepapers on MSDN that discuss partitioning, for example; please take a look at those as well. You could also leave the table as it is and try to optimize the queries by, say, adding more indexes, creating additional statistics, or restricting the search columns. Each option has its pros and cons; for example, adding more indexes will slow down data modification operations.
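
If the table stays as it is, a covering index over the most selective search columns is the usual first step. A hypothetical example (again, invented names):

-- Key columns are the most selective filters; INCLUDE covers columns the
-- query only returns, so the search can be answered from the index alone
CREATE NONCLUSTERED INDEX IX_MemberProfile_Search
    ON MemberProfile (State, City, BirthDate)
    INCLUDE (DisplayName, PhotoUrl);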

The reason why the queries are faster after the first time is plan caching. It looks like you are generating a lot of ad-hoc queries due to the variable number of search conditions, and this is expected. Of course, the plan might get removed from the cache later due to memory pressure or lack of reusability. It is hard to tell without knowing the type of queries and the nature of the workload.
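
One way to keep plan reuse with a variable number of search conditions is to build the WHERE clause dynamically but still pass the values as parameters through sp_executesql. A stripped-down sketch with invented names:

DECLARE @sql nvarchar(max);
DECLARE @State char(2);
DECLARE @MinAge int;

SET @State = 'CA';
SET @MinAge = 30;

-- Append only the conditions the caller actually supplied
SET @sql = N'SELECT MemberID FROM MemberProfile WHERE 1 = 1';
IF @State IS NOT NULL SET @sql = @sql + N' AND State = @State';
IF @MinAge IS NOT NULL SET @sql = @sql + N' AND Age >= @MinAge';

EXEC sp_executesql @sql,
    N'@State char(2), @MinAge int',
    @State = @State, @MinAge = @MinAge;

Each distinct combination of conditions still gets its own plan, but repeated searches with different values can reuse it.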

|||

What type of partitioning will give me the best performance, vertical or horizontal?

For horizontal partitioning, what is the best way to do it:

How many tables?

Which columns in each table?

The query that causes all the problems is the search query, and it uses 14 columns in the WHERE clause. Maybe it is best to divide the table into 2 tables, so that one holds the 14 columns used by the search query and the second holds the rest of the columns - will this give the best performance?

Monday, February 20, 2012

Large Insert Causing Problem With TempDB

Hello,

I have an SSIS package that basically inserts a large amount of data into a SQL Server table. The table contains sixty five columns, and a single load of data can contain two million records.

The 'loads' are split up into several 'daily' flat files. The package uses a ForEachFile loop to process each of the files. As each file is processed, the data from the files is loaded into a SQL Server table (destination).

Apparently, as the package is running, TempDB begins to consume a lot of disk space. The data file for TempDB on this particular server is configured to grow in 50MB increments with unrestricted file growth. During the last run of the package, the data file grew to 17GB. I ran the following and got the data file size down to 50MB:

USE TempDb
GO
-- Shrink the tempdb data file (the second argument is the target size in MB)
DBCC SHRINKFILE(tempdev, 1)

Should I consider incorporating this code as part of the package, or is there something else I should consider to configure the SSIS package so that I don't run into space problems with TempDB?

Thank you for your help!

cdun2

Be sure you are loading using "fast load" and set the Maximum Insert Commit Size to a reasonable value (100,000 perhaps).

|||

Thanks. I'll check into this.

cdun2