Wednesday, March 21, 2012

Last report item or RowNumber for details grouping

I have a report with details grouping on a table. What I need to do is put a row number only on the parent row and skip the child rows. When I use RowNumber("GroupName"), of course, it gives me the current row number. Is there a way to count only the parents?

This post may help.

http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=903428&SiteID=1

cheers,

Andrew

|||

Hello,

Try putting this in the same row as your parent group row:

=RunningValue(Fields!GroupFieldName.Value, CountDistinct, Nothing)

This counts the distinct group values seen so far, so it increments once for each parent group rather than once per detail row.

Jarret

Wednesday, March 7, 2012

Large table search performance

I have a large table with 300,000 rows, and it will grow in the future.
The table contains member profile details and has many columns (about 70).
The stored procedure that does the search uses many of those columns to filter the results.
That is why the WHERE clause is very long and the whole operation takes too much time; sometimes I even get a timeout exception from SQL Server.
I would be happy to get some ideas on how to do this search.
I am thinking of maybe dividing the table into many tables by State.


Something interesting is that the operation takes a long time for the first search, while the second and third are much faster; but after a few minutes of not doing anything, the first search is very slow again.

70 columns in a table is probably too many, though it is hard to say without more detail. There are a few options to consider:

1. You can partition the table vertically. For example, split the table into two - one with the most frequently accessed columns and another with the infrequently accessed ones. This makes each table smaller in terms of row size, so queries will be faster than against a single wide table. It does complicate data modifications, since you have to issue two DML statements instead of one, but whether that matters depends on your application and the nature of the table.

2. You can use horizontal partitioning. Whether it helps depends on the choice of partitioning key - ideally it is part of every search criterion, or at least one of the primary search conditions. You also get some manageability benefits. Both options are sketched below.
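
As a rough sketch of both options, assuming a hypothetical member profile table with made-up column names (MemberID, State, City, and so on - adapt to your actual schema):

-- Vertical split (hypothetical names): keep the frequently searched
-- columns in one narrow table and move the rest into a companion
-- table that shares the same primary key.
CREATE TABLE dbo.MemberSearch
(
    MemberID  int         NOT NULL PRIMARY KEY,
    State     char(2)     NOT NULL,
    City      varchar(50) NOT NULL,
    BirthDate date        NULL
    -- ... the other frequently searched columns
);

CREATE TABLE dbo.MemberDetails
(
    MemberID  int          NOT NULL PRIMARY KEY
        REFERENCES dbo.MemberSearch (MemberID),
    Biography varchar(max) NULL
    -- ... the rarely accessed columns
);

-- Horizontal partitioning, using State as the key the way the
-- question suggests (the boundary values here are arbitrary).
CREATE PARTITION FUNCTION pfState (char(2))
    AS RANGE RIGHT FOR VALUES ('G', 'N', 'T');

CREATE PARTITION SCHEME psState
    AS PARTITION pfState ALL TO ([PRIMARY]);

CREATE TABLE dbo.MemberProfilePartitioned
(
    MemberID int     NOT NULL,
    State    char(2) NOT NULL,
    -- ... the remaining columns would go here
    CONSTRAINT PK_MemberProfilePartitioned
        PRIMARY KEY (MemberID, State)
) ON psState (State);

Note that table partitioning is an Enterprise Edition feature in SQL Server releases of this era, so check your edition before going that route.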

Anyway, the approach depends on your needs. You can find whitepapers on MSDN that discuss partitioning, for example; please take a look at those as well. You could also leave the table as is and try to optimize the queries, say by adding more indexes, creating additional statistics, or restricting the search columns. Each option has its pros and cons. For example, adding more indexes will slow down data modification operations.
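
For instance, an index covering one common combination of search columns might look like this (a sketch only; the table and column names are hypothetical):

-- Key the index on the most selective search columns and INCLUDE the
-- columns the query returns, so the search never touches the wide table.
CREATE NONCLUSTERED INDEX IX_MemberProfile_Search
    ON dbo.MemberProfile (State, City, BirthDate)
    INCLUDE (FirstName, LastName, Email);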

The reason the queries are faster after the first time is plan caching. It looks like you are generating a lot of ad-hoc queries because of the variable number of search conditions, so this behavior is expected. Of course, a plan might be removed from the cache later due to memory pressure or lack of reuse. It is hard to tell more without knowing the type of queries and the nature of the workload.
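
One common way to keep those variable search plans reusable is to build the WHERE clause dynamically but still parameterize it with sp_executesql. A minimal sketch, assuming a hypothetical dbo.MemberProfile table and only two of the optional search parameters (extend the pattern to all of them):

CREATE PROCEDURE dbo.SearchMembers
    @State char(2)     = NULL,
    @City  varchar(50) = NULL
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @sql nvarchar(max) = N'
        SELECT MemberID, FirstName, LastName
        FROM dbo.MemberProfile
        WHERE 1 = 1';

    -- Append only the conditions the caller actually supplied.
    IF @State IS NOT NULL SET @sql += N' AND State = @State';
    IF @City  IS NOT NULL SET @sql += N' AND City = @City';

    -- Each distinct combination of conditions compiles once and is reused.
    EXEC sys.sp_executesql @sql,
        N'@State char(2), @City varchar(50)',
        @State = @State, @City = @City;
END;

-- Example call: EXEC dbo.SearchMembers @State = 'NY';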

|||

What type of partitioning will give me the best performance, vertical or horizontal?

For horizontal partitioning, what is the best way to do it:

How many tables?

Which columns in each table?

The query that causes all the problems is the search query, and it uses 14 columns in the WHERE clause. Maybe it is best to divide the table into two tables, so that one holds the 14 columns used by the search query and the second holds the rest of the columns - would that give the best performance?

Friday, February 24, 2012

Large number of rows issue

Hi,

There is a table with the following structure:

Date-Time of Operation | Message | Details | Reason | Username | IP | MAC-Address

A user can fire a query based on some conditions on the columns. The maximum number of rows that can be returned is 1 billion.

By default, the data returned to the view is sorted on Date-Time. The user can further sort the data in the view on Message, Details, Reason, Username, IP, or MAC-Address.

Suppose I have a scenario where the user first gets the 1 billion records, and then some records are inserted into the database. Then the user sorts on the IP column. Now, if I fire the query again for the different sort order, new data will be returned, and thus I will get a different result.

One way is to keep the data in memory. But the number of records is huge, so this is not a good solution. So what should I do to get the disconnected data without actually loading everything into memory?

Regards,
Sunil

One way would be to have a stored procedure fire this information into another table, and then have a view of that table. That way you are in control of the "loading" of that data for reporting purposes.
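
A minimal sketch of that idea, assuming a hypothetical dbo.AuditLog source table and a pre-created dbo.AuditLogSnapshot table with the same columns:

-- Run periodically (e.g. from a SQL Agent job) to refresh a stable
-- reporting copy of the data.
CREATE PROCEDURE dbo.RefreshAuditSnapshot
AS
BEGIN
    SET NOCOUNT ON;

    TRUNCATE TABLE dbo.AuditLogSnapshot;

    INSERT INTO dbo.AuditLogSnapshot
    SELECT OperationDateTime, Message, Details, Reason,
           Username, IP, MacAddress
    FROM dbo.AuditLog;
END;
GO

-- Reports then read from a view over the snapshot, so the results do
-- not shift while the user re-sorts.
CREATE VIEW dbo.vAuditReport
AS
SELECT OperationDateTime, Message, Details, Reason,
       Username, IP, MacAddress
FROM dbo.AuditLogSnapshot;

|||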

Hi,

I had a similar case in my company.

The solution in that case was to create a second instance on a different server, which keeps denormalized data partitioned by a date field, and to load the most recent partition into memory.

Also, the queries have to use the date field, since that is the criterion the index is built on.

The OLTP database server was using 4 GB of RAM; this second instance, on the other hand, was using 12 GB of RAM just to support these reporting processes.

Perhaps you could also create a covering index on the table, but this will increase the size of the data.
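
A covering index along those lines might look like this (a sketch only; the column names follow the table description in the question):

-- Keyed on the date column, with the other reported columns included
-- so the report query can be answered from the index alone.
CREATE NONCLUSTERED INDEX IX_AuditLog_OperationDateTime
    ON dbo.AuditLog (OperationDateTime)
    INCLUDE (Message, Details, Reason, Username, IP, MacAddress);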

Eralper