Thursday, March 8, 2012
Can IIf() do a substring query on a stored field?
This is what I don't understand. Hidden rows with downline dealers
only have '5678'. But the visible row that needs a drill down might
have, in the dealerlineageid field (a field not to be
displayed),
'5678' and '2754' and 10 other dealer numbers (multiple dealer
numbers).
How would I know if '5678' is in there as a string value (just
one of the many numbers in that field) when the field has multiple
dealer numbers separated by commas? Wouldn't the expression for the
"hidden" value need to be a substring query? How would I do that?
Thanks,
Trint|||I always prefer preparing the data at the source. Where do you get the data
you want to render in the report? Is there a chance to normalize it before
passing it to the report?
HTH, Jens Suessmeyer.
--
http://www.sqlserver2005.de
--
"trint" <trinity.smith@.gmail.com> schrieb im Newsbeitrag
news:1113916563.747603.202200@.l41g2000cwc.googlegroups.com...
> Ok,
> This is what I don't understand. Hidden rows with downline dealers
> only has '5678'. But the visible row that needs a drill down might
> have in the dealerlineageid field (which is a field not to be
> displayed)
> '5678' and '2754' and 10 other dealer numbers (multiple dealer
> numbers).
> How would I know if '5678' is in there as a string value (which is just
> one of the many numbers in that field)? If it has multiple
> dealer numbers separated by commas? Wouldn't the expression for the
> "hidden" value need to be a substring query? How would I do that?
> Thanks,
> Trint
>|||No, it comes from a group within our company that is dedicated only
to SQL Server, and I am only in C#.NET and Reporting Services.
Trint
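For what it's worth, the expression Trint is after can be written with InStr in the row's Hidden property. A minimal sketch, assuming dealerlineageid holds a plain comma-separated list without spaces or quotes (the thread doesn't confirm the exact format) and '5678' is the dealer being tested:
=IIf(InStr("," & Fields!dealerlineageid.Value & ",", ",5678,") > 0, False, True)
Wrapping both the field and the search value in commas keeps '5678' from matching inside a longer number such as '56789'.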
Can I use the Top keyword in SQL Server 2003?
In this case you can add an id field which acts like a row number. Then you can use the following query:
select * from mytable where id between 1 and 10
It will return the top 10 rows.
Hope this will work
All the Best
With Regards
Vijay. R|||SQL server 2003 ?
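For the record, there is no SQL Server 2003 release (the versions around then were 2000 and 2005), and both support the TOP keyword directly; a minimal sketch with placeholder names:
-- First 10 rows by the chosen ordering column
SELECT TOP 10 *
FROM mytable
ORDER BY id
-- SQL Server 2005 and later also accept a parenthesized expression:
-- SELECT TOP (10) * FROM mytable ORDER BY id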
Can I use this thing to import data from a table into a view.
I need to import data from a source table (the table is in SQL) into a view. I
need to insert, delete or update the view to make sure all rows in the view
have the same data as the source table. Can you please show me how to design
this in an Integration Services project?
Thanks.|||Views contain no data -- there is nothing to insert, delete, or update.
Simply create the view, and every time you access it, the base tables
involved in the view will be queried. Therefore, the view will always have
the same data as the base tables.
Adam Machanic
Pro SQL Server 2005, available now
http://www.apress.com/book/bookDisplay.html?bID=457
--
"someone" <nospam@.stopspam.com> wrote in message
news:OusidbIcGHA.3840@.TK2MSFTNGP04.phx.gbl...
> Hi all,
> I need to import data from a source table(table is in Sql) into a view.
> I need to insert, delete or update the view to make sure all rows in the
> view have the same data as the source table. Can you please give me how to
> design this in a Integration Services project?
> Thanks.
>
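A minimal sketch of Adam's point (object names are placeholders): the view is just a stored query over the base table, so there is nothing to keep in sync.
-- Selecting from the view always reads the current rows of the base table
CREATE VIEW dbo.vSourceTable
AS
SELECT MemberID, Name
FROM dbo.SourceTable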
Thursday, February 16, 2012
Can I preserve all the rows in the Fact table when adding the dimension to the cube?
My fact table joins with a dimension table on a foreign key.
But there are some foreign keys in the fact table which don't exist in the
dimension table.
When I add the dimension to the cube, all non-matching rows will be filtered out.
Can I preserve all the rows in the Fact table when adding the dimension to the cube?
I'd suggest that as a part of your data load you establish a process which
selects all the foreign keys from the fact table that are not found in the
dimension and adds them to the data source for the dimension (it can be a table or
a view). Then your dimension and cube will be consistent.
Regards,
Ilona Shulman
Senior Development Consultant, DBA
SSE Inc
http://www.sseinc.com
"ad" <ad@.wfes.tcc.edu.tw> wrote in message
news:elh4y1lqEHA.1728@.TK2MSFTNGP10.phx.gbl...
> My fact table join with a dimension table with a foreign key.
> But there some foreign key in the fact table which doesn't exist in the
> dimension table.
> When I add the dimension in cube, all no match rows will be filtered out
> Can I reserve all the rows in Fact table when add the dimension in cube?
>
|||Analysis Services dislikes bad data quality,
so you have to ensure that all the keys in your fact table exist in your
dimension.
Generally we create an "unknown" member in the dimension.
For example, the key "-1" is the unknown dimension member.
Then every row in the fact table has -1 as the default value, so if the
dimension member doesn't exist, your fact table contains -1 and can be used
in the cube.
Change your ETL process to do this job.
"ad" <ad@.wfes.tcc.edu.tw> a crit dans le message de news:
elh4y1lqEHA.1728@.TK2MSFTNGP10.phx.gbl...
> My fact table join with a dimension table with a foreign key.
> But there some foreign key in the fact table which doesn't exist in the
> dimension table.
> When I add the dimension in cube, all no match rows will be filtered out
> Can I reserve all the rows in Fact table when add the dimension in cube?
>
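Both replies boil down to a data-load step like the following sketch (table and column names are placeholders, not from the thread):
-- Option 1 (Ilona): add any fact keys missing from the dimension source
INSERT INTO DimDealer (DealerKey, DealerName)
SELECT DISTINCT f.DealerKey, 'Unknown dealer'
FROM FactSales AS f
WHERE NOT EXISTS (SELECT * FROM DimDealer AS d WHERE d.DealerKey = f.DealerKey)
-- Option 2 (unknown member): create a single -1 member and point orphaned fact rows at it
-- INSERT INTO DimDealer (DealerKey, DealerName) VALUES (-1, 'Unknown')
-- UPDATE f SET DealerKey = -1
-- FROM FactSales AS f
-- WHERE NOT EXISTS (SELECT * FROM DimDealer AS d WHERE d.DealerKey = f.DealerKey)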
Tuesday, February 14, 2012
Can I read some rows from the middle of the results in DataReader?
hello..
I have 100,000 rows in the database and I want to read results, for eg: from 5000 to 5050, by DataReader.
I wrote this code to do this, but it's too slow:
' Fetch everything for the page part, then skip rows client-side
Dim SlctStr As String = "select * from topicstbl where partID like '" & PagePartID & "'"
Dim ReadCom As New SqlClient.SqlCommand
ReadCom.CommandText = SlctStr
ReadCom.Connection = MainLib.MyConnection
Dim MyReader As SqlClient.SqlDataReader = ReadCom.ExecuteReader()
' Discard the first 5000 rows one by one -- this is why it is slow:
' every skipped row still travels from the server to the client
Dim StartTNum As Long = 5000
For IR As Long = 0 To StartTNum - 1
    MyReader.Read()
Next
' Read the rows we actually want (note: 5500 here, though the post asks for 5050)
Do While MyReader.Read
    StartTNum += 1
    If StartTNum > 5500 Then Exit Do
    ' ... process the row ...
Loop
MyReader.Close()
Is there a better way to do the same thing than this code?
If you are using SQL 2005, use Row_Number() to create row numbering, and then in your query, add the WHERE clause:
WHERE MyRowNumber BETWEEN 5000 AND 5050
|||thank you..
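A sketch of the full query the reply describes, using the table from the original post (the ORDER BY column, topicID, is an assumption; any stable ordering column works, and @PagePartID stands in for the VB variable):
SELECT *
FROM (
    SELECT t.*, ROW_NUMBER() OVER (ORDER BY t.topicID) AS MyRowNumber
    FROM topicstbl AS t
    WHERE t.partID = @PagePartID
) AS numbered
WHERE MyRowNumber BETWEEN 5000 AND 5050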
Sunday, February 12, 2012
Can I Overlay Rows in a Table?
I've created a table with two group header rows and a detail row. I would
like the second group header to 'overlap with' the detail row when the
report prints. Is this possible? And if so, how do I go about
accomplishing this?
Thanks!|||Very good question. I'm curious as to how to do this too. Sorry this
isn't a helpful response, but hopefully others see this thread who can
answer it and know there's an interest.
Cindy wrote:
> I've created a table with two group header rows and a detail row. I would
> like the second group header to 'overlap with' the detail row when the
> report prints. Is this possible? And if so, how do I go about
> accomplishing this?
> Thanks!|||I don't want to hide my detail lines. I want my second group header to
print in the same visual row as my first row of detail.
Below is the best I can do with a textual visualization:
[Group Header #1]
[Group Header #2] Detail
Detail
Detail|||Thanks BYU,
That sounds like an approach. My second group header contains a memo field
with a lot of word-wrapping. My hope was that it would 'overlay' however
many detail lines it wraps out to. I believe with your suggestion I would
get a large space between my first detail row and my second row.
My crude illustration of what I am after:
[Group Header #1]
[Group Header #2 [Detail Row 1]
that could potentially [Detail Row 2]
wrap to multiple lines of [Detail Row 3]
text] [Detail Row 4]
[Detail Row 5]
[Group Header #1]
Your solution as I understand it:
[Group Header #1]
[Group Header #2 [Detail Row 1]
that could potentially
wrap to multiple lines of
text]
[Detail Row 2]
[Detail Row 3]
[Detail Row 4]
[Detail Row 5]
[Group Header #1]
Can I make my delete run faster?
On our OLTP 24/6 system we have a table with over 40,000,000 rows in
it, the need is to remove half of these rows. There are 7 indexes on
the table (one clustered, six non-clustered). Since the IT director
won't let me move it to a faster server, I have to delete the records
month by month. The data is not referential (I didn't design the db) so
I can't use cascading deletes and have to use a delete trigger:
CREATE TRIGGER [Delete From Joining Tables] ON dbo.Consignments
FOR DELETE
AS
SET NOCOUNT ON
IF NOT EXISTS(SELECT * FROM INSERTED)
BEGIN
delete from tbl1 where exists (select * from deleted as d where
d.[master id] = tbl1.[master id])
delete from tbl2 where exists (select * from deleted as d where
d.[master id] = tbl2.[master id])
delete from tbl3 where exists (select * from deleted as d where
d.[master id] = tbl3.[master id])
delete from tbl4 where exists (select * from deleted as d where
d.[master id] = tbl4.[master id])
. .. .. .. .. .. .. .. .. ..
. .. .. .. .. .. .. .. .. ..
(up to 17 tables)
END
The trigger works fine and is deleting the relevant data in the above
tables, however I am wondering if there is some way I can speed up the
deletes. I have tried using a while loop and deleting in piecemeal
however this doesn't seem to be any faster than using the following
statement:
delete from [consignments]
where [Date] >= '20011201' and [Date] <= '20011207'
Currently on our high spec'd dev server (10 disks, quad procs,
enterprise edn) I am experiencing odd timings, sometimes it takes 5
minutes to delete 1million rows, yet it can take over 30 mins to delete
1.5million rows. There is no one else accessing this server. I created
a further index on [master id] and [Date]; no visible increase in
delete speed. I have run DBCC INDEXDEFRAG on each index in the table and
updated the stats; no joy either. So, any ideas, you SQL MVPs?
Should I be using index hints? (I have read the SQL Server can do this
efficiently on its own.)
Cheers
qh|||Scott,
Why do you need a trigger?
Write a stored procedure instead.
Don't run the transaction to delete all rows at once. Divide the transaction into
small ones:
SET ROWCOUNT 100000  -- cap each DELETE at 100,000 rows
WHILE 1 = 1
BEGIN
    DELETE FROM MyTable WHERE ........
    IF @@ROWCOUNT = 0
    BEGIN
        BREAK
    END
END
SET ROWCOUNT 0  -- restore the default (no cap)
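On SQL Server 2005 and later, the same batching can be written without the session-level SET ROWCOUNT; a sketch, with the WHERE predicate left as a placeholder just as above:
WHILE 1 = 1
BEGIN
    DELETE TOP (100000) FROM MyTable WHERE ........  -- same predicate as above
    IF @@ROWCOUNT = 0 BREAK
END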
"Scott" <quackhandle1975@.yahoo.co.uk> wrote in message
news:1116427283.912954.281370@.g49g2000cwa.googlegroups.com...
> On our OLTP 24/6 system we have a table with over 40,000,000 rows in
> it, the need is to remove half of these rows. There are 7 indexes on
> the table (one clustered, six non-clustered). Since the IT director
> won't let me move it to a faster server, I have to delete the records
> month by month. The data is not referential (I didn't design the db) so
> I can't use cascading deletes and have to use a delete trigger:
> CREATE TRIGGER [Delete From Joining Tables] ON dbo.Consignments
> FOR DELETE
> AS
> SET NOCOUNT ON
> IF NOT EXISTS(SELECT * FROM INSERTED)
> BEGIN
> delete from tbl1 where exists (select * from deleted as d where
> d.[master id] = tbl1.[master id])
> delete from tbl2 where exists (select * from deleted as d where
> d.[master id] = tbl2.[master id])
> delete from tbl3 where exists (select * from deleted as d where
> d.[master id] = tbl3.[master id])
> delete from tbl4 where exists (select * from deleted as d where
> d.[master id] = tbl4.[master id])
> .. .. .. .. .. .. .. .. .. ..
> .. .. .. .. .. .. .. .. .. ..
> (up to 17 tables)
> END
> The trigger works fine and is deleting the relevant data in the above
> tables, however I am wondering if there is some way I can speed up the
> deletes. I have tried using a while loop and deleting in piecemeal
> however this doesn't seem to be any faster than using the following
> statement:
> delete from [consignments]
> where [Date] >= '20011201' and [Date] <= '20011207'
> Currently on our high spec'd dev server (10 disks, quad procs,
> enterprise edn) I am experiencing odd timings, sometimes it takes 5
> minutes to delete 1million rows, yet it can take over 30 mins to delete
> 1.5million rows. There is no one else accessing this server. I created
> a further index on [master id] and [Date] no visible increase in
> deletes. I have ran DBCC INDEXDEFRAG on each index in the table and
> Updated the stats, no joy also. So, any ideas you SQL MVP's?
> Should I be using index hints? (I have read the SQL Server can do this
> efficiently on its own.)
> Cheers
> qh
>|||If you are deleting half of the rows in this table, then I assume this is a
one time deal and not a daily operation. Most of the time spent grinding
away is SQL Server logging the deletes to the transaction log. In addition
to deleting the records in batches of say 100,000 at a time, consider
changing the recovery model, perhaps temporarily, of the database to
"simple" and executing a "checkpoint" between each iteration of deletes.
This will minimize the amount of transaction logging. You may also want to
drop indexes from the table prior to the mass delete and then re-create
them afterward, especially the primary / clustered index. Also,
temporarily setting the database to SINGLE_USER mode could help.
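A sketch of the sequence JT describes (MyDb is a placeholder; note that switching to SIMPLE breaks the transaction log backup chain, so a full backup afterward is advisable):
ALTER DATABASE MyDb SET RECOVERY SIMPLE

SET ROWCOUNT 100000
WHILE 1 = 1
BEGIN
    DELETE FROM [consignments]
    WHERE [Date] >= '20011201' AND [Date] <= '20011207'
    IF @@ROWCOUNT = 0 BREAK
    CHECKPOINT  -- allow the log to truncate between batches under SIMPLE recovery
END
SET ROWCOUNT 0

ALTER DATABASE MyDb SET RECOVERY FULL  -- restore the original model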
"Scott" <quackhandle1975@.yahoo.co.uk> wrote in message
news:1116427283.912954.281370@.g49g2000cwa.googlegroups.com...
> On our OLTP 24/6 system we have a table with over 40,000,000 rows in
> it, the need is to remove half of these rows. There are 7 indexes on
> the table (one clustered, six non-clustered). Since the IT director
> won't let me move it to a faster server, I have to delete the records
> month by month. The data is not referential (I didn't design the db) so
> I can't use cascading deletes and have to use a delete trigger:
> CREATE TRIGGER [Delete From Joining Tables] ON dbo.Consignments
> FOR DELETE
> AS
> SET NOCOUNT ON
> IF NOT EXISTS(SELECT * FROM INSERTED)
> BEGIN
> delete from tbl1 where exists (select * from deleted as d where
> d.[master id] = tbl1.[master id])
> delete from tbl2 where exists (select * from deleted as d where
> d.[master id] = tbl2.[master id])
> delete from tbl3 where exists (select * from deleted as d where
> d.[master id] = tbl3.[master id])
> delete from tbl4 where exists (select * from deleted as d where
> d.[master id] = tbl4.[master id])
> .. .. .. .. .. .. .. .. .. ..
> .. .. .. .. .. .. .. .. .. ..
> (up to 17 tables)
> END
> The trigger works fine and is deleting the relevant data in the above
> tables, however I am wondering if there is some way I can speed up the
> deletes. I have tried using a while loop and deleting in piecemeal
> however this doesn't seem to be any faster than using the following
> statement:
> delete from [consignments]
> where [Date] >= '20011201' and [Date] <= '20011207'
> Currently on our high spec'd dev server (10 disks, quad procs,
> enterprise edn) I am experiencing odd timings, sometimes it takes 5
> minutes to delete 1million rows, yet it can take over 30 mins to delete
> 1.5million rows. There is no one else accessing this server. I created
> a further index on [master id] and [Date] no visible increase in
> deletes. I have ran DBCC INDEXDEFRAG on each index in the table and
> Updated the stats, no joy also. So, any ideas you SQL MVP's?
> Should I be using index hints? (I have read the SQL Server can do this
> efficiently on its own.)
> Cheers
> qh
>|||JT wrote:
> If you are deleting half of the rows in this table, then I assume this is a
> one time deal and not a daily operation.
Yes, for political reasons, I can't simply create a new table and copy
the rows we want then rename the new/old table. Therefore on a weekend
(the only time the db isn't being used) I need to run a script that can remove
over 15,000,000 rows in a few hours. I should be able to do it now, however
I want to see if I can make it faster.
I'm using a trigger to ensure that I don't have any redundant data in
the other tables. Yes, RI should have been set up in the design stage;
however, in a perfect world, etc...
Thanks for the reply.
Scott|||Hi
This can be modified as:
DELETE tbl4
FROM tbl4
INNER JOIN deleted AS d ON d.[master id] = tbl4.[master id]
This will improve the performance.
best Regards,
Chandra
http://chanduas.blogspot.com/
http://groups.msn.com/SQLResource/
---
"Scott" wrote:
> On our OLTP 24/6 system we have a table with over 40,000,000 rows in
> it, the need is to remove half of these rows. There are 7 indexes on
> the table (one clustered, six non-clustered). Since the IT director
> won't let me move it to a faster server, I have to delete the records
> month by month. The data is not referential (I didn't design the db) so
> I can't use cascading deletes and have to use a delete trigger:
> CREATE TRIGGER [Delete From Joining Tables] ON dbo.Consignments
> FOR DELETE
> AS
> SET NOCOUNT ON
> IF NOT EXISTS(SELECT * FROM INSERTED)
> BEGIN
> delete from tbl1 where exists (select * from deleted as d where
> d.[master id] = tbl1.[master id])
> delete from tbl2 where exists (select * from deleted as d where
> d.[master id] = tbl2.[master id])
> delete from tbl3 where exists (select * from deleted as d where
> d.[master id] = tbl3.[master id])
> delete from tbl4 where exists (select * from deleted as d where
> d.[master id] = tbl4.[master id])
> .. .. .. .. .. .. .. .. .. ..
> .. .. .. .. .. .. .. .. .. ..
> (up to 17 tables)
> END
> The trigger works fine and is deleting the relevant data in the above
> tables, however I am wondering if there is some way I can speed up the
> deletes. I have tried using a while loop and deleting in piecemeal
> however this doesn't seem to be any faster than using the following
> statement:
> delete from [consignments]
> where [Date] >= '20011201' and [Date] <= '20011207'
> Currently on our high spec'd dev server (10 disks, quad procs,
> enterprise edn) I am experiencing odd timings, sometimes it takes 5
> minutes to delete 1million rows, yet it can take over 30 mins to delete
> 1.5million rows. There is no one else accessing this server. I created
> a further index on [master id] and [Date] no visible increase in
> deletes. I have ran DBCC INDEXDEFRAG on each index in the table and
> Updated the stats, no joy also. So, any ideas you SQL MVP's?
> Should I be using index hints? (I have read the SQL Server can do this
> efficiently on its own.)
> Cheers
> qh
>
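Putting Chandra's suggestion into the trigger from the original post gives something like this sketch (only the first few of the 17 child tables shown; in practice the optimizer often produces the same plan for the JOIN and EXISTS forms, so it is worth comparing both):
CREATE TRIGGER [Delete From Joining Tables] ON dbo.Consignments
FOR DELETE
AS
SET NOCOUNT ON
-- join each child table to the deleted pseudo-table instead of using EXISTS
DELETE t FROM tbl1 AS t INNER JOIN deleted AS d ON d.[master id] = t.[master id]
DELETE t FROM tbl2 AS t INNER JOIN deleted AS d ON d.[master id] = t.[master id]
DELETE t FROM tbl3 AS t INNER JOIN deleted AS d ON d.[master id] = t.[master id]
-- ... repeat for the remaining child tables (up to 17) ...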
Friday, February 10, 2012
Can I load only specific rows of data?
Is there a way to load only specific rows of data into a table in an SSIS package?
I am copying data from 2 separate ingres databases tables that have the same table structure into one SQLServer table using the ADO.NET DataReader Source for ODBC.
Let's say table1 from db1 and table2 from db2 for example.
table1 in db1 contains two rows:
MemberID - 1
Name - Rob
MemberID - 2
Name - James
table2 in db2 contains one row:
MemberID - 1
Name - Rob
MemberID - 2
Name - JAMES
I would like my SSIS job to load all data from table1 in db1 and then only load the data from table2 in db2 where the data in the row is different (as in 'JAMES').
Is there a data flow transformation that can help me do this?
I previously did this in SQL Server 2000 DTS but had to load the data into 2 separate 'holding' tables and use T-SQL to update table1 in db1, and was hoping I could cut out these steps in SSIS.
I was hoping that SSIS might have some new quick way to do this (lookups maybe?)
Can you help? Thank You
You can use the Lookup transform or the Merge Join transform. T-SQL is still a valid option if it works best for you. Jamie did a nice comparison here:
Get all from Table A that isn't in Table B
(http://www.sqlis.com/default.aspx?311)|||Thank you
I shall have a read of the article and give it a try
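If the T-SQL route is taken, a minimal sketch of the differential load, assuming both sources have first been staged into SQL Server tables (stage1 and stage2 are hypothetical names):
-- Load everything from the first source
INSERT INTO dbo.Members (MemberID, Name)
SELECT MemberID, Name FROM dbo.stage1

-- Then take from the second source only the rows that differ
-- (note: 'James' vs 'JAMES' differ only in case, so a case-sensitive
-- collation or a COLLATE clause is needed for EXCEPT to see them as different)
INSERT INTO dbo.Members (MemberID, Name)
SELECT MemberID, Name FROM dbo.stage2
EXCEPT
SELECT MemberID, Name FROM dbo.stage1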
Can I limit 'detail' rows per page?
Is there a way to limit the number of detail
rows to, let's say 5 per page?
Thanks,
Trint|||Hello, I would like to do the same thing. For example, I would like to create an invoice template and have a header section contain the Billto, Shipto addresses and a footer section contain the remittance section. However, the middle of the 'page' should have a fixed number of records for the detail items on the invoice. If more than the fixed amount of detail records is reached, a second page would be displayed. Any help or insight with this would be helpful. Thanks, Dave
--
Message posted via http://www.sqlmonster.com|||Add a group to your table, set the group expression to
=System.Math.Floor((RowNumber(Nothing)-1)/<# of rows you want to display in
a page>), and turn on page break on the group. The header and footer sections
would be the group header and footer, and you can mark them repeated on new
page.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
"Dave C via SQLMonster.com" <forum@.SQLMonster.com> wrote in message
news:c5d891b35fa44feeaef5957179524cdc@.SQLMonster.com...
> Hello, I would like to do the same thing. For example, I would like to
create an invoice template and have a header section contain the Billto,
Shipto addresses and a footer section contain the remittance section.
However, the middle of the 'page' should have a fixed number of records for
the detail items on the invoice. If more than the fixed amount of detail
records is reached, a second page would be displayed. Any help or insight
with this would be helpful. Thanks, Dave
> --
> Message posted via http://www.sqlmonster.com
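Plugging the 5 rows per page from the original question into the reply's formula gives:
=System.Math.Floor((RowNumber(Nothing)-1)/5)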
Can I just import the first several rows of a Text file to a Table using DTS?
I want to import Text files to the database using DTS, but I only want to
import the first several rows because the file size is very big and the
first row contains column names. I hope to check the field names first.
Anyone help me?|||I'm not a big DTS guy; you might check the DTS groups, but with BULK INSERT
you can specify WITH options like FIRSTROW and LASTROW.
A
<helloyou8888@.yahoo.ca> wrote in message
news:1149697121.715659.143370@.h76g2000cwa.googlegroups.com...
> Hello Group
> I want import Text files to database using DTS, but I only want to
> import first several Rows because the file size is very big and the
> first row contains colomn names, I hope to check the field names first.
> Anyone help me
>
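A minimal sketch of the BULK INSERT alternative mentioned above (path, table, and delimiters are placeholders; FIRSTROW and LASTROW count rows in the file):
-- Pull only rows 2-11, skipping the header line of column names
BULK INSERT dbo.StagingTable
FROM 'C:\data\bigfile.txt'
WITH (
    FIRSTROW = 2,       -- skip the header line
    LASTROW = 11,       -- stop after a handful of data rows
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)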