My table design is:
CREATE TABLE [HashKeyTotals] (
[hKey] [bigint] NOT NULL ,
[Total] [int] NULL ,
CONSTRAINT [PK_HashKeyTotals] PRIMARY KEY CLUSTERED
(
[hKey]
) ON [PRIMARY]
) ON [PRIMARY]
GO
I also have data files of the same structure. My goal is to import each data
file and update the Total column in HashKeyTotals where the hKey matches.
I could
A. Read each record (4.5 million) and send it to a stored procedure that does
a lookup and adds to the Total field.
or
B. Import the data file into a temp table, join it to the HashKeyTotals table
with some sort of calculated field summing the two totals, and put that into a
temp table to be loaded back into HashKeyTotals.
I'm not even sure whether B is possible; it's just an idea. Will it work, and
will it be that much faster than plan A? The HashKeyTotals table can have up to
25 million records, and I'd be loading a data file with 5 million records in it
each day.
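Roughly, I'm picturing something like this for B (just a sketch; the staging
table name, file path, and file format below are placeholders):
-- stage the day's file (path and format are placeholders)
CREATE TABLE #Staging (
[hKey] [bigint] NOT NULL,
[Total] [int] NULL
)
BULK INSERT #Staging
FROM 'C:\import\daily.dat'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')
-- sum the two totals into a second temp table...
SELECT h.hKey, h.Total + s.Total AS Total
INTO #NewTotals
FROM HashKeyTotals h
JOIN #Staging s ON s.hKey = h.hKey
-- ...then load #NewTotals back into HashKeyTotals somehow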
Any thoughts?
Thanks

So, you're incrementing the Total by what you get in the import table for
the corresponding key? If so, try:
CREATE TABLE [HashKeyTotals] (
[hKey] [bigint] NOT NULL ,
[Total] [int] NULL ,
CONSTRAINT [PK_HashKeyTotals] PRIMARY KEY CLUSTERED
(
[hKey]
) ON [PRIMARY]
) ON [PRIMARY]
GO
-- stands in for the imported daily data
CREATE TABLE [HashKeyTotals2] (
[hKey] [bigint] NOT NULL ,
[Total] [int] NULL ,
CONSTRAINT [PK_HashKeyTotals2] PRIMARY KEY CLUSTERED
(
[hKey]
) ON [PRIMARY]
) ON [PRIMARY]
GO
-- sample data: current totals and one day's import
insert HashKeyTotals values (1, 10)
insert HashKeyTotals values (2, 20)
insert HashKeyTotals2 values (1, 5)
insert HashKeyTotals2 values (2, 15)
go
-- set-based update: add each imported total to the matching row
update h
set
Total = h.Total + h2.Total
from
HashKeyTotals h
join HashKeyTotals2 h2 on h2.hKey = h.hKey
go
select * from HashKeyTotals
go
drop table HashKeyTotals, HashKeyTotals2
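One caveat: the UPDATE only touches hKeys that already exist in HashKeyTotals.
If the daily file can bring in new keys and you want them kept, follow the
UPDATE with something along these lines (a sketch against the same demo
tables):
-- add rows for keys that exist only in the import table
insert HashKeyTotals (hKey, Total)
select h2.hKey, h2.Total
from HashKeyTotals2 h2
where not exists
(select * from HashKeyTotals h where h.hKey = h2.hKey)
Also, a single 5-million-row UPDATE can generate a lot of transaction log; if
that becomes a problem, you can carve the work into batches, e.g. by deleting
the applied rows from the staging table as you go.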
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Toronto, ON Canada
"Mike" <Mike@.mike.com> wrote in message
news:uuIuedUcGHA.3936@.TK2MSFTNGP05.phx.gbl...
[original message snipped]

Yes, that's exactly what I need. Thank you very much.
"Tom Moreau" <tom@.dont.spam.me.cips.ca> wrote in message
news:%23LBk8iVcGHA.1276@.TK2MSFTNGP03.phx.gbl...
> [Tom's reply snipped]