
Hi All,
We have a table in SQL Server 2000 which has about 250 million records,
and this will be growing by 100 million every week. At any point the
table should contain just 13 weeks of data, so every week the oldest
data has to be deleted and new data will be loaded the first day of
the week. A job deletes 100 million rows every week, but because it
runs out of transaction log space the job is not successful.
Can you please help with the approaches we can take to fix this
problem? Performance and transaction log growth are the issues we are
facing. We tried deleting in steps too, but that also takes time. What
are the different ways we can address this quickly?
Please reply at the earliest.
Thanks
Harish

Hi Harish,
You should look at partitioning: keep cycling the partitions, and
simply CREATE TABLE and DROP TABLE them as they come and go. That way
you won't have to do any logging.
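A minimal sketch of that cycle; the table, column and week names here
are made up for illustration:

-- Create the table for the incoming week (hypothetical names).
CREATE TABLE SalesData_2005W45 (
    SaleDate DATETIME NOT NULL,
    Amount MONEY NOT NULL,
    CONSTRAINT CK_SalesData_2005W45_SaleDate
        CHECK (SaleDate >= '20051107' AND SaleDate < '20051114')
)

-- Drop the table holding the expired week. DROP TABLE only touches
-- metadata, so the 100 million old rows never hit the log row by row.
DROP TABLE SalesData_2005W32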
Tony
--
Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com - free video tutorials
"harish" <harish.prabhala@.gmail.com> wrote in message
news:1131162393.816615.122850@.f14g2000cwb.googlegroups.com...
> DELETING 100 million from a table w

(snip)

I agree with Tony in that SQL 2005 gives you partitioning, which is
great for these types of activities. But if you do the deletes in
smaller batches, you can back up the log periodically during the
operation to keep the transaction log from growing.
SET ROWCOUNT 10000   -- limit each DELETE to 10000 rows

WHILE 1 = 1
BEGIN
    -- YourTable, Col and xxx are placeholders for the real table,
    -- column and boundary value.
    DELETE FROM YourTable WHERE Col = xxx

    IF @@ROWCOUNT = 0
        BREAK
END

SET ROWCOUNT 0       -- restore the default (no limit)
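For example, a variation that backs up the log between batches might
look like this; the database name and backup path are placeholders:

SET ROWCOUNT 10000
WHILE 1 = 1
BEGIN
    DELETE FROM YourTable WHERE Col = xxx
    IF @@ROWCOUNT = 0
        BREAK
    -- Back up the log after each batch so it can be truncated
    -- and does not keep growing during the operation.
    BACKUP LOG YourDatabase TO DISK = 'D:\Backup\YourDatabase_log.bak'
END
SET ROWCOUNT 0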
Andrew J. Kelly SQL MVP
"harish" <harish.prabhala@.gmail.com> wrote in message
news:1131162393.816615.122850@.f14g2000cwb.googlegroups.com...
> DELETING 100 million from a table w

> Hi All
> We have a table in SQL SERVER 2000 which has about 250 million records
> and this will be growing by 100 million every w

> should contain just 13 w


> be loaded the first w

> And this deletes 100 million every w

> of transaction log space the job is not successful.
> Can you please help with what are the approaches we can take to fix
> this problem?
> Performance and transaction log are the issues we are facing. We tried
> deletion in steps too but that also is taking time. What are the
> different ways we can address this quickly.
> Please reply at the earliest.
> Thanks
> Harish

Hey,
Thanks. We have an index on four columns in this table, say A, B, C
and D.
Option 1: the DELETE statement's WHERE clause has conditions on A, B and C.
Option 2: the DELETE statement's WHERE clause has a condition on just A.
Which of the two will perform faster?
We tried something like this:
SET ROWCOUNT 5000;
WHILE 1 = 1
BEGIN
    DELETE FROM T1 WHERE dt < '20030101' -- original delete
    IF @@ROWCOUNT < 5000 BREAK;
END
SET ROWCOUNT 0;
1) Does setting ROWCOUNT first sort the table and then delete?
2) The above query is executed to delete all records satisfying the
condition in steps of 5000 until the delete is complete.
How can I stop it after one batch of 5000?

On 5 Nov 2005 11:04:29 -0800, harish wrote:
(snip)
Hi Harish,
I just replied to the same question in another thread.
Could you please ask your questions in JUST ONE place, and in JUST ONE
group? I've seen your messages scattered over several groups, and
several different messages in just this group. Many of them have
attracted replies. It's very hard to keep track of what is going on in
all these threads, and it's a waste of other people's time if someone
posts a reply to you that you had already received in another group.
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)

harish (harish.prabhala@gmail.com) writes:
> Thanks. We have an index on four columns in this table, say A, B, C
> and D.
> Option 1: the DELETE statement's WHERE clause has conditions on A, B and C.
> Option 2: the DELETE statement's WHERE clause has a condition on just A.
>
> Which of the two will perform faster?
The one that uses the clustered index on the table. In fact, given
the number of rows you are to delete, it is essential that your WHERE
condition uses the clustered index; otherwise you lose all the time
it takes just to locate the rows.
> We tried something like this:
>
> SET ROWCOUNT 5000;
> WHILE 1 = 1
> BEGIN
>     DELETE FROM T1 WHERE dt < '20030101' -- original delete
>     IF @@ROWCOUNT < 5000 BREAK;
> END
> SET ROWCOUNT 0;
>
> 1) Does setting ROWCOUNT first sort the table and then delete?
It will locate the rows by some means. If there is no good index,
this may lead to a scan of the table. But there should not be any
sorting, as there is no reason to sort the table.
By the way, 5000 rows at a time is far too low a batch size; 50000
is a minimum. Since you need to delete 100 million, I would even
try a million at a time.
Keep in mind that if you are running with full recovery, you still
need to back up the transaction log. Simple recovery may be a good
thing here.
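For instance, switching to simple recovery is a one-liner; the
database name below is a placeholder, and note that this breaks the
log backup chain, so take a full backup afterwards:

ALTER DATABASE YourDatabase SET RECOVERY SIMPLE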
> 2) The above query is executed to delete all records satisfying the
> condition in steps of 5000 until the delete is complete.
> How can I stop it after one batch of 5000?
The red button in Query Analyzer?
I still think you should consider partitioned views, with one table
per month, or one per ten days of the month. In that case deleting
ten days' worth of data is as easy as this (see the sketch below):
1) Create a new table for the next period.
2) Alter the view to include the new table and to exclude the table
   with the data that is to go.
3) Drop that table and lose 130 million rows instantly.
Check out partitioned views in Books Online.
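A minimal sketch of those steps, assuming one table per month and
made-up table and column names:

-- 1) Create the table for the next period, with a CHECK constraint
--    on the partitioning column.
CREATE TABLE Orders_200512 (
    OrderDate DATETIME NOT NULL
        CHECK (OrderDate >= '20051201' AND OrderDate < '20060101'),
    Amount MONEY NOT NULL
)

-- 2) Redefine the view to include the new table and leave out the
--    table whose data is to go.
ALTER VIEW OrdersView
AS
    SELECT OrderDate, Amount FROM Orders_200510
    UNION ALL
    SELECT OrderDate, Amount FROM Orders_200511
    UNION ALL
    SELECT OrderDate, Amount FROM Orders_200512

-- 3) Drop the expired table; its rows disappear without being
--    logged row by row.
DROP TABLE Orders_200509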
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp