I have a large database - 33 million rows - VERY WIDE table and am
running out of disk space - lovely ...
I have to move 7 million records off the table - no biggie - I'll just
dts the records to another server.
My problem is deleting the remaining data to make room on the server
without impacting/logging too much.
Any help would be appreciated.
Thanks,
Craig|||On Sep 12, 9:17 am, Craig <csomb...@gmail.com> wrote:
> [original post quoted in full; snipped]
truncate|||On Sep 12, 9:17 am, Craig <csomb...@gmail.com> wrote:
> [original post quoted in full; snipped]
you can truncate the table once the data is moved out.|||On Sep 11, 9:17 pm, SB <othell...@yahoo.com> wrote:
> [original post quoted in full; snipped]
> you can truncate the table once the data is moved out.
I only need to delete 7 million of the 33 million rows ... so I need a
good way to do that.
Craig|||On Sep 12, 11:27 am, Craig <csomb...@gmail.com> wrote:
> [earlier posts snipped]
> I will only need to delete 7 m rows of the 33 m rows ... so I need a
> good way to do that.
> Craig
I need to look at the delete statement. How long does it take now?|||Do not forget to change your Recovery Model temporarily to "Simple Recovery
Model". And before doing this, back up your Transaction Log, otherwise the log
chain will be broken and you will not be able to restore your database to
the point where you changed your recovery model if you encounter a problem.
After your delete operation, change your Recovery Model back to FULL.
(After changing your recovery model to FULL, back up your transaction log
again to prevent breaking the log chain.)
The aim of changing your recovery model is to keep this 7-million-row
deletion from filling up the transaction log file and blowing it up. If you
keep the FULL recovery model during this operation (and you are already
short of free space on your disks), all of the deletion would be logged,
which would probably raise an "out of space" error and halt the delete.
Here's a link where you can find more info about Recovery Models:
http://msdn2.microsoft.com/en-us/library/ms366344.aspx
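The sequence described above could be sketched roughly like this (a sketch only; YourDb and the backup paths are placeholders, not names from this thread):

```sql
-- Back up the log first, so the existing log chain can still be restored.
BACKUP LOG YourDb TO DISK = 'D:\Backup\YourDb_log_before.trn';

-- Switch to simple recovery; the log is now truncated at each checkpoint.
ALTER DATABASE YourDb SET RECOVERY SIMPLE;

-- ... run the big delete here ...

-- Switch back to full recovery when the delete is done.
ALTER DATABASE YourDb SET RECOVERY FULL;

-- A full (or differential) backup restarts the log chain;
-- after that, log backups are meaningful again.
BACKUP DATABASE YourDb TO DISK = 'D:\Backup\YourDb_full_after.bak';
BACKUP LOG YourDb TO DISK = 'D:\Backup\YourDb_log_after.trn';
```

Note that after switching back to FULL, a plain log backup will fail until a full or differential backup has been taken, so the database backup in the sketch is not optional.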
--
Ekrem Önsoy
"Craig" <csomberg@gmail.com> wrote in message
news:1189574876.579528.58280@r29g2000hsg.googlegroups.com...
> [earlier posts snipped]
> I will only need to delete 7 m rows of the 33 m rows ... so I need a
> good way to do that.
> Craig|||On Tue, 11 Sep 2007 22:27:56 -0700, Craig wrote:
>[earlier posts snipped]
>I will only need to delete 7 m rows of the 33 m rows ... so I need a
>good way to do that.
>Craig
Hi Craig,
A common technique is to split the delete into batches, like this:
DECLARE @rc int;
SET @rc = 1;
WHILE @rc <> 0
BEGIN
    DELETE TOP (100000)
    FROM YourTable
    WHERE whatever has to be deleted;
    SET @rc = @@ROWCOUNT;
END;
In SQL Server 2000, DELETE TOP(..) is not supported - instead, use SET
ROWCOUNT 100000 in the beginning and SET ROWCOUNT 0 at the end of the
script. Also, for all versions, you might need to experiment to find the
ideal batch size.
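On SQL Server 2000, the same loop might look like this (a sketch; YourTable and the predicate are placeholders):

```sql
-- SQL Server 2000: no DELETE TOP(), so cap each batch with SET ROWCOUNT.
SET ROWCOUNT 100000;

DECLARE @rc int;
SET @rc = 1;
WHILE @rc <> 0
BEGIN
    DELETE FROM YourTable
    WHERE SomeColumn = 'some filter';  -- placeholder predicate
    SET @rc = @@ROWCOUNT;
END

-- Reset, or every later statement in the session stays capped at 100000 rows.
SET ROWCOUNT 0;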
IIf your recovery model is full, either switch temporarily to simple (as
suggested by Ekrem), or add a BACKUP LOG command inside the WHILE loop.
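If you stay in full recovery, the per-batch log backup might be sketched like this (the table, predicate, database name, and backup path are all placeholders):

```sql
DECLARE @rc int;
SET @rc = 1;
WHILE @rc <> 0
BEGIN
    DELETE TOP (100000)
    FROM YourTable
    WHERE SomeColumn = 'some filter';  -- placeholder predicate
    SET @rc = @@ROWCOUNT;

    -- Release log space batch by batch instead of letting it pile up.
    BACKUP LOG YourDb TO DISK = 'D:\Backup\YourDb_log.trn';
END;
```

Since disk space is the whole problem here, those log backups would have to go to a drive or share other than the one that is filling up.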
--
Hugo Kornelis, SQL Server MVP
My SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis|||> IIf your recovery model is full, either switch temporarily to simple (as
> suggested by Ekrem), or add a BACKUP LOG command inside the WHILE loop.
Now, the above doesn't jive in a SQL Server forum. I think you meant:
CASE WHEN recovery model is full THEN switch temporarily to simple ...
(Sorry, I couldn't resist. Just playing a bad joke with Hugo, doesn't have anything to do with your
problem, Craig...)
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"Hugo Kornelis" <hugo@perFact.REMOVETHIS.info.INVALID> wrote in message
news:60tge3pj8h8jk3d3kohjahp2gh6s345595@4ax.com...
> [Hugo's post quoted in full; snipped]|||On Thu, 13 Sep 2007 09:52:56 +0200, Tibor Karaszi wrote:
>> IIf your recovery model is full, either switch temporarily to simple (as
>> suggested by Ekrem), or add a BACKUP LOG command inside the WHILE loop.
>Now, above doesn't jive in a SQL Server forum. I think you meant:
>CASE WHEN recovery model is full THEN switch temporarily to simple ...
Hey, Tibor,
You are not actually using CASE as a *statement*, now are you? Off you
go to the nearest C++ newsgroup!!
>(Sorry, I couldn't resist. Just playing a bad joke with Hugo, doesn't have anything to do with your
>problem, Craig...)
And neither could I ... :-)
--
Hugo Kornelis, SQL Server MVP
My SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis|||> You are not actually using CASE as a *statement*, now are you? Off you
> go to the nearest C++ newsgroup!!
I guess my wishes for a closer integration with ANSI SQL PSM syntax influence me... :-)
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"Hugo Kornelis" <hugo@perFact.REMOVETHIS.info.INVALID> wrote in message
news:rhije3lgrsc2476uhtfig6ue17iiv3b1s8@4ax.com...
> [Hugo's post quoted in full; snipped]