Showing posts with label running. Show all posts

Tuesday, March 27, 2012

deleting old backups/trn files.

I have a maintenance plan running on my database. When creating it, I told the wizard to "remove files older than 4 weeks", yet it doesn't seem to be doing so: on checking this morning, disk space was getting low due to over 300 GB of backups and .trn files dating back to September.

Has anyone had problems with maintenance plans not cleaning up when told?

No, because I do not use them; I create my own jobs.|||Maintenance plans are nothing but wizards that create SQL Agent jobs. If the job is modified, the maintenance plan has no clue that anything has changed, and if the maintenance plan is re-edited it will overwrite any other changes made to the jobs.
Maintenance plans are top candidates for the most confusing and misleading functionality within SQL Server. I avoid them, except as a means of defining groups of databases for administrative purposes.
Open up the job in SQL Agent and check the code that is being run. It should be something like "EXEC xp_sqlmaint '-PlanID 02A52657-D546-11D1-9D8A-00A0C9054212...".
Post it here.|||Here you go...

EXECUTE master.dbo.xp_sqlmaint N'-PlanID 2A998DBD-3F5D-4685-ACCA-70354638C8C5 -Rpt "D:\mssql\REPORTS\live DB Maintenance4.txt" -WriteHistory -VrfyBackup -BkUpOnlyIfClean -CkDBRepair -BkUpMedia DISK -BkUpDB "D:\mssql\BACKUP" -DelBkUps 4WEEKS -CrBkSubDir -BkExt "BAK"'|||There is an additional parameter for specifying deletion of reports, and for some reason this parameter is left out of the sqlmaint documentation in Books Online. I am thinking it is "-DelRpts 4WEEKS", but I am not sure. I will look it up for you once I get back to my office.
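For reference, a hedged sketch of the same plan command with a report-cleanup switch added. The switch name here is an assumption (the sqlmaint utility of that era documents a -DelTxtRpt &lt;time&gt; option for purging old text reports); verify it against your version's documentation before use:

```sql
-- Same plan command as above, plus a report-cleanup switch.
-- ASSUMPTION: -DelTxtRpt is the correct switch name; confirm against
-- your version's sqlmaint documentation before relying on it.
EXECUTE master.dbo.xp_sqlmaint N'-PlanID 2A998DBD-3F5D-4685-ACCA-70354638C8C5 -Rpt "D:\mssql\REPORTS\live DB Maintenance4.txt" -DelTxtRpt 4WEEKS -WriteHistory -VrfyBackup -BkUpOnlyIfClean -CkDBRepair -BkUpMedia DISK -BkUpDB "D:\mssql\BACKUP" -DelBkUps 4WEEKS -CrBkSubDir -BkExt "BAK"';
```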

Wednesday, March 21, 2012

deleting database backup history

I started running sp_delete_backuphistory before leaving work yesterday and
now, 17 hours later, it is still running. I know there were over 270,000
records in the backupset table, so I knew it would take a while to run, but
this seems too long. Should I cancel the query execution and do it in smaller
chunks, or let it go now that it's been running this long?
I posted this same post earlier today but it looked like it didn't post, so
my apologies if this is a duplicate.|||This might help:
http://weblogs.sqlteam.com/geoffh/archive/2008/01/21/MSDB-Performance-Tuning.aspx
--
Geoff N. Hiten
Senior SQL Infrastructure Consultant
Microsoft SQL Server MVP
"AkAlan" <AkAlan@.discussions.microsoft.com> wrote in message
news:DA5E5F91-00DB-4C2B-8041-92C2D86D07CF@.microsoft.com...
>I started running sp_delete_backuphistory before leaving work yesterday and
> now 17 hours later it is still running. I know there where over 270,000
> records in the backupset table so I knew it would take a while to run but
> this seems too long. Should I cancel the query execution and do it in
> smaller
> chunks, or let it go now that it's been running this long.
> I posted this same post earlier today but it looked like it didn't post
> so,
> my appologies if this is a duplicate.|||Geoff, Thanks very much. I was looking at this link yesterday but didn't do
the indexing first. I have stopped the query now and the backupset table is
down to 167000 records from over 270000. Next question is approximately how
long will it take to run the indexing script? I'm thinking I would like to
run the script while I'm at work then run the sp_delete_backuphistory over
the weekend. Thanks again, this problem has been a pain for quite a while.
"Geoff N. Hiten" wrote:
> This might help:
> http://weblogs.sqlteam.com/geoffh/archive/2008/01/21/MSDB-Performance-Tuning.aspx
> --
> Geoff N. Hiten
> Senior SQL Infrastructure Consultant
> Microsoft SQL Server MVP
>
> "AkAlan" <AkAlan@.discussions.microsoft.com> wrote in message
> news:DA5E5F91-00DB-4C2B-8041-92C2D86D07CF@.microsoft.com...
> >I started running sp_delete_backuphistory before leaving work yesterday and
> > now 17 hours later it is still running. I know there where over 270,000
> > records in the backupset table so I knew it would take a while to run but
> > this seems too long. Should I cancel the query execution and do it in
> > smaller
> > chunks, or let it go now that it's been running this long.
> >
> > I posted this same post earlier today but it looked like it didn't post
> > so,
> > my appologies if this is a duplicate.
>|||Unless you are on a very slow system, indexing should be just a few minutes,
but it will block backups during that time. And once you create the
indexes, the delete will go VERY fast.
Geoff N. Hiten
Senior SQL Infrastructure Consultant
Microsoft SQL Server MVP
"AkAlan" <AkAlan@.discussions.microsoft.com> wrote in message
news:56E5BD88-7B1A-4ABF-A957-C25D780FB01E@.microsoft.com...
> Geoff, Thanks very much. I was looking at this link yesterday but didn't
> do
> the indexing first. I have stopped the query now and the backupset table
> is
> down to 167000 records from over 270000. Next question is approximately
> how
> long will it take to run the indexing script? I'm thinking I would like to
> run the script while I'm at work then run the sp_delete_backuphistory over
> the weekend. Thanks again, this problem has been a pain for quite a while.
> "Geoff N. Hiten" wrote:
>> This might help:
>> http://weblogs.sqlteam.com/geoffh/archive/2008/01/21/MSDB-Performance-Tuning.aspx
>> --
>> Geoff N. Hiten
>> Senior SQL Infrastructure Consultant
>> Microsoft SQL Server MVP
>>
>> "AkAlan" <AkAlan@.discussions.microsoft.com> wrote in message
>> news:DA5E5F91-00DB-4C2B-8041-92C2D86D07CF@.microsoft.com...
>> >I started running sp_delete_backuphistory before leaving work yesterday
>> >and
>> > now 17 hours later it is still running. I know there where over 270,000
>> > records in the backupset table so I knew it would take a while to run
>> > but
>> > this seems too long. Should I cancel the query execution and do it in
>> > smaller
>> > chunks, or let it go now that it's been running this long.
>> >
>> > I posted this same post earlier today but it looked like it didn't post
>> > so,
>> > my appologies if this is a duplicate.
>>|||THanks again Geoff! The indexing only took a minute and now I'm using the
nibbler code from a responce the your post in the link and it's taking about
2 minutes to purge 10 days worth.
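The approach from the linked article and the "nibbler" loop can be sketched as follows. The index names and batch size are illustrative assumptions, not the exact script from the post:

```sql
-- Illustrative sketch (ASSUMED index names and slice size, not the
-- linked article's exact script).
USE msdb;
GO
-- Helper indexes on the history tables that the purge procedure joins on.
CREATE INDEX IX_backupset_media_set_id       ON dbo.backupset (media_set_id);
CREATE INDEX IX_restorehistory_backup_set_id ON dbo.restorehistory (backup_set_id);
GO
-- Nibble away at the history ten days at a time instead of one huge call.
DECLARE @oldest datetime, @cutoff datetime;
SELECT @oldest = MIN(backup_start_date) FROM dbo.backupset;
WHILE @oldest < DATEADD(wk, -4, GETDATE())      -- keep the last 4 weeks
BEGIN
    SET @cutoff = DATEADD(dd, 10, @oldest);     -- purge 10 days per pass
    EXEC dbo.sp_delete_backuphistory @oldest_date = @cutoff;
    SELECT @oldest = MIN(backup_start_date) FROM dbo.backupset;
END;
```

Each pass keeps transactions short, so any blocking of concurrent backups stays brief.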
"Geoff N. Hiten" wrote:
> Unless you are on a very slow system, indexing should be just a few minutes,
> but it will block backups during that time. And once you create the
> indexes, the delete will go VERY fast.
>
> --
> Geoff N. Hiten
> Senior SQL Infrastructure Consultant
> Microsoft SQL Server MVP
>
> "AkAlan" <AkAlan@.discussions.microsoft.com> wrote in message
> news:56E5BD88-7B1A-4ABF-A957-C25D780FB01E@.microsoft.com...
> > Geoff, Thanks very much. I was looking at this link yesterday but didn't
> > do
> > the indexing first. I have stopped the query now and the backupset table
> > is
> > down to 167000 records from over 270000. Next question is approximately
> > how
> > long will it take to run the indexing script? I'm thinking I would like to
> > run the script while I'm at work then run the sp_delete_backuphistory over
> > the weekend. Thanks again, this problem has been a pain for quite a while.
> >
> > "Geoff N. Hiten" wrote:
> >
> >> This might help:
> >> http://weblogs.sqlteam.com/geoffh/archive/2008/01/21/MSDB-Performance-Tuning.aspx
> >>
> >> --
> >> Geoff N. Hiten
> >> Senior SQL Infrastructure Consultant
> >> Microsoft SQL Server MVP
> >>
> >>
> >> "AkAlan" <AkAlan@.discussions.microsoft.com> wrote in message
> >> news:DA5E5F91-00DB-4C2B-8041-92C2D86D07CF@.microsoft.com...
> >> >I started running sp_delete_backuphistory before leaving work yesterday
> >> >and
> >> > now 17 hours later it is still running. I know there where over 270,000
> >> > records in the backupset table so I knew it would take a while to run
> >> > but
> >> > this seems too long. Should I cancel the query execution and do it in
> >> > smaller
> >> > chunks, or let it go now that it's been running this long.
> >> >
> >> > I posted this same post earlier today but it looked like it didn't post
> >> > so,
> >> > my appologies if this is a duplicate.
> >>
> >>
>sql


Deleting data with least impact

I have a large database - 33 million rows - VERY WIDE table and am
running out of disk space - lovely ...
I have to move 7 million records off the table - no biggie - I'll just
dts the records to another server.
My problem is deleting the remaining data to make room on the server
without impacting/logging too much.
Any help would be appreciated.
Thanks,
Craig|||truncate|||You can truncate the table once the data is moved out.|||I will only need to delete 7 million of the 33 million rows ... so I need a
good way to do that.
Craig|||I need to look at the delete statement. How long does it take now?|||Do not forget to change your recovery model temporarily to "Simple". Before
doing this, back up your transaction log; otherwise the log chain will be
broken and you will not be able to restore your database to the point where
you changed the recovery model if you encounter a problem. After the delete
operation, change the recovery model back to FULL and take a full (or
differential) backup so that log backups can resume.
The aim of changing the recovery model is to avoid logging all 7 million
deletions to the transaction log file and blowing it up. Because you are
already short of free disk space, keeping the FULL recovery model would log
the entire delete operation and probably raise an "out of space" error,
halting the deletion.
Here's a link where you can obtain more info about Recovery Models:
http://msdn2.microsoft.com/en-us/library/ms366344.aspx
--
Ekrem Önsoy
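Ekrem's recovery-model switch can be sketched like this (the database name and backup paths are hypothetical stand-ins):

```sql
-- Preserve the log chain, then switch to SIMPLE so the big delete
-- does not balloon the transaction log. Names/paths are ASSUMED.
BACKUP LOG BigDb TO DISK = 'D:\backup\BigDb_log.trn';
ALTER DATABASE BigDb SET RECOVERY SIMPLE;

-- ... run the batched DELETE here ...

-- Switch back, then take a full backup so log backups can resume.
ALTER DATABASE BigDb SET RECOVERY FULL;
BACKUP DATABASE BigDb TO DISK = 'D:\backup\BigDb_full.bak';
```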
"Craig" <csomberg@.gmail.com> wrote in message
news:1189574876.579528.58280@.r29g2000hsg.googlegroups.com...
> On Sep 11, 9:17 pm, SB <othell...@.yahoo.com> wrote:
>> On Sep 12, 9:17 am, Craig <csomb...@.gmail.com> wrote:
>> > I have a large database - 33 million rows - VERY WIDE table and am
>> > running out of disk space - lovely ...
>> > I have to move 7 million records off the table - no biggie - I'll just
>> > dts the records to another server.
>> > My problem is deleting the remaining data to make room on the server
>> > without impacting/logging too much.
>> > Any help would be appreciated.
>> > Thanks,
>> > Craig
>> you can truncate the table one the data is moved out.
>
> I will only need to delete 7 m rows of the 33 m rows ... so I need a
> good way to do that.
> Craig
>|||On Tue, 11 Sep 2007 22:27:56 -0700, Craig wrote:
>On Sep 11, 9:17 pm, SB <othell...@.yahoo.com> wrote:
>> On Sep 12, 9:17 am, Craig <csomb...@.gmail.com> wrote:
>> > I have a large database - 33 million rows - VERY WIDE table and am
>> > running out of disk space - lovely ...
>> > I have to move 7 million records off the table - no biggie - I'll just
>> > dts the records to another server.
>> > My problem is deleting the remaining data to make room on the server
>> > without impacting/logging too much.
>> > Any help would be appreciated.
>> > Thanks,
>> > Craig
>> you can truncate the table one the data is moved out.
>
>I will only need to delete 7 m rows of the 33 m rows ... so I need a
>good way to do that.
>Craig
Hi Craig,
A common technique is to split the delete into batches, like this:
DECLARE @rc int;
SET @rc = 1;
WHILE @rc <> 0
BEGIN;
DELETE TOP(100000)
FROM YourTable
WHERE whatever has to be deleted;
SET @rc = @@ROWCOUNT;
END;
In SQL Server 2000, DELETE TOP(..) is not supported - instead, use SET
ROWCOUNT 100000 at the beginning and SET ROWCOUNT 0 at the end of the
script. Also, for all versions, you might need to experiment to find the
ideal batch size.
IIf your recovery model is full, either switch temporarily to simple (as
suggested by Ekrem), or add a BACKUP LOG command inside the WHILE loop.
--
Hugo Kornelis, SQL Server MVP
My SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis|||> IIf your recovery model is full, either switch temporarily to simple (as
> suggested by Ekrem), or add a BACKUP LOG command inside the WHILE loop.
Now, above doesn't jive in a SQL Server forum. I think you meant:
CASE WHEN recovery model is full THEN switch temporarily to simple ...
(Sorry, I couldn't resist. Just playing a bad joke with Hugo, doesn't have anything to do with your
problem, Craig...)
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"Hugo Kornelis" <hugo@.perFact.REMOVETHIS.info.INVALID> wrote in message
news:60tge3pj8h8jk3d3kohjahp2gh6s345595@.4ax.com...
> On Tue, 11 Sep 2007 22:27:56 -0700, Craig wrote:
>>On Sep 11, 9:17 pm, SB <othell...@.yahoo.com> wrote:
>> On Sep 12, 9:17 am, Craig <csomb...@.gmail.com> wrote:
>> > I have a large database - 33 million rows - VERY WIDE table and am
>> > running out of disk space - lovely ...
>> > I have to move 7 million records off the table - no biggie - I'll just
>> > dts the records to another server.
>> > My problem is deleting the remaining data to make room on the server
>> > without impacting/logging too much.
>> > Any help would be appreciated.
>> > Thanks,
>> > Craig
>> you can truncate the table one the data is moved out.
>>
>>I will only need to delete 7 m rows of the 33 m rows ... so I need a
>>good way to do that.
>>Craig
> Hi Craig,
> A common technique is to split the delete in batches, like this:
> DECLARE @.rc int;
> SET @.rc = 1;
> WHILE @.rc <> 0
> BEGIN;
> DELETE TOP(100000)
> FROM YourTable
> WHERE whatever has to be deleted;
> SET @.rc = @.@.ROWCOUNT;
> END;
> In SQL Server 2000, DELETE TOP(..) is not supported - instead, use SET
> ROWCOUNT 100000 in the beginning and SET ROWCOUNT 0 at the end of the
> script. Also, for all versions, you might need to experiment to find the
> ideal batch size.
> IIf your recovery model is full, either switch temporarily to simple (as
> sugggested by Ekrem), or add a BACKUP LOG command inside the WHILE loop.
> --
> Hugo Kornelis, SQL Server MVP
> My SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis|||On Thu, 13 Sep 2007 09:52:56 +0200, Tibor Karaszi wrote:
>> IIf your recovery model is full, either switch temporarily to simple (as
>> suggested by Ekrem), or add a BACKUP LOG command inside the WHILE loop.
>Now, above doesn't jive in a SQL Server forum. I think you meant:
>CASE WHEN recovery model is full THEN switch temporarily to simple ...
Hey, Tibor,
You are not actually using CASE as a *statement*, now are you? Off you
go to the nearest C++ newsgroup!!
>(Sorry, I couldn't resist. Just playing a bad joke with Hugo, doesn't have anything to do with your
>problem, Craig...)
And neither could I ... :-)
--
Hugo Kornelis, SQL Server MVP
My SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis|||> You are not actually using CASE as a *statement*, now are you? Off you
> go to the nearest C++ newsgroup!!
I guess my wishes for a closer integration with ANSI SQL PSM syntax influences me... :-)
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"Hugo Kornelis" <hugo@.perFact.REMOVETHIS.info.INVALID> wrote in message
news:rhije3lgrsc2476uhtfig6ue17iiv3b1s8@.4ax.com...
> On Thu, 13 Sep 2007 09:52:56 +0200, Tibor Karaszi wrote:
>> IIf your recovery model is full, either switch temporarily to simple (as
>> sugggested by Ekrem), or add a BACKUP LOG command inside the WHILE loop.
>>Now, above doesn't jive in a SQL Server forum. I think you meant:
>>CASE WHEN recovery model is full THEN switch temporarily to simple ...
> Hey, Tibor,
> You are not actually using CASE as a *statement*, now are you? Off you
> go to the nearest C++ newsgroup!!
>>(Sorry, I couldn't resist. Just playing a bad joke with Hugo, doesn't have anything to do with
>>your
>>problem, Craig...)
> And neither could I ... :-)
> --
> Hugo Kornelis, SQL Server MVP
> My SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis

Deleting binn files

I'm running a copy of SQL-Server v7 on a win2k machine,
and I've run into the problem of files randomly
disappearing from the \80\tools\binn directory. So far,
over 40 files have gone missing from the binn dir,
including all the files and dlls to run dtsrun and the
query analyzer. Has anyone ever seen this kind of
behaviour on their sql installation, or know what causes
it?
btw, I'm running sp3 on the system.
-Ian|||Oops, I accidentally made a mistake with my post... the
version of SQL that I'm running is actually SQL 2000.
Does that affect the nature of the problem?
I appreciate your help with this problem.
>--Original Message--
>\80\tools\binn is the directory for SQL 2000, not SQL 7.
>It is very unlikely that these files were deleted randomly,
>because if you were running these applications and tried to
>delete these DLLs, you would get an access violation.
>If you are running multiple instances, for example SQL 7 as
>the default and SQL 2000 as a second instance, then there is a
>possibility of some files being taken out of one of these
>directories (particularly common resources for both
>instances). This happens only when the installation
>process for the two instances was not done properly.
>Chinna.
|||I have more info on the actual files that were deleted...
I compared the contents of the binn directory to another
machine running a similar installation of SQL 2000, and
these are the files that appear to be missing:
colctrl.dll
dmcomscm.exe
dtsrun.exe
dtsruni.exe
dtswiz.exe
hhsetup.dll
isql.exe
itwiz.exe
odbccmpt.exe
psdcscm.dll
replmon.pmc
sqlbrows.cnt
sqlbrows.hlp
sqldmo80.cnt
sqldmo80.hlp
sqldts80.cnt
sqldts80.hlp
sqlftwiz.exe
sqllex.dll
sqlmmmem.chm
sqlmmcii.chm
sqlns80.cnt
sqlns80.hlp
uncol.exe
wiztrace.exe
Once again, thanx for your help on this...
-Ian

deleting an instance of SQL Server

We have a server with four instances of SQL Server 2000 running on
Win2003. One of the instances (not the default) was a BizTalk
development server. Well, I want to delete that old SQL Server instance and
install a new instance (I want to have a new name for the installed
instance of SQL Server).
Question - will I have to stop and start the server to get this done, or can
I delete that old database instance and install the new database instance?
rich|||Hi Rich
"Rich" wrote:
Have you seen http://msdn2.microsoft.com/en-us/library/ms144255.aspx? I
don't think you will need to reboot, but you can install the new instance and
just stop the service for the old instance. Then remove that instance when
you have a maintenance window.
John
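John's "stop the service now, remove it later" step can be sketched as follows; the instance name is hypothetical, a SQL 2000 named instance runs as the service MSSQL$InstanceName, and xp_cmdshell must be available:

```sql
-- Stop the old instance's service; remove the instance itself later
-- through Add/Remove Programs during a maintenance window.
-- ASSUMPTION: the old BizTalk instance is named BIZTALKDEV.
EXEC master.dbo.xp_cmdshell 'net stop "MSSQL$BIZTALKDEV"';
```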


Monday, March 19, 2012

Deleting all but top record.

Hey Guys,

I have Performance Monitor running and storing network usage to my MSSQL database, and this is done a few times a minute. I have a page that then shows how much of my bandwidth is being used. As you can gather, the database quickly starts filling up with hundreds of records, so I could do with a script that deletes these records.
I can't simply delete all the records, because that would cause my webpage to fail, so I need a way to delete all records apart from the latest one.
Wondering if anyone would know how I could do this?|||delete from tableRecords where recordid < (select max(recordid) from tablerecords)

Nick|||

Thank you, that works great.
I've just noticed a potential problem, however. I understand that the RecordID can only go so high before it gives errors. The thing is, PerfMon has only been running a few hours and has already produced 1800+ records. With your script I can keep just one record in the table, but the RecordID will get higher and higher. I'm wondering what will happen if I let this run for a week or so; eventually it will stop working.
Anyone got any ideas?

|||Don't use an auto-identity field.
insert into table
select max(recordid) + 1, [rest of fields] from table
Since you will only have one record in there at a time, this shouldn't hurt your performance.
Nick
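Nick's two suggestions can be combined into a small self-contained sketch; the table and column names are hypothetical stand-ins for the poster's schema:

```sql
-- Hypothetical bandwidth log, without an IDENTITY column.
CREATE TABLE tableRecords (
    recordid   int      NOT NULL PRIMARY KEY,
    sampled_at datetime NOT NULL DEFAULT GETDATE(),
    bytes_used bigint   NOT NULL
);

-- Insert with a computed key instead of auto-identity.
INSERT INTO tableRecords (recordid, bytes_used)
SELECT COALESCE(MAX(recordid), 0) + 1, 12345 FROM tableRecords;

-- Keep only the most recent sample.
DELETE FROM tableRecords
WHERE recordid < (SELECT MAX(recordid) FROM tableRecords);
```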

deleting a replicated restore

I am running SQL 2K and Win 2K. I restored a backup, to a
different name, of a database that was being replicated when it was
backed up. After the restore, I cannot detach the
restored database, because SQL thinks it is being replicated. How
can I fix/detach this DB?
Larry,
have a look at sp_removedbreplication 'dbname' and sp_replicationdboption in
BOL.
Regards,
Paul Ibison
|||If sp_removedbreplication doesn't work, you can update the category column in
sysdatabases to 0.
Thanks
Gopal
|||If working inside the database, you may also have to manually set the
replinfo column in sysobjects to 0.
Mike
Principal Mentor
Solid Quality Learning
"More than just Training"
SQL Server MVP
http://www.solidqualitylearning.com
http://www.mssqlserver.com
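The three suggestions in this thread can be sketched together; the database name is hypothetical, and the direct system-table updates require 'allow updates' to be enabled on SQL 2000:

```sql
-- 1) Supported route first.
EXEC sp_removedbreplication 'restored_db';

-- 2) If that fails, clear the replication flag on the database
--    (SQL 2000: needs sp_configure 'allow updates', 1 + RECONFIGURE).
UPDATE master.dbo.sysdatabases SET category = 0 WHERE name = 'restored_db';

-- 3) And, inside the database, reset per-object replication info.
USE restored_db;
UPDATE dbo.sysobjects SET replinfo = 0 WHERE replinfo <> 0;
```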

Friday, March 9, 2012

deleted my database log file, cant reattach

Hello,

A utility I was running caused our database's log file to run out of
hard drive space.

So, I detached the database, deleted the log file, and tried to
reattach.

SQL Server fails to reattach because it can't find the log file. (Why
it can't just create a new default one is beyond me... but...)

EXEC sp_attach_single_file_db @dbname = 'sgvault',
@physname = 'c:\Program Files\Microsoft SQL
Server\MSSQL\Data\sgvault.mdf'

Could not open new database 'sgvault'. CREATE DATABASE is aborted.
Device activation error. The physical file name 'C:\Program
Files\Microsoft SQL Server\MSSQL\data\sgvault_log.LDF' may be
incorrect.

Do you know how I can fix this dilemma? Thanks!|||josepinchero@.yahoo.com (Jose) wrote in message news:<b9dff992.0408251727.2fe3e679@.posting.google.com>...

Assuming that you detached the database with sp_detach_db, then this
should work. If not, and assuming that you have the correct path to
the .mdf in your command, as well as full permissions in that folder,
then you will probably have to restore from backup. You can then
shrink the log file as described here:

http://support.microsoft.com/defaul...8&Product=sql2k

Simon|||[posted and mailed]

Jose (josepinchero@.yahoo.com) writes:
> A utility I was running caused our database's log file to run out of
> hard drive.
> So, I detached the database, deleted the log file, and tried to
> reattach.

That was a very, very bad thing to do. Never, never delete log files
(unless you really want to get rid of that database). And if you really
must, first make sure you have copied the file somewhere else.

I leave this as a memento not only for you, but for anyone who might
see this thread on Google in the future.

> SQL server fails to reattach because it cant find the log file. (Why
> it cant just create a new default one is beyond me...but...)
> EXEC sp_attach_single_file_db @.dbname = 'sgvault',
> @.physname = 'c:\Program Files\Microsoft SQL
> Server\MSSQL\Data\sgvault.mdf''
> Could not open new database 'sgvault'. CREATE DATABASE is aborted.
> Device activation error. The physical file name 'C:\Program
> Files\Microsoft SQL Server\MSSQL\data\sgvault_log.LDF' may be
> incorrect.

I tried this exercise (with a database I could afford to waste), and I
got this message too, but the next message said that it was creating
an LDF for me.

It may be that your database was not shut down cleanly, for instance
because you panicked and killed SQL Server before you detached.

> Do you know how I can fix this dilemma? Thanks!

If you have a good backup, restore it. If not, well, you may be able
to get to that data - or you may not.

I know of a way to make the database accessible from SQL Server, in so
far that you can say "use db". But depending on what that application
was doing, your database may be more or less corrupt. This is because
you may get the database back mid-transaction, so that page
linkages may be bad. And even if DBCC comes out clean, your own data
may be inconsistent.

I am not going to post the steps to bring the database back online, because
they are far too dangerous. I would strongly recommend that you open a
case with Microsoft Support. No, that is not going to be cheap, but you
need to weigh it against the cost of losing the data altogether.

If you absolutely want to fight this alone, I can send the instructions
by mail.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

Wednesday, March 7, 2012

deleted AD

I am not a DBA and may have a problem. I am running an
app that sits on top of the sql desktop engine. I am
using AD integrated security. My problem is that one of
the domain admins decided to reformat and reinstall both
our AD servers.(#$%#) There is a new domain, but there is
no longer an AD server for the account that "owns" the
sql database.
I am still logged in to the server with the account that
owns the database. I have not logged out since the AD was
deleted for fear that my database will no longer be
accessible. As far as I can see, the admin tools
available to me for the desktop engine don't have the
ability to change the owner of the database.
What must I do to ensure that I can access this database?
You can change the database owner with sp_changedbowner. Run this using the
command-line OSQL utility like the example below:
OSQL -Q "EXEC sp_changedbowner 'sa'" -d MyDatabase -E -S MyServer
By default, members of the local Administrators group have sysadmin access to SQL
Server via the 'BUILTIN\Administrators' login. This will provide sysadmin
access to your SQL Server instance as long as you can access the machine
with a Windows account that is a member of the local Administrators group.
--
Hope this helps.
Dan Guzman
SQL Server MVP
"hhtpnoc" <anonymous@.discussions.microsoft.com> wrote in message
news:413301c3e42f$29ac0100$a301280a@.phx.gbl...
> I am not a DBA and may have a problem. I am running an
> app that sits on top of the sql desktop engine. I am
> using AD integrated security. My problem is that one of
> the domain admins decided to reformat and reinstall both
> our AD servers.(#$%#) There is a new domain, but there is
> no longer an AD server for the account that "owns" the
> sql database.
> I am still logged in to the server with the account that
> owns the database. I have not logged out since the AD was
> deleted for fear that my database will no longer be
> accessible. As far as I can see, the admin tools
> available to me for the desktop engine don't have the
> ability to change the owner of the database.
> What must I do to ensure that I can access this database?
>


Saturday, February 25, 2012

Delete takes extremely long time.

Guys, I'm running MSSQL 2000 (latest SP) on a PIII 1GHz, 4GB RAM, SCSI, and all good things, but...

...when I execute this simple query to delete 8,000 records out of 18,000 total in the table, it takes 11 seconds:

delete from tempViewResults where opportunity_id = '016158'

I do have an index on opportunity_id.
Any ideas why? Is this normal?

Any help is greatly appreciated. Please, anybody?|||In this case, the index is not really helping you so much as hurting you. In order for the optimizer to consider using the index, you have to be going after <10% of the table. How many other indexes are there on the table? Also check for triggers on the table that may be firing for deletes.|||Also consider creating that "temp" table in tempdb, which is faster.|||Two more cents:

Cent #1: Your delete statement will take longer if there are cascading relationships set up with child tables.

Cent #2: I'm not sure if using tempdb would be faster, because it would still be writing data to disk, but if you can get away with using a table variable, which uses memory, you might be able to avoid disk writes altogether.

blindman|||Using a #temp table is faster if your database is in the FULL or BULK_LOGGED recovery model.

Of course, if your database is in the SIMPLE recovery model you won't get any speed improvement|||Manowar:

What makes the temptable faster than a stored table? I couldn't find anything in Books Online regarding this. I'd have thought that querying a stored table would enable the optimizer to take advantage of cached query plans. Could you point me to where this is explained or documented in Books Online or Microsoft's support site?

blindman|||I think there's nothing about it in BOL or MSDN. I found a note in Kalen Delaney's world-famous book "Inside SQL Server 2000".

You can look in the section about temp (#) tables. Kalen says that since SQL Server only logs data in tempdb for rollback purposes (and not for database recovery), data modification operations can be up to four times faster than in a standard database.
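Circling back to the earlier suggestion to check for triggers: on SQL 2000, one way to list the triggers defined on the table is to query the system tables directly (version-specific, shown here for the table from the question):

```sql
-- list any triggers defined on the table being deleted from
SELECT name
FROM sysobjects
WHERE type = 'TR'
  AND parent_obj = OBJECT_ID('tempViewResults');
```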

Friday, February 24, 2012

DELETE Statement

Hi folks,
I am running the following statement on SQL Server 2000 Database:
DELETE from tablea
WHERE effective_date >=CONVERT(varchar(11),GETDATE()-3,121)
AND effective_date < CONVERT(varchar(10),GETDATE(),120)
tablea has effective_date defined as smalldatetime.
When i run the sql, i get the following error:
DELETE failed because the following SET options have incorrect settings:
'ARITHABORT'
The effective_date column is not a computed column. Any idea as to why the
statement should generate the error.
Can you post the table DDL, including indexes? Is tablea an indexed view or
does tablea have an index on a computed column?

> WHERE effective_date >=CONVERT(varchar(11),GETDATE()-3,121)
> AND effective_date < CONVERT(varchar(10),GETDATE(),120)
effective_date >= CONVERT(char(8), dateadd(day, -3, GETDATE()), 112)
effective_date < CONVERT(char(8), GETDATE(), 112)
AMB
"Ram" wrote:

> Hi folks,
> I am running the following statement on SQL Server 2000 Database:
> DELETE from tablea
> WHERE effective_date >=CONVERT(varchar(11),GETDATE()-3,121)
> AND effective_date < CONVERT(varchar(10),GETDATE(),120)
> tablea has effective_date defined as smalldatetime.
> When i run the sql, i get the following error:
> DELETE failed because the following SET options have incorrect settings:
> 'ARITHABORT'
> The effective_date column is not a computed column. Any idea as to why the
> statement should genrate the error.|||Is there a calculated column or an indexed view on the table you are
attempting to delete from?
Andrew J. Kelly SQL MVP
"Ram" <Ram@.discussions.microsoft.com> wrote in message
news:4EF71394-62B6-49C0-8BF6-3FA849F22770@.microsoft.com...
> Hi folks,
> I am running the following statement on SQL Server 2000 Database:
> DELETE from tablea
> WHERE effective_date >=CONVERT(varchar(11),GETDATE()-3,121)
> AND effective_date < CONVERT(varchar(10),GETDATE(),120)
> tablea has effective_date defined as smalldatetime.
> When i run the sql, i get the following error:
> DELETE failed because the following SET options have incorrect settings:
> 'ARITHABORT'
> The effective_date column is not a computed column. Any idea as to why the
> statement should genrate the error.|||Hi,
Here is the DDL:
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[tablea]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
drop table [dbo].[tablea]
GO
CREATE TABLE [dbo].[tablea] (
[row_id] [int] IDENTITY (1, 1) NOT NULL ,
[iex_id] [int] NOT NULL ,
[mu_id] [int] NOT NULL ,
[effective_date] [smalldatetime] NOT NULL ,
[start_time] [char] (4) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[end_time] [char] (4) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[update_date] [smalldatetime] NULL
) ON [PRIMARY]
GO
Indexes:
IDX_mu_id mu_id
IDX_mu_id_agent_id mu_id, iex_id
PK_tblASAgentSchedule iex_id, mu_id, effective_date
Ram
"Alejandro Mesa" wrote:
> Can you post the table DDL, including indexes?. Is tablea an indexed view
or
> does tablea have an index on a computed column?
>
> effective_date >= CONVERT(char(8), dateadd(day, -3, GETDATE()), 112)
> effective_date < CONVERT(char(8), GETDATE(), 112)
>
> AMB
> "Ram" wrote:
>|||I can not reproduce the error.
CREATE TABLE [dbo].[tablea] (
[row_id] [int] IDENTITY (1, 1) NOT NULL ,
[iex_id] [int] NOT NULL ,
[mu_id] [int] NOT NULL ,
[effective_date] [smalldatetime] NOT NULL ,
[start_time] [char] (4) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[end_time] [char] (4) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[update_date] [smalldatetime] NULL,
constraint PK_tblASAgentSchedule primary key (iex_id, mu_id, effective_date)
) ON [PRIMARY]
GO
DELETE from tablea
WHERE effective_date >=CONVERT(varchar(11),GETDATE()-3,121)
AND effective_date < CONVERT(varchar(10),GETDATE(),120)
go
drop table tablea
go
AMB
"Ram" wrote:
> Hi,
> Here is the DDL:
> if exists (select * from dbo.sysobjects where id =
> object_id(N'[dbo].[tablea]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
> drop table [dbo].[tablea]
> GO
> CREATE TABLE [dbo].[tablea] (
> [row_id] [int] IDENTITY (1, 1) NOT NULL ,
> [iex_id] [int] NOT NULL ,
> [mu_id] [int] NOT NULL ,
> [effective_date] [smalldatetime] NOT NULL ,
> [start_time] [char] (4) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [end_time] [char] (4) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
> [update_date] [smalldatetime] NULL
> ) ON [PRIMARY]
> GO
> Indexes:
> IDX_mu_id mu_id
> IDX_mu_id_agent_id mu_id, iex_id
> PK_tblASAgentSchedule iex_id, mu_id, effective_date
> Ram
> "Alejandro Mesa" wrote:
>|||The whole design looks wrong. A SQL programmer does not use IDENTITY
for keys, only for display in cursors. Times are not stored in CHAR(n),
but in temporal data types, which do not have to be CONVERT()ed in weird
ways. Audit information is not stored in the tables, but externally,
and is best done with log tools.
Having something go into effect is a status change, so where is the
status code? You have two identifiers that are not in this table, so
where is the DRI to the base tables that define them? My guess is that
it should look like this:
CREATE TABLE FoobarHistory
(iex_id INTEGER NOT NULL
REFERENCES IEX(iex_id),
mu_id INTEGER NOT NULL
REFERENCES MU(mu_id),
status_code INTEGER NOT NULL, -- let 0 be effective
start_time DATETIME DEFAULT CURRENT_TIMESTAMP NOT NULL,
end_time DATETIME, -- null means current
PRIMARY KEY (iex_id, mu_id));
Then you can write:
DELETE FROM Foobar
WHERE status = 0;|||Hi,
I dropped and recreated the index on the table.
The delete statement worked fine after that.
Thank you
"Andrew J. Kelly" wrote:

> Is there a calculated column or an indexed view on the table you are
> attempting to delete from?
> --
> Andrew J. Kelly SQL MVP
>
> "Ram" <Ram@.discussions.microsoft.com> wrote in message
> news:4EF71394-62B6-49C0-8BF6-3FA849F22770@.microsoft.com...
>
>
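For readers who hit the same ARITHABORT error: it is raised when a session modifies a table that participates in an indexed view, or has an index on a computed column, without the SET options Books Online requires for such operations. A hedged sketch of the session-level workaround (rebuilding the index, as Ram did, also works if the index was created under bad settings):

```sql
-- SET options required when modifying tables covered by indexed views
-- or indexes on computed columns
SET ARITHABORT ON;
SET QUOTED_IDENTIFIER ON;
SET ANSI_NULLS ON;
SET ANSI_PADDING ON;
SET ANSI_WARNINGS ON;
SET CONCAT_NULL_YIELDS_NULL ON;
SET NUMERIC_ROUNDABORT OFF;

DELETE FROM tablea
WHERE effective_date >= CONVERT(char(8), DATEADD(day, -3, GETDATE()), 112)
  AND effective_date <  CONVERT(char(8), GETDATE(), 112);
```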

Friday, February 17, 2012

Delete query in Linked server

Hi,
I have a pair of SQL Server 2005 (SQL Server 2005 - 9.00.3054.00 Build 3790:
Service Pack 2) running on Windows 2000. A linked server is established and
working fine for all types of queries except the following delete query.
DELETE A from ttdb1.sasixp.sasi.aprn7030 A
WHERE
not EXISTS ( SELECT stulink
FROM ttdb1.sasixp.sasi.astu7030 B
WHERE B.stulink = A.stulink )
But the same query works fine on the test server. The test SQL Server 2005
version is SQL Server 2005 - 9.00.3042.00 Build 3790: Service Pack 2). But
production server ver is SQL Server 2005 - 9.00.3054.00 Build 3790: Service
Pack 2. This is the only difference between test server env & production
env.
When I run this query got the following error:
OLE DB provider "SQLNCLI" for linked server "ttdb1" returned message
"Multiple-step OLE DB operation generated errors. Check each OLE DB status
value, if available. No work was done.".
Msg 7202, Level 11, State 2, Line 1
Could not find server 'TTDB1' in sys.servers. Verify that the correct server
name was specified. If necessary, execute the stored procedure
sp_addlinkedserver to add the server to sys.servers.
Any idea what causes this problem?
Thanks in advance for your help.
Venkat
Please read the post below; I had the same issue and got it fixed by changing
the servername.
http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=274270&SiteID=1
Regards,
Henry
"Venkat N" wrote:
> Hi,
> I have a pair of SQL Server 2005 (SQL Server 2005 - 9.00.3054.00 Build 3790:
> Service Pack 2) running on Windows 2K OS. Linked server is established and
> working fine for all type of queries except the following delete query.
> DELETE A from ttdb1.sasixp.sasi.aprn7030 A
> WHERE
> not EXISTS ( SELECT stulink
> FROM ttdb1.sasixp.sasi.astu7030 B
> WHERE B.stulink = A.stulink )
> But the same query is working fine in test server. Test SQL Server 2005
> version is SQL Server 2005 - 9.00.3042.00 Build 3790: Service Pack 2). But
> production server ver is SQL Server 2005 - 9.00.3054.00 Build 3790: Service
> Pack 2. This is the only difference between test server env & production
> env.
> When I run this query got the following error:
> OLE DB provider "SQLNCLI" for linked server "ttdb1" returned message
> "Multiple-step OLE DB operation generated errors. Check each OLE DB status
> value, if available. No work was done.".
> Msg 7202, Level 11, State 2, Line 1
> Could not find server 'TTDB1' in sys.servers. Verify that the correct server
> name was specified. If necessary, execute the stored procedure
> sp_addlinkedserver to add the server to sys.servers.
> Any idea what causes this problem?
> Thanks in advance for your help.
> Venkat
>
>
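A quick way to check the condition named in the error message, with a hypothetical rename fix sketched in comments (only relevant if the local server turns out to be registered under a stale name, e.g. after a machine rename):

```sql
-- see how the local (server_id = 0) and linked servers are registered
SELECT server_id, name
FROM sys.servers;

-- if the local entry shows a stale name:
-- EXEC sp_dropserver 'OLDNAME';
-- EXEC sp_addserver 'NEWNAME', 'local';
-- then restart the SQL Server service
```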

delete output into <xml column of audit table>

Currently running Sql Server 2005
Is it possible to issue the delete command and capture the affected rows as
xml types that will be stored in an audit table with an xml column?
Something along the lines of:
delete from source_table
output
(deleted.*
into audit_table (xml_audit_column)
for xml auto)
where source_table.column = @delete_value
You cannot do it in a single statement because the OUTPUT clause does not
support subqueries. One way is to get the deleted rows into a temp table and
then transfer as XML to your audit table.
Here is one example of implementing this in a trigger:
http://blogs.sqlservercentral.com/prosqlxml/archive/2008/01/26/create-a-dynamic-logging-trigger-with-xml.aspx
HTH,
Plamen Ratchev
http://www.SQLStudio.com
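The two-step approach Plamen describes might be sketched like this; the table and column names come from the question, and the captured column list and parameter type are hypothetical:

```sql
DECLARE @delete_value int;            -- type assumed for illustration

-- 1) capture the deleted rows via OUTPUT into a table variable
DECLARE @deleted TABLE (col1 int, col2 varchar(50));   -- hypothetical columns

DELETE FROM source_table
OUTPUT deleted.col1, deleted.col2 INTO @deleted
WHERE [column] = @delete_value;

-- 2) convert the captured rows to XML and insert into the audit table
INSERT INTO audit_table (xml_audit_column)
SELECT (SELECT col1, col2 FROM @deleted AS d FOR XML AUTO, TYPE);
```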

Tuesday, February 14, 2012

Delete on a single filegroup

I am running SQL 2005 SP2.
I have a partitioned table and was wondering if there is a view that could
tell me the data that resides on a specific filegroup. What I would like to
do is to delete only the data that resides on a specific filegroup. Thx.
--
Message posted via SQLMonster.com
http://www.sqlmonster.com/Uwe/Forums.aspx/sql-server/200802/1
Perhaps the $PARTITION() function will help you?
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"cbrichards via SQLMonster.com" <u3288@.uwe> wrote in message news:801e77dbe7979@.uwe...
>I am running SQL 2005 SP2.
> I have a partitioned table and was wondering if there is a view that could
> tell me the data that resides on a specific filegroup. What I would like to
> do is to delete only the data that resides on a specific filegroup. Thx.
> --
> Message posted via SQLMonster.com
> http://www.sqlmonster.com/Uwe/Forums.aspx/sql-server/200802/1
>|||> I have a partitioned table and was wondering if there is a view that could
> tell me the data that resides on a specific filegroup. What I would like
> to
> do is to delete only the data that resides on a specific filegroup. Thx.
The query below will list partitions by filegroup. You can then delete the
data using $PARTITION function like Tibor mentioned or specify partitioning
column values that map to the partitions on the filegroup. If you have a
lot of data, you might consider switching partitions into a staging table
and then truncating the staging table.
SELECT
s.name AS SchemaName,
o.name AS TableName,
ds.name AS PartitionScheme,
p.partition_number AS PartitionNumber,
fg.name AS FileGroupName,
prv_left.value AS LowerBoundaryValue,
prv_right.value AS UpperBoundaryValue,
CASE pf.boundary_value_on_right WHEN 1 THEN 'RIGHT' ELSE 'LEFT' END AS
Boundary,
p.rows AS Rows
FROM sys.schemas AS s
JOIN sys.objects AS o ON
o.schema_id = s.schema_id
AND o.type = 'U'
JOIN sys.indexes AS i ON
i.object_id = o.object_id AND
i.index_id IN(0,1)
JOIN sys.partitions p ON
p.object_id = i.object_id AND
p.index_id = i.index_id
JOIN sys.data_spaces AS ds ON
ds.data_space_id = i.data_space_id
JOIN sys.partition_schemes AS ps ON
ps.data_space_id = ds.data_space_id
JOIN sys.partition_functions AS pf ON
pf.function_id = ps.function_id
JOIN sys.destination_data_spaces AS dds2 ON
dds2.partition_scheme_id = ps.data_space_id
AND dds2.destination_id = p.partition_number
JOIN sys.filegroups AS fg ON
fg.data_space_id = dds2.data_space_id
LEFT JOIN sys.partition_range_values AS prv_left ON
ps.function_id = prv_left.function_id
AND prv_left.boundary_id = p.partition_number - 1
LEFT JOIN sys.partition_range_values AS prv_right ON
ps.function_id = prv_right.function_id
AND prv_right.boundary_id = p.partition_number
ORDER BY
SchemaName,
TableName,
FileGroupName,
PartitionNumber;
Hope this helps.
Dan Guzman
SQL Server MVP
http://weblogs.sqlteam.com/dang/
"cbrichards via SQLMonster.com" <u3288@.uwe> wrote in message
news:801e77dbe7979@.uwe...
>I am running SQL 2005 SP2.
> I have a partitioned table and was wondering if there is a view that could
> tell me the data that resides on a specific filegroup. What I would like
> to
> do is to delete only the data that resides on a specific filegroup. Thx.
> --
> Message posted via SQLMonster.com
> http://www.sqlmonster.com/Uwe/Forums.aspx/sql-server/200802/1
>
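Putting Tibor's and Dan's suggestions together, removing one filegroup's data might look like the sketch below; the table, partition function, column, and staging-table names are all hypothetical, and the partition number would come from a catalog query like Dan's:

```sql
-- option 1: delete only the rows that live in partition 3
DELETE FROM dbo.MyPartitionedTable
WHERE $PARTITION.MyPartitionFunction(PartitionCol) = 3;

-- option 2 (for large partitions): switch the partition out and truncate;
-- the staging table must match the source schema and sit on the same filegroup
ALTER TABLE dbo.MyPartitionedTable
SWITCH PARTITION 3 TO dbo.MyStagingTable;

TRUNCATE TABLE dbo.MyStagingTable;
```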


delete old backup files

Hi, I am running SQL 2000 and have set up a daily
maintenance plan to backup the data and log files. I have
also set it up to delete files that are older than 1 day
so that it won't take up all the free disk space. The
problem is that the files are still there and there is no
sign that they have been deleted by the maintenance plan.
What is wrong? Can someone please help? Thanks.
You probably do not have enough room for all the files to be created and
then deleted. You would have to have over twice the disk space of the
databases and backups to accomplish this. You may want to either try
and separate the jobs or use another scheduler (Windows Tasks) to delete
the old backups.
Bill
Seing wrote:
> Hi, I am running SQL 2000 and have setup a daily
> maintenance plan to backup the data and log files. I have
> also set it up to delete files that are older than 1 day
> so that it won't take up all the free disk space. The
> problem is that the files are still there and there is no
> sign that they have been deleted by the maintenance plan.
> What is wrong? Can somepone please help ? Thanks.|||Pretty sure I have enough disk space though...any other
suggestions?
>--Original Message--
>You probably do not have enough room for all the files to
be created and
>then deleted. You would have to have over twice the disk
space of the
>databases and backups to accomplish this. You may want
to either try
>and separate the jobs or use another scheduler (Windows
Tasks) to delete
>the old backups.
>Bill
>Seing wrote:
>> Hi, I am running SQL 2000 and have setup a daily
>> maintenance plan to backup the data and log files. I
have
>> also set it up to delete files that are older than 1 day
>> so that it won't take up all the free disk space. The
>> problem is that the files are still there and there is
no
>> sign that they have been deleted by the maintenance
plan.
>> What is wrong? Can somepone please help ? Thanks.
>.
>|||Then the only other suggestion I have is to verify the job history and
the step histories for any error messages that can help in your
troubleshooting.
Sorry
Bill
Seing wrote:
> Pretty sure I have enough disk space though...any other
> suggestions?
> >--Original Message--
> >You probably do not have enough room for all the files to
> be created and
> >then deleted. You would have to have over twice the disk
> space of the
> >databases and backups to accomplish this. You may want
> to either try
> >and separate the jobs or use another scheduler (Windows
> Tasks) to delete
> >the old backups.
> >
> >Bill
> >
> >Seing wrote:
> >>
> >> Hi, I am running SQL 2000 and have setup a daily
> >> maintenance plan to backup the data and log files. I
> have
> >> also set it up to delete files that are older than 1 day
> >> so that it won't take up all the free disk space. The
> >> problem is that the files are still there and there is
> no
> >> sign that they have been deleted by the maintenance
> plan.
> >> What is wrong? Can somepone please help ? Thanks.
> >.
> >|||Below KB might help:
http://support.microsoft.com/default.aspx?scid=kb;en-us;303292&Product=sql2k
Also, check out below great troubleshooting suggestions from Bill H at MS:
-- Log files don't delete --
This is likely to be either a permissions problem or a sharing violation
problem. The maintenance plan is run as a job, and jobs are run by the
SQLServerAgent service.
Permissions:
1. Determine the startup account for the SQLServerAgent service
(Start|Programs|Administrative tools|Services|SQLServerAgent|Startup). This
account is the security context for jobs, and thus the maintenance plan.
2. If SQLServerAgent is started using LocalSystem (as opposed to a domain
account) then skip step 3.
3. On that box, log onto NT as that account. Using Explorer, attempt to
delete an expired backup. If that succeeds then go to Sharing Violation
section.
4. Log onto NT with an account that is an administrator and use Explorer to
look at the Properties|Security of the folder (where the backups reside)
and ensure the SQLServerAgent startup account has Full Control. If the
SQLServerAgent startup account is LocalSystem, then the account to consider
is SYSTEM.
5. In NT, if an account is a member of an NT group, and if that group has
Access is Denied, then that account will have Access is Denied, even if
that account is also a member of the Administrators group. Thus you may
need to check group permissions (if the Startup Account is a member of a
group).
6. Keep in mind that permissions (by default) are inherited from a parent
folder. Thus, if the backups are stored in C:\bak, and if someone had
denied permission to the SQLServerAgent startup account for C:\, then
C:\bak will inherit access is denied.
Sharing violation:
This is likely to be rooted in a timing issue, with the most likely cause
being another scheduled process (such as NT Backup or Anti-Virus software)
having the backup file open at the time when the SQLServerAgent (i.e., the
maintenance plan job) tried to delete it.
1. Download filemon and handle from www.sysinternals.com.
2. I am not sure whether filemon can be scheduled, or you might be able to
use NT scheduling services to start filemon just before the maintenance
plan job is started, but the filemon log can become very large, so it would
be best to start it some short time before the maintenance plan starts.
3. Inspect the filemon log for another process that has that backup file
open (if you're lucky enough to have started filemon before this other
process grabs the backup folder), and inspect the log for the results when
the SQLServerAgent agent attempts to open that same file.
4. Schedule the job or that other process to do their work at different
times.
5. You can use the handle utility if you are around at the time when the
job is scheduled to run.
If the backup files are going to a \\share or a mapped drive (as opposed to
local drive), then you will need to modify the above (with respect to where
the tests and utilities are run).
Finally, inspection of the maintenance plan's history report might be
useful.
Thanks,
Bill Hollinshead
Microsoft, SQL Server
Tibor Karaszi, SQL Server MVP
Archive at:
http://groups.google.com/groups?oi=djq&as_ugroup=microsoft.public.sqlserver
"Seing" <anonymous@.discussions.microsoft.com> wrote in message
news:005801c3c8b4$5c223500$a401280a@.phx.gbl...
> Hi, I am running SQL 2000 and have setup a daily
> maintenance plan to backup the data and log files. I have
> also set it up to delete files that are older than 1 day
> so that it won't take up all the free disk space. The
> problem is that the files are still there and there is no
> sign that they have been deleted by the maintenance plan.
> What is wrong? Can somepone please help ? Thanks.

Delete not releasing disk space

I'm running SQL 2000 Standard on my ISA server for ISA to log to. ISA
generates a lot of data each day, so I used DTS to back the data up to Access
to burn to DVD. Then I used a DELETE statement in Query Analyzer, as such:
DELETE
FROM FireWallLog
WHERE LogDate BETWEEN CONVERT(DATETIME, '2004-10-18', 102) AND
CONVERT(DATETIME, '2004-11-14', 102)
After a good 10 minutes or so, it comes back with hundreds of thousands of
rows affected, which is good. I then look at the database MDF and LDF files
and see that the MDF file didn't shrink any and the LDF grew in size, up to
a gig in size.
It is my understanding from reading SQL Server Books Online that the delete
statement is supposed to free up disk space immediately. That doesn't seem to
be the case. My hard drive is about full (again... I cleared other space up
earlier because once it fills up, the proxy server starts to crash - hard).
Thanks.
Jim
Hi Jim
Massive deletes will deallocate space within the SQL Server files for
re-use by the database, but will not remove space from the physical files.
Plus, as you noticed, deleting so many rows causes a lot of logging, which
can make the log file grow.
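One common way to limit that log growth (a sketch of mine, not something from the posts above) is to delete in smaller batches, so each transaction stays small; on SQL 2000 that is typically done with SET ROWCOUNT. The table and column names below are taken from Jim's query:

```sql
-- Sketch only: delete in batches of 10,000 rows so each
-- transaction (and its log usage) stays small.
-- SQL 2000 syntax; on SQL 2005+ use DELETE TOP (10000) instead.
SET ROWCOUNT 10000
WHILE 1 = 1
BEGIN
    DELETE FROM FireWallLog
    WHERE LogDate BETWEEN CONVERT(DATETIME, '2004-10-18', 102)
                      AND CONVERT(DATETIME, '2004-11-14', 102)
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0  -- reset so later statements are unaffected
```

With frequent log backups (or SIMPLE recovery) between batches, the log space is reused instead of growing.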
The only way to remove space from the files is to physically shrink the
database or its files.
Please read about DBCC SHRINKFILE and DBCC SHRINKDATABASE in the Books
Online.
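For reference, a minimal sketch of what that shrink might look like. The database and logical file names here (ISALogDB, ISALog_Data, ISALog_Log) are made up for illustration; run sp_helpfile in the database to get the real ones:

```sql
-- Sketch only: names below are hypothetical.
-- Find your actual logical file names first:  EXEC sp_helpfile
USE ISALogDB
GO
-- Shrink the data file down, leaving a 100 MB target size
DBCC SHRINKFILE (ISALog_Data, 100)
GO
-- On SQL 2000, truncate the log first, then shrink the log file
BACKUP LOG ISALogDB WITH TRUNCATE_ONLY
DBCC SHRINKFILE (ISALog_Log, 100)
GO
```

Note that repeated shrinking fragments the files and they will just grow again on the next big delete, so it is best reserved for one-off space recovery like this.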
HTH
Kalen Delaney
SQL Server MVP
www.SolidQualityLearning.com
"Jim in Arizona" <tiltowait@.hotmail.com> wrote in message
news:%23RcvGam0EHA.2824@.TK2MSFTNGP09.phx.gbl...
|||In addition to Kalen's advice, please see:
http://www.karaszi.com/SQLServer/info_dont_shrink.asp
http://www.aspfaq.com/
(Reverse address to reply.)
"Jim in Arizona" <tiltowait@.hotmail.com> wrote in message
news:#RcvGam0EHA.2824@.TK2MSFTNGP09.phx.gbl...