I have a backup job that has failed.
The database size is 20.6 GB and the transaction logs are 135 MB.
The amount of disk space I have left is 3.65 MB, which I know is not going to work.
However, the maintenance plan is supposed to remove files older than 1 day.
I am wondering if a job works like this:
Step 1: Create backup file
Step 2: Create transaction log backup
Step 3: Delete old backup file
Step 4: Delete old transaction log backup
Which tells me I would need double the amount of disk space, to accommodate two 20 GB backup files and two 135 MB transaction log files.
Is this correct?
Also, is there a way that I can have steps 3 and 4 done first?
Lystra|||As far as the full backup goes, you can overwrite the old backup file every time you take a full backup. That way you don't need to reserve space for two full backup files. In the case of the log backup, if you don't need to or don't want to back up the log file, just use the simple recovery model. It looks like you are deleting them anyway.|||That's the problem: it is not deleting the older files, even though I have the option checked to delete them.
The database recovery model is set to full.
Lystra|||Drives are cheap....
And what happens when you need to migrate data around?
Do you have a disaster box?
How often do you dump the transaction log?
How many dumps do you have now?
How big is the hard drive?|||I think I have solved the problem. The actual .mdf is 47.2 GB, and there is not enough disk space to accommodate this backup.
Thanks
Lystra|||Not to sound like a broken record.
I think I have solved the problem. The actual .mdf is 47.2 GB, and there is not enough disk space to accommodate this backup.
Thanks
Lystra
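Pulling the suggestions in this thread together, here is a minimal T-SQL sketch; the database name MyDB and the backup path are assumptions, not taken from the posts. Since the maintenance plan's cleanup steps run after the backups, the old and new full backup files briefly coexist, which is what makes the overwrite approach attractive when disk space is tight.

-- Report the current database's size and unallocated space.  A full
-- backup contains only used pages, so these figures are a better guide
-- to backup size than the raw .mdf file size.
EXEC sp_spaceused

-- List each data and log file with its current size in MB
-- (sysfiles stores sizes in 8 KB pages).
SELECT name, size / 128.0 AS size_mb FROM sysfiles

-- Overwrite the previous full backup instead of appending to it, so
-- only one full backup file ever exists on disk.
BACKUP DATABASE MyDB
TO DISK = N'D:\Backups\MyDB.bak'
WITH INIT

-- Optional: if point-in-time recovery is not required, the simple
-- recovery model removes the need for transaction log backups.
ALTER DATABASE MyDB SET RECOVERY SIMPLE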
Sunday, March 11, 2012
Deleting a small number of records from a view causes IX locks on all the base tables.
Sorry that I had to post this as a new message instead of replying, since I got a server application error.
Kalen, this is a different issue. I wonder why the other 4 base tables got IX table locks as well, since the partitioned view is supposed to look up only the relevant tables, based on the constraint column.
Tom
Can you please include relevant portions of the original message, so I can
know what I am replying to without having to search the archives?
If this is a question about partitioned views, did you supply the view
definition, and the version you are using?
--
HTH
--
Kalen Delaney
SQL Server MVP
www.SolidQualityLearning.com
"Tom" <anonymous@.discussions.microsoft.com> wrote in message
news:29346002-6963-4D4E-B63C-C6A5C5E292CD@.microsoft.com...
> Sorry that I had to post this as a new message instead of replying, since I
> got a server application error.
> Kalen, this is a different issue. I wonder why the other 4 base tables got
> IX table locks as well, since the partitioned view is supposed to look up
> only the relevant tables, based on the constraint column.
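For reference, a minimal sketch of the kind of local partitioned view setup being discussed. Every table, view, and column name below is hypothetical, since the original view definition was not posted. With trusted, non-overlapping CHECK constraints on the partitioning column, a DELETE through the view should only need to modify one member table; running sp_lock during the delete shows which member tables actually took IX locks.

-- Hypothetical member tables; the CHECK constraint on the partitioning
-- column is what allows the optimizer to skip the other tables.
CREATE TABLE Orders2011 (
    OrderYear int NOT NULL CHECK (OrderYear = 2011),
    OrderID   int NOT NULL,
    Amount    money NOT NULL,
    CONSTRAINT PK_Orders2011 PRIMARY KEY (OrderYear, OrderID)
)
CREATE TABLE Orders2012 (
    OrderYear int NOT NULL CHECK (OrderYear = 2012),
    OrderID   int NOT NULL,
    Amount    money NOT NULL,
    CONSTRAINT PK_Orders2012 PRIMARY KEY (OrderYear, OrderID)
)
GO
CREATE VIEW OrdersAll
AS
SELECT OrderYear, OrderID, Amount FROM Orders2011
UNION ALL
SELECT OrderYear, OrderID, Amount FROM Orders2012
GO
-- The predicate on the constraint column identifies a single member
-- table; checking sp_lock while this runs shows which tables were locked.
DELETE FROM OrdersAll WHERE OrderYear = 2012 AND OrderID = 42
GO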
Tuesday, February 14, 2012
Delete a large number of records
Hi,
I need to delete a large amount of records from a sql2k table weekly.
My question is: is there any way that I could delete them bypassing the sql
log file?
DELETE FROM table1 WHERE YEAR(createdate) < 2002
Thanks,|||No, DELETE is a logged operation. If the majority of rows in the table is to
be deleted, it might be faster to move the remaining rows into a new table,
truncate the original table (TRUNCATE is minimally logged) and then re-insert, or
drop the original table altogether and rename the new table. Another
approach would be to delete smaller portions of data in a loop, one month or
one week at a time.
dean
"mecn" <mecn2002@.yahoo.com> wrote in message
news:eBm8nKOKGHA.1312@.TK2MSFTNGP09.phx.gbl...
> Hi,
> I need to delete a large amount of records from a sql2k table weekly.
> My question is: is there any way that I could delete them bypassing the
> sql log file?
> DELETE FROM table1 WHERE YEAR(createdate) < 2002
> Thanks,
>|||Thanks,
Dean
"Dean" <dvitner@.nospam.gmail.com> wrote in message
news:OK5F05OKGHA.604@.TK2MSFTNGP14.phx.gbl...
> No, DELETE is a logged operation. If the majority of rows in the table is
> to be deleted, it might be faster to move the remaining rows into a new
> table, truncate the original table (TRUNCATE is minimally logged) and then
> re-insert, or drop the original table altogether and rename the new
> table. Another approach would be to delete smaller portions of data in a
> loop, one month or one week at a time.
> dean
> "mecn" <mecn2002@.yahoo.com> wrote in message
> news:eBm8nKOKGHA.1312@.TK2MSFTNGP09.phx.gbl...
>
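For reference, a sketch of the batched-delete approach suggested in the reply, using SQL Server 2000 era syntax. The batch size is an arbitrary assumption, and the cutoff date assumes the '02' in the original query means the year 2002.

-- Limit each DELETE to a fixed number of rows so that individual
-- transactions stay small; log backups (or the simple recovery model)
-- can then reclaim log space between batches.
SET ROWCOUNT 10000

WHILE 1 = 1
BEGIN
    DELETE FROM table1
    WHERE createdate < '20020101'  -- rows created before 2002

    IF @@ROWCOUNT = 0 BREAK        -- nothing left to delete
END

SET ROWCOUNT 0                     -- restore the default row limit

If nearly all rows qualify, the other route mentioned above, copying the rows to keep into a new table and then truncating or dropping the original, avoids logging every deleted row and is usually much faster.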