
Monday, March 19, 2012

Deleting all but top record.

Hey Guys,

I have Performance Monitor running and storing network usage to my MSSQL database a few times a minute. I have a page that then shows how much of my bandwidth is being used. As you can gather, the database quickly fills up with hundreds of records, so I could do with a script that deletes them.
I can't simply delete all records because that would cause my webpage to fail, so I need a way to delete all records apart from the latest one.
Wondering if anyone would know how I could do this?|||delete from tableRecords where recordid < (select max(recordid) from tableRecords)

Nick|||

Thank you, that works great.
I've just noticed a potential problem, however. I understand that the RecordID can only go so high before it gives errors. The thing is, PerfMon has only been running a few hours and has already produced 1800+ records. With your script I can keep it down to one record in the table, but the RecordID will get higher and higher. I'm wondering what will happen if I let this run for a week or so; eventually it will stop working.
Anyone got any ideas?

|||Don't use an auto-identity field. Generate the next id yourself instead:
insert into table (recordid, [rest of fields])
select max(recordid) + 1, [rest of fields] from table
Since you will only have one record in there at a time, this shouldn't hurt your performance.
Nick
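A minimal sketch of the whole approach, putting the insert and the cleanup together (the table and column definitions here are assumptions for illustration, not from the original thread):

```sql
-- Hypothetical sample table: recordid is generated manually
-- rather than via IDENTITY, per Nick's suggestion above.
CREATE TABLE tableRecords (
    recordid  INT NOT NULL PRIMARY KEY,
    bytesUsed BIGINT NOT NULL,
    sampledAt DATETIME NOT NULL DEFAULT GETDATE()
);

-- Insert a new sample with the next id (1 when the table is empty).
INSERT INTO tableRecords (recordid, bytesUsed)
SELECT COALESCE(MAX(recordid), 0) + 1, 12345
FROM tableRecords;

-- Keep only the latest row.
DELETE FROM tableRecords
WHERE recordid < (SELECT MAX(recordid) FROM tableRecords);
```

On the overflow worry: an INT can hold values up to about 2.1 billion, so at a few inserts per minute the id would take thousands of years to run out in practice.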

Saturday, February 25, 2012

Delete Transaction Log

Hi all,

I am using MSSQL Server 2000. I have my SQL Server Database: mydb.mdf (1 GB) and mydb_log.ldf (30 GB).
Now I would like to delete this VERY BIG transaction log and let SQL Server automatically create a new one. What is the best way to do that? System downtime is available. Here is the procedure I tested on a development machine:
1) Backup full database and transaction log.
2) Detach DB by using Enterprise Manager
3) Delete transaction log (mydb_log.ldf file) by using Windows Explorer.
4) Attach DB.
Then I saw that SQL Server automatically created a new transaction log (mydb_log.ldf), only 1 MB in size.
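The same detach/attach steps can be scripted in T-SQL instead of clicking through Enterprise Manager. A sketch, assuming hypothetical file paths (substitute your own) and that no users are connected:

```sql
USE master;
GO
-- 1) Full backup first (the device path here is an assumption).
BACKUP DATABASE mydb TO DISK = 'D:\backup\mydb_full.bak';
GO
-- 2) Detach the database.
EXEC sp_detach_db 'mydb';
GO
-- 3) Delete mydb_log.ldf in Windows Explorer, then
-- 4) reattach from the data file only; SQL Server builds a fresh log.
EXEC sp_attach_single_file_db @dbname = 'mydb',
     @physname = 'D:\data\mydb.mdf';
GO
```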

Before I apply this on production, I would like to ask for your ideas. Any warnings? If possible, do you know where I can find any official Microsoft articles about deleting the transaction log?
I appreciate all help.
Thanks|||Have you tried DBCC SHRINKFILE / DBCC SHRINKDATABASE? Have a look at the following article:

article (http://support.microsoft.com/default.aspx?scid=kb;EN-US;q272318)

If all else fails, then your solution is the quickest.

Good luck.|||On deleting Transaction logs:

Re: Before I apply on Production, I would like to ask for your idea? Any warning? If possible do you know where I can find any official articles of Microsoft about deleting Transaction log?
I appreciate for all help

Q1 Before I apply on Production, I would like to ask for your idea?
A1 Why not use DBCC ShrinkFile and/or enable autoshrink? Also, ask why your log is so much larger than your data: (i) are you not frequently dumping/backing up your log to transaction log backup *.trn files? (ii) do you have long-running transactions that are filling up your log? (iii) is your DB very heavily used?

DBCC ShrinkFile advantages:
* it is safe
* it may be safely used even if your DB has multiple log files (add several additional log files to your DB, then rigorously test your method)
* ordinary users may work in the DB while its files are being shrunk

Use MyDB
Go
DBCC ShrinkFile ([MyDB_Log], 1, TruncateOnly)
Go
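In practice the shrink usually only reclaims space once the inactive portion of the log has been backed up, so a fuller sketch of the sequence A1 alludes to (the backup path is an assumption) would be:

```sql
USE master;
GO
-- Back up the log so its inactive portion can be released.
BACKUP LOG MyDB TO DISK = 'D:\backup\MyDB_log.trn';
GO
USE MyDB;
GO
-- Then shrink the physical log file down toward 1 MB.
DBCC SHRINKFILE (MyDB_Log, 1);
GO
```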

Q2 Any warning?
A2 You may "get away with" using your method indefinitely; however, it is not safe.

Q3 If possible do you know where I can find any official articles of Microsoft about deleting Transaction log?
A3 BOL = Books Online. You may install BOL using the SQL Server installer; it covers the various recommended methods for reducing/limiting transaction log file sizes (including DBCC ShrinkFile and DBCC ShrinkDatabase). Microsoft TechNet is another source of documentation that may help if/when your method gets you into trouble.|||Many thanks for all your help.
First of all, I just took on a new role as a DBA. I then found that my DB had not been monitored, or its log dumped/backed up to *.trn files, for a long time.
Why do I not want to use SHRINKFILE? It seems to me DBCC SHRINKFILE does not work well; the physical size does not shrink as much as I expected. So I would like to "delete" the log and create a new one.

Regards,
John|||Bill,
Below is a reply I made to a post from a month or so ago. Think it will help you out.

A nice feature that was added with SQL 2000 is that you now need to backup the database using the SQL Backup utility in order for the transaction log to shrink. I think that many of us were truncating the log at checkpoint in 7.0 and using a third party backup. We looked at a few things here and ended up setting up a backup device and then doing a complete backup once a day and appending a diff backup every three hours. Seems to do a good job on keeping the log at a reasonable size.
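That schedule can be sketched roughly like this (the device name and path are assumptions; in practice each BACKUP would run as a scheduled SQL Server Agent job):

```sql
-- One-time setup: create a backup device to append to.
EXEC sp_addumpdevice 'disk', 'mydb_backup', 'D:\backup\mydb_backup.bak';
GO
-- Once a day: complete (full) backup.
BACKUP DATABASE mydb TO mydb_backup;
GO
-- Every three hours: differential backup, appended to the same device.
BACKUP DATABASE mydb TO mydb_backup WITH DIFFERENTIAL;
GO
```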

Hope this helps.

Brent

Delete takes extremely long time.

Guys, I'm running MSSQL 2000 with the latest SP on a PIII-1GHz, 4GB RAM, SCSI and all good things but...

...when I execute this simple query to delete 8,000 records out of 18,000 total in the table, it takes 11 seconds:

delete from tempViewResults where opportunity_id = '016158'

I do have an index on opportunity_id.
Any ideas why? Is this normal?

Any help is greatly appreciated.|||Please, anybody?|||In this case, the index is not really helping you so much as hurting you. For the optimizer to consider using the index, you have to be going after less than 10% of the table. How many other indexes are there on the table? Also check for triggers on the table that may be firing on deletes.|||Also consider creating that "temp" table in tempdb, which is faster.|||Two more cents:
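A hedged sketch of the diagnostics suggested above, plus a SQL 2000-era batched delete that keeps each transaction (and the lock/log footprint) small. Table and column names come from the post; the batch size of 1000 is an assumption:

```sql
-- Inspect the indexes and triggers on the table.
EXEC sp_helpindex 'tempViewResults';
EXEC sp_helptrigger 'tempViewResults';

-- Batched delete: SET ROWCOUNT caps each DELETE at 1000 rows,
-- looping until no matching rows remain.
SET ROWCOUNT 1000;
WHILE 1 = 1
BEGIN
    DELETE FROM tempViewResults WHERE opportunity_id = '016158';
    IF @@ROWCOUNT = 0 BREAK;
END
SET ROWCOUNT 0;
```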

Cent #1: Your delete statement will take longer if there are cascading relationships set up with subtables.

Cent #2: I'm not sure if using tempdb would be faster, because it would still be writing data to disk, but if you can get away with using a table variable, which uses memory, you might be able to avoid disk writes altogether.
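A minimal sketch of the table-variable alternative (the column list and source table are assumptions for illustration):

```sql
-- Table variable instead of a permanent work table; it is scoped
-- to the batch, so no DELETE or cleanup is needed afterwards.
DECLARE @results TABLE (
    opportunity_id CHAR(6) NOT NULL,
    amount         MONEY   NULL
);

INSERT INTO @results (opportunity_id, amount)
SELECT opportunity_id, amount
FROM opportunities          -- hypothetical source table
WHERE opportunity_id = '016158';
```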

blindman|||Using a #temp table is faster if your database is in the FULL or BULK-LOGGED recovery model.

Of course, if your database is in the SIMPLE model you won't get any speed improvement.|||Manowar:

What makes the temp table faster than a stored table? I couldn't find anything about this in Books Online. I'd have thought that querying a stored table would let the optimizer take advantage of cached query plans. Could you point me to where this is explained or documented in Books Online or on Microsoft's support site?

blindman|||I think there's nothing about it in BOL or on MSDN. I found a note in Kalen Delaney's world-famous book "Inside SQL Server 2000".

You can look in the section about temp (#) tables. Kalen says that since in tempdb SQL Server only records log data for rollback purposes (and not for database recovery), data modification operations can be up to four times faster than in a standard DB.