Is this typical? Can I do something to slim it down? As I am just trying to decrease the overall size to make it easier to work with when creating my application in VB, I am not worried about restoring the db (I have secure copies).
The deletes are recorded in the Transaction Log, so that if you ever need to rebuild a 'crashed' database, you have a complete record of every action.
However, if you are just trying to clean out tables, you 'might' be able to use the TRUNCATE command.
TRUNCATE TABLE MyTable
If you are just removing part of the data, then you could change the database recovery type to 'Simple', and the log will be truncated as soon as the deletes are complete.
|||Hi Arnie,
I just want to create a lightweight version of the DB to make the VB application load and run faster during the early stages of development (and take up less drive space). I want the structure to remain the same so that when the application code is in good shape I will use the full data version. So I want to remove most of the data and keep the file size (especially log) as low as possible.
It sounds like I need to change the recovery type to 'Simple' but don't know how. I have checked the menus and searched through Books Online without luck.
|||
One consideration many of us make is to create a script of the database, perhaps including scripted data inserts for Lookup tables, etc.
Then when there is a schema change, you add it to the script, and you can regularly drop the development database and run the script to re-create a 'clean' copy. You can script out the objects by right-clicking on the database, selecting [Tasks...], and then [Generate Scripts...]; follow the prompts. There are also third-party tools for this purpose. Below are some resources that will help with scripting out data.
To change the Recovery Model, using Object Explorer, right-click on the database, select [Properties], then [Options]. You can change the Recovery model on that screen. Be sure to make a note to return the Recovery Model to [FULL] when you move the database to production. You may need to 'shrink' the log file since it has grown so large.
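The same change can be made in T-SQL; a minimal sketch, assuming a database named MyDatabase (substitute your own database name):

```sql
-- Switch the recovery model so bulk deletes don't bloat the log
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

-- Remember to switch back before moving to production:
-- ALTER DATABASE MyDatabase SET RECOVERY FULL;
```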
DDL -Script Database to File
http://www.wardyit.com/blog/blog/archive/2006/07/21/133.aspx
http://www.sqlteam.com/publish/scriptio
http://www.aspfaq.com/etiquette.asp?id=5006
http://www.codeproject.com/dotnet/ScriptDatabase.asp
http://www.nigelrivett.net/DMO/DMOScriptAllDatabases.html
http://rac4sql.net/qalite_main.asp
DDL –Script Database to File using SMO (VB.NET)
http://msdn2.microsoft.com/en-us/library/ms162138.aspx
DDL –Script Data to file (Database Publishing Wizard)
http://www.microsoft.com/downloads/details.aspx?FamilyID=56e5b1c5-bf17-42e0-a410-371a838e570a&DisplayLang=en
FileSize -How to stop the log file from growing
http://www.support.microsoft.com/?id=873235
FileSize -Log file filling up
http://www.support.microsoft.com/?id=110139
FileSize -Log File Grows too big
http://www.support.microsoft.com/?id=317375
FileSize -Log File issues
http://www.nigelrivett.net/TransactionLogFileGrows_1.html
FileSize -Shrinking Log in SQL Server 2000 with DBCC SHRINKFILE
http://www.support.microsoft.com/?id=272318
I set the Recovery Model to simple.
I deleted about 95% of the data.
I ran DBCC SHRINKFILE and shrunk the log file to 2 MB (FANTASTIC! - NO PROBLEMS)
But the file size for the DB still shows 702 MB in Windows Explorer. So I looked at the Disk Usage report, where it shows 702 MB for SPACE RESERVED but only 4.3 MB for SPACE USED.
I wonder if this is because the field with the most data is the ntext datatype, which I believe reserves a certain amount of disk space regardless of what is used. The db was created several years ago in Access and migrated to SQL Server 2000. I have considered changing the datatype to nvarchar(MAX), which I believe only reserves what is actually needed. Also, when I create a full-text index, I get an alert that the ntext datatype may prevent some features of WITH CHANGE_TRACKING AUTO.
Do you think the ntext datatype is keeping the file size so large? If so, is it simply a matter of changing datatype on that column? Or could there be other factors keeping the db file size so large?|||
First up, the warning about ntext and full-text search: full-text change tracking does not support updates made via WRITETEXT or UPDATETEXT, which is probably why you are getting that alert. As you are using SQL 2005 (?), I'd suggest changing the column type to nvarchar(max). Text, ntext and image have been superseded but are supported for backward compatibility.
As for the size of the database: when you delete data, the data file will not shrink, but the pages within it will be marked as free. Generally speaking, it's a good idea to have your data file at a size bigger than the current database size, as this allows for growth of data without the (expensive) operation of growing the data file.
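To see how much of the reserved space is actually in use, sp_spaceused gives the same reserved-versus-used breakdown as the Disk Usage report; a quick sketch (the database name is hypothetical, the table name is from this thread):

```sql
USE MyDatabase;                     -- hypothetical database name
EXEC sp_spaceused;                  -- database totals: size, unallocated, reserved
EXEC sp_spaceused 'FullDocuments';  -- per-table reserved / data / index / unused
```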
That said, you can shrink the data file in a similar way to the transaction log.
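A sketch of that, assuming a logical data-file name of MyDatabase_Data (check yours in sys.database_files) and an illustrative 100 MB target:

```sql
-- List the logical file names and current sizes first
-- (size is in 8 KB pages, so divide by 128 for MB)
SELECT name, size/128 AS size_mb FROM sys.database_files;

-- Shrink the data file down toward 100 MB, leaving room to grow
DBCC SHRINKFILE (N'MyDatabase_Data', 100);
```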
HTH!
|||Thanks Rich, I shrunk the db as well and it really does make a big difference while I'm developing in VB. I'm learning LINQ, so there is a lot of trial and error, and this really saves me some time.
Now to the question of what to do with the database when I am ready for full size (when I get my VB code worked out), since there is clearly some redundant data that will need to be deleted. What do you mean by "it's a good idea to have your datafile at a size bigger than the current database size as this allows for growth of data without the expensive operation of growing the datafile"?
I didn't realize that you could directly determine the size of the data other than choosing column datatypes that limit or extend size (e.g. nvarchar(10)). This was one of the concerns about the ntext datatype (besides being outdated). I had read that it used up a certain amount of space regardless of the data whereas nvarchar(max) is flexible in allocating space depending upon the amount of data, hence a smaller file size customized to the requirements of the data.
For this particular database, the size should not increase significantly once it is finished, since users will not be able to add or delete data. From time to time corrections (typos) may need to be made, but these should not add substantially to the size.
|||
Sorry if my wording was a bit confusing.
When you create a database you can specify the size of the data and log files which essentially allocates space for your data to be stored.
If you set your files too small and you have a lot of database write activity, the files will physically need to grow to store the data. The physical growth of a file is expensive in terms of resources and you want to minimise this, so set the files at a size that reflects how big you predict your data to grow.
The size of your columns, etc., is how you control how much data is stored within the data file, and this is more likely to be a factor in performance than the physical file size.
HTH!
As a followup to the datatype question, I have been trying to change the datatype ntext on the SectionText column to nvarchar(max), but am having a problem. My sql is:
ALTER TABLE FullDocuments MODIFY SectionText nvarchar(Max) not null;
I am getting this error message:
Incorrect syntax near 'MODIFY'.
Any thoughts on what I am doing wrong and how to fix it?|||
That's not the correct syntax for that command:
Try:
Code Snippet
ALTER TABLE FullDocuments
ALTER COLUMN SectionText NVARCHAR(Max) NOT NULL;
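One hedged follow-up worth noting: after the ALTER COLUMN, rows that were stored as ntext keep their old text pages until they are rewritten, so an in-place update is a commonly used way to force the values into the new nvarchar(max) storage (and let a subsequent shrink reclaim the freed space):

```sql
-- Rewrite existing values so they are stored as nvarchar(max)
-- rather than in the old ntext text pages
UPDATE FullDocuments SET SectionText = SectionText;
```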
Let me expand upon Rich's explanation.
When a database runs out of space, and if 'AutoGrow' is set to TRUE, it will automatically acquire additional disk space from the OS. Consider the analogy of a notebook. When SQL Server acquires disk space from the OS, it is like the notebook gets paper, and then that paper has to be divided into pages, lines drawn on the pages, and they have to be numbered before they are ready to use. Once done, it is quick and efficient for SQL Server to use the pages. But when the pages fill up, and SQL Server has to acquire additional space from the OS, it is a 'costly' operation to have to wait until all of the new pages are ready to use before the operation that needed to store data can continue. So it is best to 'size' the database large enough to contain the expected data growth over a good period of time. Then it is a 'Best Practice' to have a scheduled task that will periodically assess the need for additional space, and acquire that space during 'non-peak' times. (AutoGrow could fire off at the worst moment for server load.)
Perhaps these resources will help:
FileSize -Considerations for Autogrow and AutoShrink
http://www.support.microsoft.com/?id=315512
FileSize -DB Shrink Issues
http://www.karaszi.com/SQLServer/info_dont_shrink.asp
|||Thanks again.