Wednesday, February 17, 2010

Compacting A Dynamics AX 4.0 SQL Server Database Using Index Scripts


To implement a Microsoft Visual SourceSafe source control solution for the developers working in our company's Microsoft Dynamics AX 4.0 development environment, we first had to create three new Microsoft Virtual PC 2007 servers to host the AOS (Application Object Server) and the required SQL Server 2005 databases. We were experiencing the growing pains that any IT department responsible for writing its own code solutions faces. Several days of research and testing helped us identify the right configuration for our new Visual SourceSafe solution, but we were stuck: the size of our central Dynamics AX 4.0 development database was preventing us from going any further.


The problem: we soon discovered that our development AX 4.0 database had a 48 GB MDF file with 28 GB of unused space. To host the three virtual servers on the Hewlett-Packard ProLiant DL360 G5 server without exceeding the 130 GB capacity of the RAID array, I needed to reduce the size of this MDF file as much as possible.
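
If you want to see how your own data and free space break down before doing anything drastic, the standard sp_spaceused procedure gives a quick summary. A minimal check from a query window:

use YOURDBHERE;
go
exec sp_spaceused;
go

The "unallocated space" figure is the space the MDF file is holding but not actually using.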


Before you execute any of my solutions below, consider getting yourself a test server first. A test box will prevent any costly downtime for your company's live server systems and allow you to confirm your proposed solutions before deployment. Additionally, a properly configured Dynamics AX 4.0 test server can act as a standby server if the need should ever arise.


I created a test server with the following specifications for this exercise:

ASUS NCCH-DL Motherboard

Dual Intel Xeon 3.4 GHz processors, 800 MHz FSB, 2 MB L2 cache (40K2512)

Six Samsung SpinPoint T HD501LJ 500 GB SATA II Hard Drives

Western Digital My Passport Essential 320GB External USB 2.0 Portable - Black Hard Drive


To protect the privacy of my data, I have replaced the actual database name with YOURDBHERE in the scripts below. Substitute your own database name wherever it appears.


Failed Solutions:

#1 My first approach was to execute the standard shrink commands on the database as follows:

use YOURDBHERE;
go
dbcc shrinkfile (YOURDBHERE);
go
dbcc shrinkdatabase (YOURDBHERE, TRUNCATEONLY);
go

The commands had no effect. I detached and reattached the database, then ran the commands again, with no change to the MDF file size. Many database administrators have good success with these commands, but my unruly database was going to require a more robust approach.
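
One detail worth knowing before you write these commands off: DBCC SHRINKFILE expects the logical name of the data file, which is not always the same as the database name. If you are unsure what yours is called, the sys.database_files catalog view will tell you (size is reported in 8 KB pages, hence the division by 128):

use YOURDBHERE;
go
select name, size/128 as size_mb from sys.database_files;
go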


#2 Any data modification operation (INSERT, UPDATE, or DELETE statements) can cause table and index fragmentation. My second option was to reindex the database. Dynamics AX 4.0 uses over 1,300 tables, which makes this a time-consuming procedure if you simply run DBCC DBREINDEX ('TableName','Index_Name',fillfactor) against each one. Some may say that you should use the SQL Administration form within Dynamics AX 4.0, but I disagree: it does not provide any detail about the paging in your database and, as a result, leaves you administering blind.
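
For reference, a single-table rebuild looks like the line below. INVENTTRANS is just an example table name; an empty string for the index name rebuilds every index on the table, and 90 is the fill factor (10 percent free space left per page):

DBCC DBREINDEX ('INVENTTRANS', '', 90)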


My approach was a semi-automated reporting process built from two queries: one that shows me where my most critical problems are, and one that generates the statements to fix them.

Step 1) Run This Query Within The Desired Database To Review Index Status:

SELECT TOP (20) OBJECT_NAME(DMV.object_id) AS TABLE_NAME, SI.name AS INDEX_NAME,
    avg_fragmentation_in_percent AS FRAGMENTATION_Percent,
    Fragment_Count, page_count, Record_Count
FROM sys.dm_db_index_physical_stats(DB_ID(N'YOURDBHERE'), NULL, NULL, NULL, 'DETAILED') AS DMV
    LEFT OUTER JOIN sys.indexes AS SI ON DMV.object_id = SI.object_id AND DMV.index_id = SI.index_id
WHERE (avg_fragmentation_in_percent > 80) AND (record_count > 1000)
ORDER BY avg_fragmentation_in_percent DESC -- worst fragmentation first, so TOP (20) is meaningful

Step 2) Run this query to generate a DBCC INDEXDEFRAG statement for each of the worst offenders:

SELECT TOP (10) 'DBCC INDEXDEFRAG (YOURDBHERE,' + CAST(OBJECT_NAME(DMV.object_id) AS NVARCHAR(50))
    + ',' + CAST(SI.name AS NVARCHAR(50)) + ')' AS SYNTAX_EXECUTE
FROM sys.dm_db_index_physical_stats(DB_ID(N'YOURDBHERE'), NULL, NULL, NULL, 'DETAILED') AS DMV
    LEFT OUTER JOIN sys.indexes AS SI ON DMV.object_id = SI.object_id AND DMV.index_id = SI.index_id
WHERE (avg_fragmentation_in_percent > 80) AND (record_count > 1000)
ORDER BY avg_fragmentation_in_percent DESC

Step 3) Now copy all of the output text into a query window for the affected database and execute.
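
The statements you end up executing will look something like the following (the table and index names here are purely illustrative, not actual output from my system):

DBCC INDEXDEFRAG (YOURDBHERE,INVENTTRANS,I_177TRANSIDIDX)
DBCC INDEXDEFRAG (YOURDBHERE,LEDGERTRANS,I_212DATEIDX)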

Step 4) For those tricky indexes that refuse to defragment, rebuild them with the statements this generates, then repeat steps 2 and 3:

SELECT TOP (20) 'DBCC DBREINDEX (' + CAST(OBJECT_NAME(DMV.object_id) AS NVARCHAR(50))
    + ')' AS SYNTAX_EXECUTE
FROM sys.dm_db_index_physical_stats(DB_ID(N'YOURDBHERE'), NULL, NULL, NULL, 'DETAILED') AS DMV
    LEFT OUTER JOIN sys.indexes AS SI ON DMV.object_id = SI.object_id AND DMV.index_id = SI.index_id
WHERE (avg_fragmentation_in_percent > 80) AND (record_count > 1000)
ORDER BY avg_fragmentation_in_percent DESC


For now, this works on a micro scale to solve my most immediate problems. The query has its limitations on my hardware, though: I cannot make the WHERE clause much broader, nor return more than 20 rows at a time. Dropping the thresholds below 80 percent fragmentation or 1,000 records will surface more candidates, but proportionally fewer of them are indexes worth defragmenting.


By this point I had spent an entire day with no success. The breakthrough was finding a Microsoft Support article that acknowledges a bug in SQL Server that causes this behavior with the BLOB data types text, ntext, and image. If you are running Microsoft SQL Server 2005, their article describes a solution that requires copying and then deleting the problematic database tables: http://support.microsoft.com/default.aspx?scid=kb;en-us;324432


I soon realized that I could recreate their complicated BLOB search by reindexing the database and searching for BLOBs at the same time. You will also notice that Microsoft is indirectly admitting that you do not need this BLOB data, since it is only associated with the nodes in the database. Nor does the article give database administrators who work with table structures of more than 100 objects an effective means of finding the offending BLOB data. As a result, I decided to run a series of queries that repair my database structure and then let the system purge any unnecessary data through the REPAIR_ALLOW_DATA_LOSS option. Please note that I ran this command only because I had exhausted every other way of removing the BLOB data. DBCC commands are very powerful and can remove critical business data that you may still need.
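
If you simply want to see which tables even contain these BLOB column types before letting anything repair or purge them, a plain metadata query will list them (this only finds the columns, not the orphaned BLOB pages the KB article describes):

SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE IN ('text', 'ntext', 'image')
ORDER BY TABLE_NAME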


Prerequisites:

After running the shrink statements above on the database with no effect, I applied the SP3 service pack to my SQL Server 2005 SP2 database engine.

The service pack, combined with the system queries below and a work table I created in the database to report BLOB candidates, finally got me to a solution.
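
The script in STEP 2 logs any table that still carries BLOB data into that work table. A minimal sketch of the table, assuming the single-column shape the script's INSERT expects:

CREATE TABLE dbo.AABlobs (BlobTable sysname NOT NULL);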


STEP 1:

Put your database in single user mode.
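
One way to do this from a query window; ROLLBACK IMMEDIATE disconnects any active sessions, so be sure nobody else needs the database:

use master;
go
ALTER DATABASE YOURDBHERE SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
go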


STEP 2:

-- Loop over every base table: defragment and rebuild its indexes,
-- and log any table that carries BLOB data (sysindexes.indid = 255
-- marks the text/ntext/image allocation) into the AABlobs work table.
DECLARE @indid int
DECLARE @TableName sysname
DECLARE cur_tblfetch CURSOR FOR
SELECT Table_Name FROM information_schema.tables WHERE table_type = 'base table'
OPEN cur_tblfetch
FETCH NEXT FROM cur_tblfetch INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
    DECLARE cur_indfetch CURSOR FOR
    SELECT indid FROM sysindexes WHERE id = OBJECT_ID(@TableName) AND keycnt > 0
    OPEN cur_indfetch
    FETCH NEXT FROM cur_indfetch INTO @indid
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SELECT 'INDEXDEFRAG / DBREINDEX of sysindexes.index_id = ' + CONVERT(nvarchar(5), @indid) +
            ' for table: ' + RTRIM(@TableName)
        IF @indid <> 255 DBCC INDEXDEFRAG (0, @TableName, @indid)
        IF @indid <> 255 DBCC DBREINDEX (@TableName)
        IF @indid = 255 INSERT INTO AABlobs (BlobTable) VALUES (@TableName)
        FETCH NEXT FROM cur_indfetch INTO @indid
    END
    CLOSE cur_indfetch
    DEALLOCATE cur_indfetch
    FETCH NEXT FROM cur_tblfetch INTO @TableName
END
CLOSE cur_tblfetch
DEALLOCATE cur_tblfetch


STEP 3:

use YOURDBHERE;
go
DBCC CHECKDB (YOURDBHERE, REPAIR_ALLOW_DATA_LOSS);
go
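
If you would rather see what CHECKDB finds before letting it discard anything, you can run it once without the repair option first:

use YOURDBHERE;
go
DBCC CHECKDB (YOURDBHERE) WITH NO_INFOMSGS;
go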


STEP 4:

use YOURDBHERE;
go
dbcc shrinkfile (YOURDBHERE);
go
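
SHRINKFILE also accepts a target size in megabytes if you want to aim for a specific number from the query window instead of the GUI in step 5; the 15000 here is only an illustration, and remember the first argument is the logical data file name:

dbcc shrinkfile (YOURDBHERE, 15000);
go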


STEP 5:

From SQL Server Management Studio, run the Shrink File task on the data file and choose the "Reorganize pages before releasing unused space" option. I recommend setting your target size at least 500 MB above the minimum.
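
Once the shrink finishes, remember to undo step 1 and let everyone back into the database:

use master;
go
ALTER DATABASE YOURDBHERE SET MULTI_USER;
go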


Conclusion

My database shrank from 48 GB to 14.5 GB using these five steps. I am sharing this discussion to help other Dynamics AX 4.0 system administrators who are not getting proper answers to their problems. You can do this without the help of costly consultants, who would just call Microsoft Technical Support anyway.


I take no responsibility for the validity of my comments and solutions. Use the solutions in this article on your data at your own risk!