Monday, March 12, 2012

Attending Convergence 2012 In Houston, TX

I will be attending the AX Convergence 2012 user conference in Houston, Texas next week.

My goals will be:
1) Determine how an organization whose users perform multiple roles for the company will be licensed... Enterprise CAL?
2) What performance increases can we expect in communication speed between the AOS (Application Object Server) and SQL Server?
3) How much commitment does Microsoft have towards making the transition to SQL Reporting Services in AX 2012 a smooth one for existing AX 4.0 and AX 2009 customers?

In short, there are a lot of changes under the hood of Dynamics AX 2012, and we as a community should get prepared by asking questions and seeking answers before committing to this product.

If you wish to contact me at the conference, send a message to my Twitter account:
https://twitter.com/#!/rgathright

Sources:
http://www.microsoft.com/dynamics/convergence/houston12/

Thursday, November 11, 2010

The AMD 1090T Processor Works Well As A Dynamics AX Server

I have been using the AMD x6 Phenom II 1090T Black Edition Processor for over six months now as a Dynamics AX development server. The processor provides plenty of memory bandwidth for SQL Server and works great in 64-bit mode.

A new AMD 1100T has been announced and is due to arrive in time for the Christmas 2010 holiday season, but the AMD 1090T's price has been cut, so it may be an even more attractive option!

Click here to read about my experiences and to see full benchmarks:
http://www.epinions.com/review/3_2GHz_AMD_Phenom_II_X6_Six_Core_Processor_1090T_AM3_OEM_Black_Edition_HDT90ZFBK6DGR/content_527182171780

Monday, March 1, 2010

Are SSD Hard Drives An Option For Increasing SQL Server 2005 And Dynamics AX 4.0 Performance?


In this Dynamics AX 4.0 hard drive shootout, I wanted to find a solid-performing 2.5” form factor product that could be used in my HP MSA70 RAID array. The Hewlett Packard StorageWorks MSA70 will accept SATA 2.5” hard drives, but the question is which one to buy!

Test Server Configuration
ASUS NCCH-DL Motherboard
Dual Intel Xeon 3.4 GHz (40K2512) processors: http://www.epinions.com/review/Processor_Xeon_3_4_Ghz_800mhz_2mb_L2_Cache_40k2512_IBM_40K2512_0000435908469/content_499236703876
Transcend 32GB 2.5-inch SATA MLC SSD
PDP Systems Patriot Warp 32GB 2.5-inch SATA-300 SSD
Western Digital Scorpio Black 250GB 2.5-inch SATA 7200 RPM 16MB cache hard drive (WD2500BEKT)

The test server does not use SAS technology, but it does implement a software-level RAID that I find easy to work with. I was able to get full control over the striping across my drives.

Transcend 32GB SSD:
In both my ASUS NCCH-DL server and a SATA II desktop motherboard, I could only get a sustained average read of 57.5 MB/s. The drive set me back $145.

PDP Systems Patriot Warp 32GB SSD:

The drive came to life in my SATA II desktop motherboard with an average read of 145.4 MB/s, and managed an average read of 101.3 MB/s on the ASUS NCCH-DL server. The drive set me back $130.

WD2500BEKT 250GB 2.5” SATA Hard Drive:

The Western Digital WD2500BEKT has 16MB of cache and spins at 7,200 RPM. Many people today would consider this just a slow notebook hard drive, but in the right computer or RAID array this drive can become a powerful force. I got a sustained average read of 139.4 MB/s using this hard drive on a SATA II desktop motherboard. I placed the drive in my ASUS NCCH-DL motherboard and could never get an average read rate better than 51.4 MB/s. Do not be misled by these results, though; all they show is that the drive is optimized for SATA II operation. The drive set me back $70.

Summary
While I have not tested the Intel SSD offerings, I still wanted to share my results with these drives to encourage a measure of fiscal restraint among fellow database administrators looking to increase the performance of their systems. SSD drives are still overpriced for the performance that they offer. You can buy two 250GB hard drives that are just 10 MB/s slower for the same cost as one of these SSD offerings!

Wednesday, February 17, 2010

Compacting A Dynamics AX 4.0 SQL Server Database Using Index Scripts


To implement a Microsoft Visual SourceSafe source control solution for the various developers on our company's Microsoft Dynamics AX 4.0 development environment, we first had to create three new Microsoft Virtual PC 2007 servers that would host the AOS (Application Object Server) and the required SQL Server 2005 databases. We were experiencing the growing pains that any IT department responsible for writing its own code solutions faces. Several days of research and testing helped us identify the right configuration for our new Visual SourceSafe solution, but we were frustrated because the size of our central Dynamics AX 4.0 development database was preventing us from going any further.


We soon discovered that our development AX 4.0 database had a 48GB MDF file with 28GB of unused space. In order to host the three virtual servers on the Hewlett Packard ProLiant DL360 G5 server without exceeding the 130GB of capacity on the RAID array, I would need to reduce the size of this MDF file as much as possible.
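
For reference, one quick way to see how much of an MDF file is sitting unused is to run sp_spaceused from a query window inside the database in question; it reports the overall database size along with the unallocated space:

-- Run inside the database in question; reports database size, unallocated (unused) space, and data/index usage
EXEC sp_spaceused;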


Before you execute any of my solutions below, consider getting yourself a test server first. A test box will prevent any costly downtime for your company's live server systems and allow you to confirm your proposed solutions before deployment. Additionally, a properly configured Dynamics AX 4.0 test server can act as a standby server if the need should ever arise.


I created a test server with the following specifications for this exercise:

ASUS NCCH-DL Motherboard

Dual Intel Xeon 3.4 GHz 800 MHz 2 MB L2 Cache (40K2512) Processors

Six Samsung SpinPoint T HD501LJ 500 GB SATA II Hard Drives

Western Digital My Passport Essential 320GB External USB 2.0 Portable - Black Hard Drive


To protect the privacy of my data, I changed the name of the database in the scripts below to YOURDBHERE. You will want to substitute your own database name.


Failed Solutions:

#1 My first approach was to execute the standard shrink commands on the database as follows:

use YOURDBHERE;
go
-- shrinkfile expects the logical data file name, which by default matches the database name
dbcc shrinkfile (YOURDBHERE);
go
dbcc shrinkdatabase (YOURDBHERE, TRUNCATEONLY);
go

The commands had no effect. I detached and reattached the database, then ran the commands again, with no effect on the MDF file size. Many database administrators have good success with these commands, but with my unruly database a more robust approach was going to be required.


#2 When you perform data modification operations (INSERT, UPDATE, or DELETE statements), table fragmentation can occur. My second option was to reindex the database. Dynamics AX 4.0 uses over 1,300 tables, which makes this a time-consuming procedure if you simply run DBCC DBREINDEX ('TableName','Index_Name',fillfactor) against each one. Some may say that you should use the SQL Administration form within Dynamics AX 4.0, but I disagree because it does not provide any detail about the paging in your database and as a result leaves you administering blind.
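
For reference, a single-table run of this command looks like the sketch below. The table name is only an illustrative example (CUSTTABLE is a standard AX table), a fill factor of 0 keeps each index's existing setting, and an empty string for the index name rebuilds every index on the table:

-- Rebuild all indexes on one table; 0 keeps the current fill factor (table name is illustrative)
DBCC DBREINDEX ('CUSTTABLE', '', 0);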


My solution involved a semi-automated approach: one query that reports where my most critical fragmentation problems are, and a second query that generates the statements to fix them.

Step 1) Run This Query Within The Desired Database To Review Index Status:

SELECT TOP (20) OBJECT_NAME(DMV.object_id) AS TABLE_NAME, SI.name AS INDEX_NAME,
    avg_fragmentation_in_percent AS FRAGMENTATION_PERCENT,
    Fragment_Count, page_count, Record_Count
FROM sys.dm_db_index_physical_stats(DB_ID(N'YOURDBHERE'), NULL, NULL, NULL, 'DETAILED') AS DMV
    LEFT OUTER JOIN sys.indexes AS SI
        ON DMV.object_id = SI.object_id AND DMV.index_id = SI.index_id
WHERE (avg_fragmentation_in_percent > 80) AND (record_count > 1000)

Step 2) Run This Query To Generate The DBCC INDEXDEFRAG Statements:

SELECT TOP (10) 'DBCC INDEXDEFRAG (YOURDBHERE,' + CAST(OBJECT_NAME(DMV.object_id) AS NVARCHAR(50))
    + ',' + CAST(SI.name AS NVARCHAR(50)) + ')' AS SYNTAX_EXECUTE
FROM sys.dm_db_index_physical_stats(DB_ID(N'YOURDBHERE'), NULL, NULL, NULL, 'DETAILED') AS DMV
    LEFT OUTER JOIN sys.indexes AS SI
        ON DMV.object_id = SI.object_id AND DMV.index_id = SI.index_id
WHERE (avg_fragmentation_in_percent > 80) AND (record_count > 1000)

Step 3) Now copy all of the generated output text into a query window for the affected database and execute it (the generated statements look like the example after Step 4).

Step 4) For those stubborn indexes, rebuild them with the statements generated by the query below, then repeat Steps 2 through 3 again!

SELECT TOP (20) 'DBCC DBREINDEX (' + CAST(OBJECT_NAME(DMV.object_id) AS NVARCHAR(50)) + ')' AS SYNTAX_EXECUTE
FROM sys.dm_db_index_physical_stats(DB_ID(N'YOURDBHERE'), NULL, NULL, NULL, 'DETAILED') AS DMV
    LEFT OUTER JOIN sys.indexes AS SI
        ON DMV.object_id = SI.object_id AND DMV.index_id = SI.index_id
WHERE (avg_fragmentation_in_percent > 80) AND (record_count > 1000)
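
Once executed, the generated statements from Steps 2 and 4 come out looking something like the lines below; the table and index names here are purely illustrative placeholders:

-- Example output from the Step 2 query (hypothetical table and index names)
DBCC INDEXDEFRAG (YOURDBHERE, INVENTTRANS, I_177RECID)
-- Example output from the Step 4 query
DBCC DBREINDEX (INVENTTRANS)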


For now, this works on a micro scale to help me solve my most immediate problems. The query has its limitations on my hardware, though: I cannot restrict the WHERE clause any further, nor can I return more than 20 rows at a time. Adjusting the fragmentation-percent and record_count thresholds trades query speed against the number of candidate indexes returned for defragmenting.


Up to this point, I had spent an entire day with no success. The breakthrough was finding a Microsoft Support article that acknowledges a bug in SQL Server that causes this problem with the BLOB data types text, ntext, and image. If you are running Microsoft SQL Server 2005, visit this URL for their article, which describes a solution that requires copying and deleting the problematic database tables: http://support.microsoft.com/default.aspx?scid=kb;en-us;324432


I soon realized that I could replace their complicated BLOB search by reindexing the database and searching for BLOBs at the same time. You will also notice that Microsoft is indirectly admitting that you do not need this BLOB data, since it is only associated with the nodes in the database. Database administrators who work with table structures exceeding 100 objects are not given an effective means of finding the offending BLOB data discussed in the article, either. As a result, I decided to run a series of queries that repair my database structure and then let the system purge any unnecessary data with the REPAIR_ALLOW_DATA_LOSS option. Please note that I ran this command only because I had exhausted every other way of removing the BLOB data. DBCC commands are very powerful and can remove critical business data that you may need.


Prerequisites:

After first running the statements below against the database with no effect, I applied the SP3 service pack to my SQL Server 2005 SP2 database engine.

The service pack, together with the system queries below that log BLOB candidates to a reporting table I created in the database, finally got me to a solution.
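
The Step 2 script below logs those BLOB candidates to a table it calls AABlobs. A minimal sketch of that helper table, assuming only the single column the script actually inserts into:

-- Reporting table used by the Step 2 script to record tables that contain BLOB data (index id 255)
CREATE TABLE AABlobs (BlobTable sysname NOT NULL);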


STEP 1:

Put your database in single user mode.
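
One way to script this step, rather than using the UI:

-- Force the database into single-user mode, rolling back any open connections
ALTER DATABASE YOURDBHERE SET SINGLE_USER WITH ROLLBACK IMMEDIATE;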


STEP 2:

DECLARE @indid int
DECLARE @TableName sysname

-- Outer cursor: walk every user table in the database
DECLARE cur_tblfetch CURSOR FOR
SELECT Table_Name FROM information_schema.tables WHERE table_type = 'base table'
OPEN cur_tblfetch
FETCH NEXT FROM cur_tblfetch INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Inner cursor: walk every keyed index on the current table
    DECLARE cur_indfetch CURSOR FOR
    SELECT indid FROM sysindexes WHERE id = OBJECT_ID (@TableName) and keycnt > 0
    OPEN cur_indfetch
    FETCH NEXT FROM cur_indfetch INTO @indid
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SELECT 'INDEXDEFRAG DBREINDEX of sysindexes.index_id = ' + convert(nvarchar(5), @indid) +
            ' for table: ' + rtrim(@TableName)
        -- index_id 255 marks text/ntext/image (BLOB) data, so defragment and rebuild everything else
        IF @indid <> 255 DBCC INDEXDEFRAG (0, @TableName, @indid)
        IF @indid <> 255 DBCC DBREINDEX (@TableName)
        -- Log tables that carry BLOB data into the AABlobs reporting table created earlier
        IF @indid = 255 INSERT INTO AABlobs (BlobTable) VALUES(@TableName);
        FETCH NEXT FROM cur_indfetch INTO @indid
    END
    CLOSE cur_indfetch
    DEALLOCATE cur_indfetch
    FETCH NEXT FROM cur_tblfetch INTO @TableName
END
CLOSE cur_tblfetch
DEALLOCATE cur_tblfetch


STEP 3:

use YOURDBHERE;
go
-- REPAIR_ALLOW_DATA_LOSS can discard data, so run this only after exhausting the other options and with a backup on hand
DBCC CHECKDB (YOURDBHERE, REPAIR_ALLOW_DATA_LOSS);
go


STEP 4:

use YOURDBHERE;
go
dbcc shrinkfile (YOURDBHERE);
go


STEP 5:

From SQL Server Management Studio, run the Shrink File task on the data file and choose the "Reorganize pages before releasing unused space" option. I recommend selecting a target size at least 500MB greater than the minimum.
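
If you would rather script this final shrink, the equivalent commands look roughly like this. The 15000 MB target is only an example (pick a value roughly 500MB above your minimum), and the logical data file name is assumed to match the database name. Remember to return the database to normal access when you are done:

use YOURDBHERE;
go
-- Shrink the data file to an example target of 15000 MB, reorganizing pages as it goes
DBCC SHRINKFILE (YOURDBHERE, 15000);
go
-- Take the database back out of single-user mode
ALTER DATABASE YOURDBHERE SET MULTI_USER;
go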


Conclusion

My database shrank from 48GB to 14.5GB using these five steps. I am providing this discussion to help other Dynamics AX 4.0 system administrators who are not getting proper answers to their problems. You can do this without the help of costly consultants who just call Microsoft Technical Support anyway.


I take no responsibility for the validity of my comments and solutions. Use the solutions in this article on your data at your own risk!

Wednesday, December 3, 2008

Creating a new Axapta 4.0 document in X++ code.

To satisfy the reporting style of a legacy system, I needed my new Dynamics AX 4.0 system to create a new document note that contains the customer's item code. Yes, Dynamics AX 4.0 does have External Item descriptions, and they are populated with the values I want to display, but to avoid rebuilding the layout of the reports I decided to use external document notes to display these values for the customer's benefit on their sales documentation. The external notes are functioning properly and appear beneath the respective sales lines in the reports.

Here is the method I found to create a new DocuRef:
public static void create(recId docuRefRecId, tableId refTableId, recId refRecId)

I found some help on MSDN, specifically a mapping of the three arguments needed for the create method in relation to the SalesTable:
DocuRef.RefCompanyId==SalesTable.dataAreaId
DocuRef.RefTableId==SalesTable.TableId
DocuRef.RefRecId==SalesTable.RecId
However, I am doing this for the SalesQuotationLine table. My first instinct was to try to use the create method. Yet experience kicks in, and you realize that declaring a variable of type DocuRef, setting the values of RefTableId and RefRecId, and then calling the insert() method on the new variable is the better course of action. I attached the following code to the modified() method of SalesQuotationLine_DS.ItemId.

DocuRef tblDocuRef;
;
super();

ttsbegin;
tblDocuRef.RefTableId   = salesQuotationLine.TableId;
tblDocuRef.RefRecId     = salesQuotationLine.RecId;
tblDocuRef.RefCompanyId = salesQuotationLine.dataAreaId;
tblDocuRef.TypeId       = "Notes";
tblDocuRef.Restriction  = DocuRestriction::External;
tblDocuRef.Name         = "CUSTOMER ITEM CODE";
tblDocuRef.Notes        = "CUST ITEM CODE: 00000034343";
tblDocuRef.insert();
ttscommit;

The code worked, but since the new record had not yet been committed to the database, my RefRecId had a value of 0. My solution then turned into a post-update: in other words, writing the DocuRef record after the SalesQuotationLine has been committed to the database. I placed my code in the write() method of the SalesQuotationLine data source, right after the salesQuotationLine and salesQuotationTable reinitialization calls (reread(), refresh()), and removed it from the modified() method. The following is the code that works flawlessly for me:

//[rg] 12/3/2008 first make sure the item has an external item id.
if (CustVendExternalItem::find(ModuleInventPurchSalesVendCustGroup::Cust, salesQuotationLine.ItemId, "AllBlank", salesQuotationTable.CustAccount).ExternalItemId)
{
    //[rg] 12/3/2008 now make sure we did not already write this record.
    if (DocuRef::find(salesQuotationLine.dataAreaId, salesQuotationLine.TableId, salesQuotationLine.RecId, salesQuotationLine.createdDate).Name)
    {
        tblDocuRef.RecId = 3; //[rg] remove this code.
    }
    else
    {
        ttsbegin;
        tblDocuRef.RefTableId   = salesQuotationLine.TableId;
        tblDocuRef.RefRecId     = salesQuotationLine.RecId;
        tblDocuRef.RefCompanyId = salesQuotationLine.dataAreaId;
        tblDocuRef.TypeId       = "Note";
        tblDocuRef.Restriction  = DocuRestriction::External;
        tblDocuRef.Name         = "CUSTOMER ITEM CODE";
        tblDocuRef.Notes        = "CUSTOMER ITEM CODE: " + CustVendExternalItem::find(ModuleInventPurchSalesVendCustGroup::Cust, salesQuotationLine.ItemId, "AllBlank", salesQuotationTable.CustAccount).ExternalItemId;
        tblDocuRef.insert();
        ttscommit;
    }
}

As you can see, I am pulling my note from the CustVendExternalItem.ExternalItemId field. I am happy with the results because I can now see the external note clearly on my Sales Confirmation report. I suggest using the Docu::copy method to transfer these notes to the sales order upon confirmation of the quotation.

Tuesday, December 2, 2008

How to transfer Sales Quotation Document to Sales Orders upon confirmation in AX 4.0

If you need sales quotation document notes to move automatically to the new sales order upon confirmation in AX 4.0, follow these steps:

1) Open the class: SalesQuotationEditLinesForm_Sales_Confir
2) In the createSalesLines method, paste this code above the second-to-last }:

Docu::copy(SalesQuotationLineUpdate,SalesLine);

3) In the createSalesTable method, paste this code above the second-to-last }:

Docu::copy(SalesQuotationTable,SalesTable);