Monday, 16 February 2009

Sizing a database server continued

Sizing of a database server from Perfmon Measurements

In an earlier blog entry I described the performance monitoring exercise that was used to determine the hardware specification of a database server. Here is a list of performance counters and their recommended limits:

Processor Counters :
% Processor Time: The percentage of elapsed time that the processor spends executing a non-idle thread. It should not exceed 50% long-term.

Memory Counters :
Available MBytes: The amount of physical memory, in megabytes, immediately available for allocation to a process or for system use.

Paging File % Usage: The percentage of the page file instance in use. Should not exceed 10%-20%.

Storage Counters :
Avg. Disk sec/Read. The average time, in seconds, of a read of data from the disk. Should not exceed 20 ms.

Avg. Disk sec/Write. The average time, in seconds, of a write of data to the disk. Should not exceed 20 ms.

Avg. Disk Queue Length. The average number of both read and write requests that were queued for the selected disk during the sample interval. Should not exceed 2 per spindle.

Disk Reads/sec. The rate of read operations on the disk.

Disk Writes/sec. The rate of write operations on the disk.

Disk Transfers/sec. Indicates the number of reads and writes completed per second, regardless of how much data they involve. Should not exceed 50 per spindle.

Current Disk Queue Length. The number of requests outstanding on the disk at the time the performance data is collected, including requests in service at the time of the collection. This is an instantaneous snapshot, not an average over the time interval. Multi-spindle disk devices can have multiple requests active at one time, while other concurrent requests await service. Requests experience delays proportional to the length of this queue minus the number of spindles on the disks. Should not exceed 2 per spindle.

Network Counters :
Output Queue Length. The length of the output packet queue, in packets. Should not exceed 2 packets.

System Counters :
System Uptime. The elapsed time, in seconds, that the computer has been running since it was last started.

Server Bytes Received/sec. The number of bytes the server has received from the network.

Server Bytes Transmitted/sec. The number of bytes the server has sent over the network.

Processes (Objects). The number of processes in the system at the time of data collection.

We plotted the performance counters and marked the recommended limits. From this we were able to identify a number of configuration issues, such as software that had been installed and consumed resources without doing anything useful, or antivirus software consuming CPU and memory.
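The limit check we did by eye on the plots can be sketched as a small script. The counter names and thresholds come from the list above; the sample values below are illustrative, and a real run would read them from a Perfmon CSV export.

```python
# Recommended long-term limits for the Perfmon counters listed above.
# Latencies are in seconds; queue-length limits are per spindle.
LIMITS = {
    "% Processor Time": 50.0,
    "Paging File % Usage": 20.0,
    "Avg. Disk sec/Read": 0.020,   # 20 ms
    "Avg. Disk sec/Write": 0.020,  # 20 ms
    "Avg. Disk Queue Length": 2.0, # per spindle
    "Output Queue Length": 2.0,
}

def flag_violations(samples, limits=LIMITS):
    """Return the counters whose average over the sample period
    exceeds the recommended limit, as {counter: (average, limit)}."""
    violations = {}
    for counter, values in samples.items():
        limit = limits.get(counter)
        if limit is not None and values:
            avg = sum(values) / len(values)
            if avg > limit:
                violations[counter] = (avg, limit)
    return violations

# Example: a 35 ms average read latency is flagged, 25% CPU is not.
report = flag_violations({
    "Avg. Disk sec/Read": [0.030, 0.040],
    "% Processor Time": [20.0, 30.0],
})
```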

The I/O queue grew over the allowed value; the read and write latencies also went over 20 ms. More spindles can help to shorten the queue and reduce latency.

To calculate the IOPS that the server can handle, we need to know the ratio of reads to writes on the disk. In our case, 20 percent of all IOPS were reads. With RAID 1, 20 percent reads and 14 disks, we get about 1167 IOPS. RAID 5 is much slower because its write penalty is double that of RAID 1.
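The arithmetic behind the 1167 IOPS figure can be sketched as follows. The 150 IOPS per 15K spindle and the RAID write penalties (2 back-end I/Os per write for RAID 1, 4 for RAID 5) are my assumptions, chosen because they reproduce the number in the text.

```python
def effective_iops(n_disks, read_fraction, write_penalty, iops_per_disk=150):
    """Host-visible IOPS for a RAID set: raw spindle IOPS divided by the
    weighted cost of reads (1 back-end I/O each) and writes
    (write_penalty back-end I/Os each)."""
    raw = n_disks * iops_per_disk
    write_fraction = 1.0 - read_fraction
    return raw / (read_fraction + write_fraction * write_penalty)

# 14 disks, 20% reads:
raid1 = effective_iops(14, 0.20, write_penalty=2)  # ~1167
raid5 = effective_iops(14, 0.20, write_penalty=4)  # noticeably lower
```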

The performance monitor does not give us enough information to determine the ratio in which we should split the hard disks, but we will go for 2 for the system, 4 for the TempDB, 4 for the MDF and 4 for the LDF. The hardware for this looks like:

1x Smart Array P800 with BBWC 512MB
16 x 72 GB 3G SAS 15K SFF DP ENT HDD

Part of the performance monitoring covered the database backup. On one of our servers we were backing up onto a NetApp share. It took 11 hours to complete this backup. From the performance counters we could determine this was due to slow I/O to the NetApp share. We were also able to determine that the performance of the NetApp share was not smooth and dropped off drastically in the middle of the night, probably while the NetApp was transferring or rearranging data internally. The conclusion was that the performance of the NetApp is not sufficient for our database and that we would need to add some hard disks to our local EVA to make the backup window smaller, then copy the backup files off at a later time. This would mean a different Smart Array controller and an additional box to house the 2 additional hard disks.

I saw a study on the performance of NetApp under a simulated load. The study showed that the I/O performance of the NetApp share deteriorates over time: to start with, the NetApp is faster than the local storage, but within a few days it becomes 3.5 times slower. I think the reason for this is defragmentation tasks that happen in the background. The same study showed drops in performance resulting from reallocation of data within the device.

To determine the amount of memory, look at the Paging File % Usage counter, which should not exceed 10%-20%. In our case this brings the memory requirement to 8 GB. Since this is more than 4 GB, we would require a 64-bit operating system.

Finally, the CPU requirement can be determined from % Processor Time, which should not exceed 50% over an extended period of time. CPUs are cheap at the moment, so we will go for 2 quad-core CPUs.

So the final specification looks something like:

64 bit SQL 2008
8GB RAM
2 Quad core CPUs
Smart array with 16 harddisks

Today this will cost around 14k CHF.

Wednesday, 11 February 2009

Improvements to WPF data binding

I am working on a WPF application to make loss estimates of live catastrophic events. Part of this application is a wizard that guides the user through the steps of the process. Last year I made a WPF application based on an article I wrote on CodeProject: http://www.codeproject.com/KB/WPF/nTierWithLinqAndWPF.aspx. We reviewed how this works in the context of the wizard control and found some simplifications that mean we no longer need a Singleton and can reduce the use of dependency properties.

The first simplification is that instead of letting WPF instantiate the data provider on its own, we do it explicitly as shown below:

public partial class WinLiveCatWizard : Window
{
    private WinLiveCatWizardDataProvider DataProvider;

    // ….

    public WinLiveCatWizard(String CatFocusSQLServer, int USER_ID)
    {
        // Create the data provider ourselves instead of letting WPF do it.
        DataProvider = new WinLiveCatWizardDataProvider(CatFocusSQLServer, USER_ID);

        InitializeComponent();
    }
}

This eliminates the need for the Singleton and controls the instantiation of the data provider, thus avoiding two instances being created when a user control is bound to the same data provider.

The next simplification is not to use dependency properties everywhere. With this approach, the data provider for a parent-child combo box looks like:

WinLiveCatWizardDataProvider.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Windows;
using System.Windows.Media;

using PartnerRe.DOM.CatFocusClientFacade;
using PartnerRe.UTILS.LinqValueObjects.CatFocusDB;
using PartnerRe.UTILS.ValueObjects.CatFocusDB;

namespace PartnerRe.WPF.LiveCat
{
    public class WinLiveCatWizardDataProvider : INotifyPropertyChanged
    {
        private PERILItem mySelectedPerilID;
        private String _CatFocusSQLServer;
        private int _USER_ID;

        public bool IsInDesignMode
        {
            get
            {
                return DesignerProperties.GetIsInDesignMode(new DependencyObject());
            }
        }

        public bool IsInRuntimeMode
        {
            get { return !IsInDesignMode; }
        }

        public bool IsDebug
        {
            get
            {
#if DEBUG
                return true;
#else
                return false;
#endif
            }
        }

#if DEBUG
        private bool _isInTestMode = false;
        public bool IsInTestMode
        {
            get
            {
                return _isInTestMode;
            }
            set
            {
                if (_isInTestMode == value)
                {
                    return;
                }
                _isInTestMode = value;
                MakeData();
            }
        }
#endif

        #region INotifyPropertyChanged Members

        public event PropertyChangedEventHandler PropertyChanged;

        protected void OnPropertyChanged(string propertyName)
        {
            if (PropertyChanged != null)
            {
                PropertyChangedEventArgs args = new PropertyChangedEventArgs(propertyName);
                PropertyChanged(this, args);
            }
        }

        #endregion

        public PERILItem MySelectedPerilID
        {
            get
            {
                return mySelectedPerilID;
            }
            set
            {
                if (value != mySelectedPerilID && value != null)
                {
                    // Repopulate the child (region) collection for the newly selected peril.
                    MyREGIONCollection.Clear();
                    if (value.PERIL_ID == 1)
                    {
                        MyREGIONCollection.Add(new REGIONItem(1, "Region1"));
                        MyREGIONCollection.Add(new REGIONItem(2, "Region2"));
                    }
                    else if (value.PERIL_ID == 2)
                    {
                        MyREGIONCollection.Add(new REGIONItem(3, "Region3"));
                        MyREGIONCollection.Add(new REGIONItem(4, "Region4"));
                    }
                }
                mySelectedPerilID = value;
                OnPropertyChanged("MySelectedPerilID");
            }
        }

        private ObservableCollection<PERILItem> _myPERILCollection = new ObservableCollection<PERILItem>();
        public ObservableCollection<PERILItem> MyPERILCollection
        {
            get
            {
                return _myPERILCollection;
            }
        }

        public REGIONItem MySelectedREGIONID { get; set; }

        private ObservableCollection<REGIONItem> _myREGIONCollection = new ObservableCollection<REGIONItem>();
        public ObservableCollection<REGIONItem> MyREGIONCollection
        {
            get
            {
                return _myREGIONCollection;
            }
        }

        private void MakeData()
        {
            if (_myPERILCollection.Count > 0)
            {
                _myPERILCollection.Clear();
            }

#if DEBUG
            if (IsInDesignMode || IsInTestMode)
            {
                // TEST DATA FOR UI DESIGN
                int PERIL_ID = 1;
                String PERIL = "PERIL1";

                _myPERILCollection.Add(new PERILItem(PERIL_ID, PERIL));
                PERIL_ID = 2;
                PERIL = "PERIL2";
                _myPERILCollection.Add(new PERILItem(PERIL_ID, PERIL));
            }
            else
#endif
            {
                // LIVE DATA
                int PERIL_ID = 1;
                String PERIL = "PERIL1";

                _myPERILCollection.Add(new PERILItem(PERIL_ID, PERIL));
                PERIL_ID = 2;
                PERIL = "PERIL2";
                _myPERILCollection.Add(new PERILItem(PERIL_ID, PERIL));
            }
        }

        public WinLiveCatWizardDataProvider(String CatFocusSQLServer, int USER_ID)
        {
            _USER_ID = USER_ID;
            _CatFocusSQLServer = CatFocusSQLServer;
            MakeData();
        }
    }
}

The data provider resource in the XAML (<local:WinLiveCatWizardDataProvider x:Key="MyProvider" />) is no longer required.

The user control no longer has a dependency property for the data provider. In the XAML it looks like:

<local:CntLiveCatWizardStep1ChooseModel
    x:Name="CntLiveCatWizardStep1ChooseModel1"
    BtnBackClick="CntLiveCatWizardStep1ChooseModel1_BtnBackClick" />

The data context of the control is set as follows:

private void Window_Loaded(object sender, RoutedEventArgs e)
{
    this.DataContext = DataProvider;
    CntLiveCatWizardStep1ChooseModel1.DataContext = DataProvider;
}

Then in the control we can bind directly to the data provider in the XAML:

<ComboBox Name="cmbPeril"
    ItemsSource="{Binding MyPERILCollection}"
    DisplayMemberPath="PERIL"
    SelectedValue="{Binding MySelectedPerilID}"
    Height="23" Width="120" />

In this way, the only WPF plumbing the control still needs is the routed events that surface the button clicks, e.g.:

public static readonly RoutedEvent BtnNextClickEvent =
    EventManager.RegisterRoutedEvent("BtnNextClick", RoutingStrategy.Bubble,
        typeof(RoutedEventHandler), typeof(CntLiveCatWizardStep1ChooseModel));

public event RoutedEventHandler BtnNextClick
{
    add { this.AddHandler(BtnNextClickEvent, value); }
    remove { this.RemoveHandler(BtnNextClickEvent, value); }
}

private void BtnNext_Click(object sender, RoutedEventArgs e)
{
    RoutedEventArgs args = new RoutedEventArgs(BtnNextClickEvent);
    RaiseEvent(args);
}

Thursday, 5 February 2009

Restoring differential backups in SQL 2008

I have a backup procedure that involves making a full backup over the weekend and then making differential backups during the week. The reason for this is that the full backup takes 6 hours to complete, while a differential backup takes only a few minutes.

I can restore the full backup via Microsoft SQL Server Management Studio or via the following script:

RESTORE DATABASE [WS_MODELL]
FROM DISK = N'C:\PRECCP\Database\WS_MODELL.BAK'
WITH FILE = 1, NOUNLOAD, REPLACE, STATS = 10
GO

When I try to restore the differential backup via Management Studio or via the following script, I get this error:

RESTORE DATABASE [WS_MODELL] FROM DISK = N'C:\PRECCP\Database\WS_MODELL_D1.BAK' WITH FILE = 1, NOUNLOAD, STATS = 10
GO

System.Data.SqlClient.SqlError: The log or differential backup cannot be restored because no files are ready to rollforward. (Microsoft.SqlServer.Smo)

After searching the internet I found
http://blog.sqlauthority.com/2007/09/02/sql-server-fix-error-msg-3117-level-16-state-4-the-log-or-differential-backup-cannot-be-restored-because-no-files-are-ready-to-rollforward/

"This error happens when Full back up is not restored before attempting to restore differential backup or full backup is restored with WITH RECOVERY option. Make sure database is not in operational conditional when differential backup is attempted to be restored."

An example of restoring a differential backup successfully after restoring the full backup:

RESTORE DATABASE AdventureWorks
FROM DISK = 'C:\AdventureWorksFull.bak'
WITH NORECOVERY;
RESTORE DATABASE AdventureWorks
FROM DISK = 'C:\AdventureWorksDiff.bak'
WITH RECOVERY;
Reference : Pinal Dave (http://www.SQLAuthority.com)

In my case this meant:

RESTORE DATABASE [WS_MODELL]
FROM DISK = 'C:\PRECCP\Database\WS_MODELL.BAK'
WITH NORECOVERY;
RESTORE DATABASE [WS_MODELL]
FROM DISK = 'C:\PRECCP\Database\WS_MODELL_D1.BAK'
WITH RECOVERY;

Wednesday, 4 February 2009

Click Once Deployment Problems

I had an interesting problem using ClickOnce deployment. When the client started the application, the hourglass would appear and then disappear without the application starting. The CPU of the PC then went to 100% for some time. Looking at the running processes in Task Manager, there was a process called DW20.exe consuming the CPU.

Checking the event log first, I found entries like:
EventType clr20r3, P1 preccpclient.exe, P2 5.0.1.0, P3 496c6e68, P4 preccpclient, P5 5.0.1.0, P6 496c6e68, P7 3, P8 5c, P9 system.typeloadexception, P10 NIL.

After hunting around the internet I found that DW20.exe is a Microsoft error reporting tool that reports application crashes. To get a popup window with better error reporting, go to the Control Panel and open System. Under the Advanced tab, click the button called "Error Reporting" and enable error reporting.

Then, when the application is started, you get the option to debug in Visual Studio. There we get an error looking like:

System.TypeLoadException was unhandled
Message="Could not load type 'TableStructure' from assembly 'PartnerRe.UTILS.ValueObjects.CatFocusDB, Version=1.0.0.0, Culture=neutral, PublicKeyToken=67a0da06f9245dfe'."
Source="PartnerRe.DOM.CatFocusClientFacade"
TypeName="TableStructure"
StackTrace:
at PartnerRe.DOM.CatFocusClientFacade.CatFocusClientFacade.SYS_NO_SPLASH_SCREENFindRecord(String CLIENT_NAME, String& Message)
at PRECCPClient.My.MyApplication.OnInitialize(ReadOnlyCollection`1 commandLineArgs)
at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.DoApplicationModel()
at Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.Run(String[] commandLine)
at PRECCPClient.My.MyApplication.Main(String[] Args)
InnerException:


In plain English this means that the application could not find the class TableStructure in one of the deployed assemblies.

Uninstalling and redeploying the application did not solve the problem. So I uninstalled the application and browsed to the location where the ClickOnce assemblies were being installed. This is quite a cryptic location; in my case it was:
C:\Documents and Settings\nfindlater\Local Settings\Apps\2.0\DT2WB4H4.A65\DKNCBQ6H.MDK

I then deleted all subdirectories and reinstalled the application. This solved the problem. It turns out that one of the assemblies deployed with the ClickOnce application was not being updated. At some point, additional functionality in the application required a class called TableStructure. Since the DLL containing this class was not being correctly updated, the application failed.