Wednesday, December 15, 2010

BOMGR 0220 errors on Business Objects server

The lads in SKSWood Brunei changed the service account running Business Objects this week, which caused the server to throw BOMGR 0220 errors. Below is the solution…

Steps for solution:

1. Stop the WebI service.

2. Log into the Business Objects Configuration Tool. Start\Programs\Business Objects\Configuration Tool 6.5

3. On the Cluster Manager page, go to the Service Parameters branch. At the bottom of the screen there is a drop-down; select 'Update Service Parameters'. Edit the username and password, then hit Next or Finish.

4. Refresh the ORB. On the Cluster Manager page, select ORB, then select 'Define ORB' from the drop-down. Hit the 'Test Ports' button and, once done, hit Next.

5. Hit Finish and close the Configuration Tool.

6. Change the settings for the Distributed COM Configuration Properties as follows:

6.1 Start\Run\type 'dcomcnfg'

6.2 In the Applications Tab, you should see at least BusinessObjects.document

6.3 Click on BusinessObjects.document and select 'Default Security' tab. Ensure that Edit Default for Access Permissions and Launch Permissions includes Interactive, System and the service account you are using.

6.4 Go back to the Applications Tab, select BusinessObjects.document and hit properties.

6.5 Select the Identity Tab. Select 'This User'.

6.6 Type in the new account and password.

6.7 Once you have changed the settings you may need to reboot your server.

Monday, December 6, 2010

Mocking a View for Unit Testing

Had a very interesting question yesterday asking if it was possible to mock up a database view alongside the in-memory SQLite database we use for ActiveRecord.  Initially I figured it was a simple matter of finding some form of attribute within the ActiveRecord class definition, but it turns out that this is not the case.  Views are not actually supported by the CreateSchema() method we normally use with NHibernate/ActiveRecord.

As a workaround you can create the view using the existing connection. For example, I want to create a view called “mockview” based on the applications table.  All we need is a method:

private void CreateMockView()
{
    using (IDbCommand command = service.GetConnection().CreateCommand())
    {
        command.CommandText = "Create view mockview as select applicationid from applications";
        command.ExecuteNonQuery();
    }
}

If there is already an ActiveRecord class definition for this, you need to decorate the class with the Schema="none" parameter, which will prevent the creation of a table with the same name.

[Serializable, ActiveRecord("mockview", DynamicUpdate = true, Lazy = false, Schema = "none")]
public partial class ApplicationsDAO : ActiveRecordBase
{
    ………………………….
}

Now you can query this to your heart's content.
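The same trick is easy to sketch outside .NET; here it is with Python's sqlite3 module against an in-memory database (table and column names mirror the example above; this is an illustration, not the post's actual test fixture):

```python
import sqlite3

# In-memory database, same idea as the SQLite DB used for unit tests.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE applications (applicationid INTEGER)")
conn.execute("INSERT INTO applications VALUES (1)")

# Schema generation won't emit views, so issue the DDL over the open connection.
conn.execute("CREATE VIEW mockview AS SELECT applicationid FROM applications")

# The view is now queryable like any table.
rows = conn.execute("SELECT applicationid FROM mockview").fetchall()
print(rows)  # [(1,)]
```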

If we want to query the view but there is no ActiveRecord table definition, we can just use a normal data reader, for example:

[Test]
public void GetApplications_Test()
{
    IList<Application> results = Application.FindAll();      // the AR Class
    Assert.IsTrue(results.Count > 0, "Returned values");
    _log.InfoFormat("GetAll Complete returned {0} records", results.Count);

    using (IDbCommand command = service.GetConnection().CreateCommand())
    {
        command.CommandText = "select applicationid from applications";  // the View on the database
        IDataReader reader = command.ExecuteReader();
        Assert.IsTrue(reader.Read(), "there should be records in the view!");
        Assert.AreEqual("1", reader["applicationid"].ToString());
    }
}

Wednesday, November 24, 2010

Authentication against Active Directory and ADAM

Today we were doing some work with authentication to see if we could improve the way it’s done on the external environments: the Corp Website, Xtranet and the CEBs.  The plan was to use ADAM (Active Directory Application Mode), but this would mean losing a lot of the nice features that come out of the box with Active Directory.

To test both options I created a simple WinForms app that exercises each of them.

Using Active Directory

This is very well supported in the .NET Framework; you can use the built-in namespaces:
System.DirectoryServices.AccountManagement;
System.DirectoryServices;

Here is the code;

try
{
    // create a "principal context" - e.g. the domain (can also be a machine)
    using (PrincipalContext pc = new PrincipalContext(ContextType.Domain, txtDomain.Text))
    {
        // validate the credentials
        if (pc.ValidateCredentials(txtUsername.Text, txtPassword.Text))
            lblStatus.Text = "Login successful!";
        else
            lblStatus.Text = "Login unsuccessful!";
    }
}
catch (Exception ex)
{
    lblStatus.Text = ex.Message;
}

The PrincipalContext connects you to the domain, while ValidateCredentials returns true for a valid name/password pair and false otherwise.

Using ADAM

This is not as well supported, but it is there if needed.

try
{
     using (DirectoryEntry entry = new DirectoryEntry(txtPath.Text, txtUsername.Text, txtPassword.Text))
     {
         try
         {
             if (entry.Guid != null)   // reading a property forces a bind, which throws if the credentials are bad
                 lblStatus.Text = "Login successful!";
             else
                 lblStatus.Text = "Login unsuccessful!";
         }
         catch (NullReferenceException ex)
         {
             lblStatus.Text = ex.Message;
         }
     }
}
catch (Exception ex)
{
     lblStatus.Text = ex.Message;
}

Here we create a DirectoryEntry and connect to it using an LDAP path; that is, we tell the application where to find the user's information.  In my form I used: LDAP://localhost:389/cn=Groups,cn=XXX,cn=YYY,dc=ZZZ

The important thing here is that the address is entered in reverse order.  You enter the container for the user first, then the container in which that one is located, and so on up to the top.
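As a sketch of that reverse ordering, here is a small hypothetical helper (Python, names invented) that assembles a path like the one above from its parts:

```python
# Hypothetical helper illustrating how an ADAM LDAP path is assembled:
# the user's own container comes first, then each enclosing container,
# finishing with the domain components.
def build_ldap_path(host, port, containers, domain_components):
    # containers: innermost first, e.g. ["Groups", "XXX", "YYY"]
    rdns = [f"cn={c}" for c in containers] + [f"dc={d}" for d in domain_components]
    return f"LDAP://{host}:{port}/" + ",".join(rdns)

path = build_ldap_path("localhost", 389, ["Groups", "XXX", "YYY"], ["ZZZ"])
print(path)  # LDAP://localhost:389/cn=Groups,cn=XXX,cn=YYY,dc=ZZZ
```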

Someone might find it useful, but I’m happy to stick with Active Directory.

Friday, November 19, 2010

Setting up Email reporting using Business Objects

Today I had to configure Business Objects to send reports via email. Here are the steps needed to complete the task.

Configuring the server
  1. Log on to the Central Management Console as an Administrator.

  2. From the drop-down list select "Servers" and on the left select "Servers List".

  3. The server that runs the reports is called "<servername>.AdaptiveJobServer". Double-click this to bring up the configuration settings and select Destination from the options on the left.

  4. From the Destination drop-down select "Email" from the list and click "Add".

  5. Enter the following details and then click "Save & Close":

    1. Domain Name: POG
    2. Host: <whatever>
    3. Port: 25
    4. Authentication: None
    5. Tick "Deliver Document(s) as Attachments"
    6. Tick "Use Automatically Generated Name"
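If you want to sanity-check those SMTP details before saving them, the equivalent settings in a quick Python script look like this (sender and recipient addresses are placeholders, and the actual send is left commented out):

```python
from email.message import EmailMessage

# Build a test message mirroring the job server's destination settings.
msg = EmailMessage()
msg["From"] = "reports@pog.example"   # placeholder sender on the POG domain
msg["To"] = "someone@pog.example"     # placeholder recipient
msg["Subject"] = "Business Objects test report"
msg.set_content("If you can read this, the email destination works.")

# Authentication is "None", so sending is just a plain, unauthenticated
# connection to the host on port 25 (uncomment to actually send):
# import smtplib
# with smtplib.SMTP("<whatever>", 25) as smtp:
#     smtp.send_message(msg)

print(msg["Subject"])
```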
Scheduling a report to be sent via email
  1. From the Central Management Console select Folders.
  2. Using the folder list, navigate down until you find the report you wish to schedule. When you find it, double-click and select "Schedule" from the left-hand menu. This menu allows you to set up all the options you need, e.g. running hourly or weekly, who gets the email, output format etc.
  3. Click "Schedule".
  4. On the History list you will be able to see whether the report was successfully generated and sent.

Wednesday, November 17, 2010

Running SQL scripts in PowerShell

I had a requirement to create a PowerShell script that would query a SQL database.  It turned out to be very easy indeed…

# Create SqlConnection object, define connection string, and open connection
$con = New-Object System.Data.SqlClient.SqlConnection
$con.ConnectionString = "Server=Livesqlserver; Database=WebCDB; Integrated Security=true"
$con.Open()

First create the connection…

$cmdSelect = "SELECT DATEDIFF(day, update_date, getdate()) as datedifference, DATENAME(dw, update_date) as theday , count(*)as TotalMails FROM  mail_tbl where sent=1 and mail_type='XTRANET' and  DATEDIFF(day, update_date, getdate()) < 7 group by DATEDIFF(day, update_date, getdate()), DATENAME(dw, update_date) order by datedifference desc"
$da = New-Object System.Data.SqlClient.SqlDataAdapter($cmdSelect, $con)

Create the SQL you want to run…

$dt = New-Object System.Data.DataTable
$da.Fill($dt) | Out-Null

Fill a DataTable with the results…

Foreach ($row in $dt.rows)
{  Write-Host $row.theday $row.TotalMails  }

Print out the results… easy
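The same connect/query/iterate pattern, sketched in Python with an in-memory SQLite database so it runs anywhere (the real script targets SQL Server via System.Data.SqlClient; the table and values here are invented):

```python
import sqlite3

# Open a connection (equivalent of SqlConnection + Open).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mail_tbl (theday TEXT, TotalMails INTEGER)")
con.execute("INSERT INTO mail_tbl VALUES ('Monday', 42)")

# Run the query and fill a result set (equivalent of the SqlDataAdapter/Fill step).
rows = con.execute("SELECT theday, TotalMails FROM mail_tbl").fetchall()

# Walk the rows (equivalent of the foreach over $dt.rows).
for theday, total in rows:
    print(theday, total)
```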

Wednesday, November 3, 2010

Workflow work-around made easy

I had a typical user request today, you know the type of thing: “We have a rule that says ‘Reporting is always to Managers’, but we’ve these two people who are ‘Managers’ yet report to other ‘Managers’”.  “Simple”, I said, “Just don’t call them Managers”.. “Ahh yeah.. but we can’t do that!” was the reply. You know yourself, the usual things we get thrown every now and again.  So I figured I could do something a little bit better than just the usual big “If” statement, so I’ve implemented the Strategy Pattern.

Before the good stuff

The code used to look like this: a big “if” statement that made a recursive call to itself.  If the person was a Manager, Divisional Manager, Director or CEO then that was fine; if not, try the next person up.

/// <summary>
/// Finds the next in line by post ID.
/// </summary>
/// <param name="Id">The id.</param>
/// <returns></returns>
public ReportingStructure FindNextInLineByPostID(string Id)
{
    IList<ReportingStructure> records = ReportingStructure.FindAll(Expression.Eq("PostId", Id));
    if (records.Count > 0)
    {   // OK now we have to check if this is a department or overseas Manager by the title
        if (records[0].JobTitleDescription.Contains("Department Manager") ||
            records[0].JobTitleDescription.Contains("Overseas Manager") ||
            records[0].JobTitleDescription.Contains("Divisional") ||
            records[0].JobTitleDescription.Contains("Director") ||
            records[0].JobTitleDescription.Contains("Chief Executive Officer")
            )
            return records[0];
        // not an exec so move up the line, recursing on the manager's post
        return FindNextInLineByPostID(records[0].ReportsToPostId);
    }
    return null;
}

So I figured, hey, I’ll just hard-code in the two exceptions; but then it occurred to me that this could (and probably would) change over time.  More exceptions would be added, more code, and next year it might all change again.

Getting down with Strategy

This is really easy once you get your head around it.  First we create an abstract class with an abstract method; this is the blueprint for the class we’ll use for our reporting strategy.

abstract class ReportingStrategy
{
    public abstract ReportingStructure NextInLine(string Id);
}

Next we create a concrete class that contains the code we want to implement.

internal class Reporting2010 : ReportingStrategy
{
    public override ReportingStructure NextInLine(string Id)
    {
        IList<ReportingStructure> records = ReportingStructure.FindAll(Expression.Eq("PostId", Id));
        if (records.Count > 0)
        {   // OK now we have to check if this is a department Manager or overseas
            if (records[0].JobTitleDescription.Contains("Department Manager") ||
                records[0].JobTitleDescription.Contains("Overseas Manager") ||
                records[0].JobTitleDescription.Contains("Divisional") ||
                records[0].JobTitleDescription.Contains("Director") ||
                records[0].JobTitleDescription.Contains("Chief Executive Officer")
                )
                return records[0];
            // not an exec so move up the line
            return NextInLine(records[0].ReportsToPostId);
        }
        return null;
    }
}

As you can see this is the same workflow logic as before, but now it’s out on its own in a dedicated class.

Finally we update the context class to make use of this new object. I’ve clipped out all the other code to make it easier to read.

public class ReportingStructure
{
    #region Private Members
    ……….

    private ReportingStrategy _reportingStrategy = new Reporting2010();
    #endregion

    public ReportingStructure FindNextInLineByPostID(string Id)
    {
        return _reportingStrategy.NextInLine(Id);
    }
}

So what does that give us?

Well, now if I want to make a change to the 2010 workflow I can simply update one small, specific class, which can also be used in other places if needed.  More importantly, I can create any number of new workflows and rules and just swap out the private member to point to the right one.  I can even make it public, or build it into the constructor to allow dependency injection.

e.g.   ReportingStructure reporting = new ReportingStructure(new Reporting2011());

or     ReportingStructure reporting = new ReportingStructure();
       reporting.Workflow = new ReportingWhatEverIWant();

Monday, November 1, 2010

Charting in ASP.NET and Visual Studio 2008

I was doing some VS2010 migration research over the weekend and found that the charting options available are really good; we could finally rid ourselves of our old Dundas charting software.  My heart sank when I came back to the other problems in moving from VS2008 to VS2010, but after a bit of searching I found that all the charting options are backward compatible.

Things to install

You need to install two items onto your development PC (the Microsoft Chart Controls for .NET Framework 3.5 and the matching Visual Studio 2008 add-on); both are very simple self-extractors.  It would be best to close Visual Studio before doing this, but I don’t think it makes a lot of difference.

Following the installation you should now have a new Chart option available in your Data tool bar.


Using the new graph facilities

To create a simple graph I just created a new ASP.NET application with a single ASPX page called Default. Dragging a chart from the toolbar onto the design surface and switching to the source view gives you the code below.

<asp:Chart ID="Chart1" runat="server">
    <Series>
        <asp:Series Name="Series1">
        </asp:Series>
    </Series>
    <ChartAreas>
        <asp:ChartArea Name="ChartArea1">
        </asp:ChartArea>
    </ChartAreas>
</asp:Chart>

The first thing we need to do is add some values.  I could do this in code-behind, but for this demo I’ll just use the page markup.

<asp:Series Name="Column" BorderColor="180, 26, 59, 105" YValuesPerPoint="2">
    <points>
        <asp:DataPoint YValues="45,0" AxisLabel="Jan" />
        <asp:DataPoint YValues="34,0" AxisLabel="Feb" />
        <asp:DataPoint YValues="67,0" AxisLabel="Mar" />
        <asp:DataPoint YValues="31,0" AxisLabel="Apr" />
        <asp:DataPoint YValues="27,0" AxisLabel="May" />
        <asp:DataPoint YValues="87,0" AxisLabel="Jun" />
        <asp:DataPoint YValues="45,0" AxisLabel="Jul" />
        <asp:DataPoint YValues="32,0" AxisLabel="Aug" />
    </points>
</asp:Series>

Here I’ve added 8 random numbers and given each a month label.  Next we add in the chart area code.

<asp:ChartArea Name="ChartArea1" BorderColor="64, 64, 64, 64" BorderDashStyle="Solid" BackSecondaryColor="White" BackColor="64, 165, 191, 228" ShadowColor="Transparent" BackGradientStyle="TopBottom">
    <area3dstyle Rotation="10" perspective="10" Inclination="15" IsRightAngleAxes="False" wallwidth="0" IsClustered="False"></area3dstyle>
    <axisy linecolor="64, 64, 64, 64">
        <labelstyle font="Trebuchet MS, 8.25pt, style=Bold" />
        <majorgrid linecolor="64, 64, 64, 64" />
    </axisy>
    <axisx linecolor="64, 64, 64, 64">
        <labelstyle font="Trebuchet MS, 8.25pt, style=Bold" />
        <majorgrid linecolor="64, 64, 64, 64" />
    </axisx>
</asp:ChartArea>

Doing a preview will give you the following error: “Error executing child request for ChartImg.axd”


So what went wrong?  Well, as all the charts are generated on the fly, we need to make a few changes in the Web.config.  Add the following lines and all should be well:

Within <system.web><httpHandlers>, add the following: <add path="ChartImg.axd" verb="GET,HEAD" type="System.Web.UI.DataVisualization.Charting.ChartHttpHandler, System.Web.DataVisualization, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" validate="false" />

Within <system.webServer><handlers>, add the following:

<add name="ChartImageHandler" preCondition="integratedMode" verb="GET,HEAD,POST" path="ChartImg.axd" type="System.Web.UI.DataVisualization.Charting.ChartHttpHandler, System.Web.DataVisualization, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />

Now when we run the programme, the chart renders correctly.

Putting a bit more flash on our creation

OK, that graph looks a bit dull, so we add a legend and border by adding the following code to the page:

<legends>
    <asp:Legend IsTextAutoFit="False" Name="Default" BackColor="Transparent" Font="Trebuchet MS, 8.25pt, style=Bold"></asp:Legend>
</legends>
<borderskin skinstyle="Emboss"></borderskin>

Then we change the type to “Area” by modifying the series properties;

<asp:Series Name="Column" BorderColor="180, 26, 59, 105" YValuesPerPoint="2" ChartType="Area">

Now the chart is drawn as an area chart, complete with the legend and embossed border.

It’s simple, easy to use and very powerful, and it should be compatible with SharePoint.  It may not be Silverlight, but it works…

Friday, October 29, 2010

Getting Jiggy with Ajax and fields

I was asked the other day to do something that I’ve done a million times before: link a drop-down field to a radio button so that, depending on what is selected, the list of options reduces. I decided to use Ajax and jQuery to do all the hard work, mainly because it’s really good at what it does.

The first requirement was to have a number of radio buttons on screen, each showing the name of a business unit. These were very simple at first, not even taken from the database. So I created a very simple bit of HTML…

<html xmlns="http://www.w3.org/1999/xhtml" >
<head>
<title>Ajax Demo</title>
</head>
<body>
Corporate <input type="radio" id="radio1" name="BU" value="Corporate" /><br />
Food <input type="radio" id="radio2" name="BU" value="Food" /><br />
Investment <input type="radio" id="radio3" name="BU" value="Investment" /><br />
<select id="department">
<option value="0">-- No business unit selected --</option>
</select>
</body>
</html>

Nothing crazy here, just your basic setup: three radio buttons and a select box with no values.

Adding the Ajax bit

Now we need to include jQuery on the page, and we do this by referencing the latest script file. You can get this from the internet via a content delivery network, or just download it; I’ve taken a local copy as it’s quicker.

<script type="text/javascript" src="template/scripts/jquery-1.4.2.min.js"></script>

The line above adds the jQuery script to the page you are working with.

Below is the script to do all the work; the interesting bits are explained afterwards.

<script type="text/javascript" language="javascript">
    $(document).ready(function() {
        $("input[name*='BU']").click(function() {
            callAjax($(this).val());
        });
    });

    function callAjax(val) {
        var servletUrl = 'http://<server>/OrganisationChartDataService.svc/Departments/' + val + '/JSon';

        $.getJSON(servletUrl, function(options) {
            var department = $('#department');
            $('>option', department).remove(); // Clean out the old options first.
            $.each(options, function(index) {
                department.append($('<option/>').val(options[index].Id).text(options[index].Name));
            });
        });
    }
</script>

The first section binds a “click” function to any element on the page whose name contains “BU”. This is handy because we don’t have to write a separate function for each element, and if we add more radio buttons we don’t have to do anything; it will just work.

The second part is the call to a WCF web service that takes the name of a business unit as part of the URL; what’s returned can then be added to the select list.
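For reference, the handler assumes the service returns a JSON array of objects with Id and Name fields; here is a tiny Python sketch of that shape (the values are invented):

```python
import json

# The kind of payload the WCF service is assumed to return.
payload = json.dumps([{"Id": 1, "Name": "HR"}, {"Id": 2, "Name": "Payroll"}])

# What the jQuery $.each loop does with it, in miniature:
options = [(o["Id"], o["Name"]) for o in json.loads(payload)]
print(options)  # [(1, 'HR'), (2, 'Payroll')]
```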

The result is fast, simple and very powerful: clicking a radio button repopulates the select list with the departments for that business unit.

Wednesday, October 20, 2010

Old school CSV without creating a temp file

I had a request today to export a report out to Excel, which is something I’ve done a bunch of times before, but always by producing a temporary file.  I figured I’d try something different: find a way to give the same functionality without having to worry about permissions on the file system, or deleting the files afterwards.

It turned out to be very easy… write the CSV straight to the response stream with an Excel content type, so the browser does all the hard work.

On the ASPX page…

On the click event for the export button we turn off the usual ASP.NET view state, clear the response and set a content type of "application/vnd.ms-excel"; this tells IE to start Excel regardless of the details sent.  Another thing to note is the Response.End call; you’ll need this to prevent ASP.NET sending the page refresh information along with your data.

protected void ExportButton_Click(object sender, EventArgs e)
{
    this.EnableViewState = false;
    Response.Clear();
    Response.Buffer = true;
    Response.ContentType = "application/vnd.ms-excel";
    Response.AddHeader("Content-Disposition", "inline;filename=TeamExport.csv");
    team.ExportToExcel(Response.Output);
    Response.Charset = "";
    Response.End();
}

On the business object it’s a simple matter of building up an array of results, making sure to wrap each field in double quotes, then writing each row to the response stream…

/// <summary>
/// Export Global Team lists to Excel
/// </summary>
/// <param name="httpStream">The HTTP stream.</param>
public void ExportToExcel(TextWriter httpStream)
{
     // find all associated applications
     IList<Application> applications =  (from application in Application.FindAllByProperty("GlobalTeam", Id)
             select application).ToList<Application>();

     foreach(Application app in applications)
     { 
         string[] dataArr = new string[]
             {
                 WriteableValue(app.Id),
                 WriteableValue(app.GlobalTeamName),
                 WriteableValue(app.Title),
                 WriteableValue(app.SupportManager),
                 WriteableValue(app.ProjectManager),
                 WriteableValue(app.StartDate.ToShortDateString()),
                 WriteableValue(app.EndDate.ToShortDateString())
             };
         httpStream.WriteLine(string.Join(",", dataArr));
     }
}

public static string WriteableValue(object o)
{
    if (o == null)
        return "";
    else
        return "\"" + o.ToString() + "\"";
}
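For comparison, a standard CSV library does the same quoting and also doubles any embedded quotes, a case the hand-rolled WriteableValue above doesn't cover; a Python sketch (field values invented):

```python
import csv
import io

buf = io.StringIO()
# QUOTE_ALL wraps every field in double quotes, like WriteableValue does,
# and embedded quotes are escaped by doubling them per the CSV convention.
writer = csv.writer(buf, quoting=csv.QUOTE_ALL)
writer.writerow([7, "Global Team", 'Title with "quotes"'])

line = buf.getvalue().strip()
print(line)  # "7","Global Team","Title with ""quotes"""
```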

It’s good to rediscover something old and simple that still works so well…

Thursday, October 14, 2010

More pain with Windows x64 migration

You have no idea how hard it was to get this to work!  Following on from my last post, I moved from Windows XP to Windows 7 x64 for development.  Although there was some pain in setting up the VB COMs, it was nothing compared to the suffering when it came to Registry settings.

Some applications were continually giving generic 500 errors with little or nothing in the Event Logs, so I presumed it was a security problem.  After many hours of messing about, it turned out I was on the wrong track altogether: the specific COMs were failing because they could not find their Registry settings…

All the settings had been imported into their usual position under HKEY_LOCAL_MACHINE\SOFTWARE\SomeApplication, but no matter what I did the values always came back null. 

The solution

The answer was hidden away in a MS support article (http://support.microsoft.com/kb/896459)

32-bit programs and 64-bit programs that are running on an x64-based version of Windows operate in different modes and use the following sections in the registry:

  • 64-bit programs run in native mode and access keys and values that are stored in the following registry subkey:
    HKEY_LOCAL_MACHINE\Software
  • 32-bit programs run in WOW64 mode and access keys and values that are stored in the following registry subkey:
    HKEY_LOCAL_MACHINE\Software\Wow6432Node

So adding a copy of the REG settings to this second location made everything work again.
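The redirection rule can be sketched as pure string logic (Python, for illustration only; real code would use the registry APIs):

```python
# Sketch of the rule from the KB article: a 32-bit process on x64 Windows
# reading HKLM\SOFTWARE\<app> is actually served from the Wow6432Node subtree.
def wow64_view(key, process_is_32bit):
    prefix = "HKEY_LOCAL_MACHINE\\SOFTWARE\\"
    if process_is_32bit and key.upper().startswith(prefix.upper()):
        return prefix + "Wow6432Node\\" + key[len(prefix):]
    return key

key = "HKEY_LOCAL_MACHINE\\SOFTWARE\\SomeApplication"
print(wow64_view(key, True))   # ...\SOFTWARE\Wow6432Node\SomeApplication
print(wow64_view(key, False))  # unchanged for a 64-bit process
```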

Tuesday, October 5, 2010

Setting up Windows 7 for Classic ASP

I’ve decided to move over from using the slow VPN to local development, so I set up some old Classic ASP applications on my local machine, which is running IIS 7.5.  Unfortunately this process was far from easy.

Setting up IIS

First you need to activate all the required roles.  You do this by selecting Control Panel/Programs/Turn Windows features on or off, then ticking the IIS features you need from the list (make sure the ASP option under Application Development Features is included).

Doing some configuration

Next you should create an admin console to work with.  Type “MMC” into the Start menu and select File/Add or Remove Snap-ins from the menu.  Select IIS Manager and the IIS 6 Manager, and also add Event Viewer for the local machine.  File/Save this to your desktop for later.

Getting it working

When I got a local copy of the application, all the .NET code worked first time without any issue; however, when I tried to run any Classic ASP code I got the standard error: This error (HTTP 500 Internal Server Error) means that the website you are visiting had a server problem which prevented the webpage from displaying.

After some checking around on the net, it turns out that you can turn on error messages for ASP code using a configuration setting in IIS Manager.

Within IIS Manager, browse to the virtual directory you need, double-click the ASP icon and expand the “Debugging Properties” tree.

Turn on “Log Errors to NT Log” and “Send Errors to Browser”.

Unfortunately after doing this I still did not get any error messages displayed, so I had to do some more hunting for the error.

Tracing requests

You also need to turn on Tracing which can be done by reading the instructions on the following link: http://learn.iis.net/page.aspx/565/using-failed-request-tracing-to-troubleshoot-classic-asp-errors/

Following this you actually get to see a proper error:

LineNumber:21
ErrorCode:800a01ad
Description: ActiveX component can't create object

Checking the global.asa gave me the following line:
Set GetDirectory = Server.CreateObject("EnterpriseIreland.BusinessObjects.HumanResources.Directory")

Checking Permissions

This error usually comes down to permissions, or the worker process not being able to find the DLL.  Setting “Everyone” with full permissions on the D:\Applications\ folder did not work, and neither did giving “Everyone” access to the D:\Development folder.

Next I checked that the DLLs were correctly registered by re-running the “register_assembly.bat” command file located in the D:\Applications\Common folder.

Next, open Regedit.exe and browse to the DLL that was failing: HKEY_CLASSES_ROOT\EnterpriseIreland.BusinessObjects.HumanResources.Directory.  Open the CLSID key and you will find a GUID; in my case it was “{C940B037-A429-303E-8B2E-162E4E19AC91}”.  Search the registry for this GUID.

You should find it somewhere in the HKEY_CLASSES_ROOT\Wow6432Node\CLSID\ key group; in my case it was here: HKEY_CLASSES_ROOT\Wow6432Node\CLSID\{C940B037-A429-303E-8B2E-162E4E19AC91}.  Click into the “InprocServer32” key and look for the value “CodeBase”; this shows the file location of the DLL.  If it’s not pointing to the correct location, change it.

Moving to App Pool

None of these changes seemed to make any difference, so I decided to look at the application pool configuration.  The DefaultAppPool was working fine with .NET, so I created a second pool based on it called “ASP”.

Next, in the Advanced Settings, you need to make one minor but VERY IMPORTANT change:

You must set “Enable 32-bit Applications” to “True”

Finally, assign your website to use this application pool by selecting it in IIS Manager and clicking Basic Settings in the Actions menu.


Click Select and choose “ASP”, or whatever your app pool is called.

Success At last!!

After all this, we finally have a legacy ASP site working on Windows 7.

Monday, April 5, 2010

Adventures when moving to GIT

I’ve been on GitHub a few times but never actually set it up on my laptop.  Following some open source work with a group of friends, I finally saw the point of it: distributed repositories and the ability to pick and choose what you take into your own were such a simple and effective way to share code that I decided to adopt it for most of my local development.

I am however not the best with the command-line interface, so when I heard that the Tortoise group had got together and created an integrated Windows version, I had to give it a go.

Setup

Downloading the software from Google Code and running the MSI installer worked first time without any issue.  You will however need to restart your PC/laptop following the installation.

Following the restart you’ll get some extra options in your standard Windows Explorer context menus.

Connection to an existing repository

I already had an existing GitHub repository, so it was a simple matter of connecting up to it.  First create a folder on your C: drive (in my case I called it Development).  Right-click on the empty directory and select Git Clone.

Select the URL on GitHub of the repository and project you want to connect to, in this case the NXMPP code project, and click OK.  All going well you can click Close, and you’ll have a copy of the code on your PC.

Pushing a change

So supposing you’ve a change to make to this code, how can you get it back to the cloud repository?  I’ve made a very simple change to one of the tests to illustrate the changes in one file.

Here in Explorer you can see that Git has realised that there was a change to one of the files.  Right-clicking the file and selecting “Diff” brings up a description of the changes.

Now let’s say that change needs to be placed back into the repository: simply right-click the file or directory and select Git Commit, which records the change in your local repository (a Git Push is what sends it back up to GitHub).

I’m looking forward to getting more into this very simple and interesting product.

Thursday, April 1, 2010

Resetting the Admin password on a Windows 2003 VM when you forget what it was.

Today I had a problem remembering the Administrator password on a VM I created a long time ago.  Although I had created another admin-level account for myself, the VM was not on the domain, so I could not even use my own account.  Luckily someone came to my rescue with his magic Linux boot disk, which can remove the password.

Step 1

Download the Offline NT Password & Registry Editor from their website.  I used version 080526, as 080802 did not seem to be working correctly.  Unzip the ISO file 080526.iso to your local drive.

Open up the instructions page on their site.

Step 2

Start your VM and mount the ISO file by right-clicking the CD icon at the bottom left of the screen and selecting “Capture ISO Image”.

Now shut down and restart the VM.

Step 3

At this point you can follow the instructions page on the site or just do the following for a quick overview.

  1. Press <Enter> to select the first disk partition.
  2. Press <Enter> to select the default configuration directory.
  3. Press <Enter> to select option 1, “Password reset”.
  4. Press <Enter> to select “Edit user data and passwords”.
  5. Press <Enter> to select the “Administrator” username, or type whichever one you wish.
  6. Enter “1” for the “Clear (blank) user password” option and press <Enter>.  There are other options available if needed.
  7. Enter “!” to quit the user editor.
  8. Enter “q” to quit.
  9. Enter “y” to write the changes to disk.
  10. Enter “n” at the “try again” prompt.

Now unmount the ISO by right clicking the icon and selecting “Release 080526.iso” and restart the VM.  You should now be able to login without having a password.

Monday, March 29, 2010

Playing about with REST on .Net

I’ve been planning to do more work with Azure and iPhone and figured I should embrace the new wave of developments with REST.  This seems to be the way the technology is going, even for Microsoft.  I’m a bit of a fan of WCF, so I figured I’d give it a go using everything I’ve learned from that, and with a little help from “RESTful .Net” by Jon Flanders it worked out well.

The Server

The first step is to create a standard VS2010 WCF Service Application project called WCFServer.

image

This will give you all the basic items needed to run an IIS-hosted WCF service.  I won’t go into the details of these too much; I’ll assume you know to delete or rename the various default classes to match those I’ve used.

The project will consist of 4 files:

  1. IRESTService.cs; which is an Interface definition for the server contracts.
  2. RESTService.svc; the web service definition file
  3. RESTService.svc.cs; the code behind for the service
  4. web.config; the application configuration and definition file.

IRESTService.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;
using System.Text;

namespace WCFServer
{
    [ServiceContract]
    public interface IRESTService
    {
        [OperationContract()]
        [WebGet(UriTemplate="/GetRestData/{value}")]
        string GetRestData(string value);

        [WebGet(UriTemplate = "/")]
        [OperationContract]
        string GetRestDataRoot();
    }
}

My service has 2 very simple HTTP GET methods: GetRestDataRoot, which returns a simple text string, and GetRestData, which takes a string and responds with a string.  These are hardly high tech and are only there to illustrate the process.

RESTService.svc

<%@ ServiceHost Language="C#" Debug="true" Service="WCFServer.RESTService" CodeBehind="RESTService.svc.cs" %>

This defines the service file and the link to the code behind.

RESTService.svc.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;
using System.Text;
using System.ServiceModel.Activation;

namespace WCFServer
{
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class RESTService : IRESTService
    {
        public string GetRestData(string value)
        {
            return string.Format("You entered: {0}", value);
        }

        public string GetRestDataRoot()
        {
            return string.Format("Root");
        }
    }
}

As you can see from the code behind I’m not really doing anything complicated at this stage, just returning strings.

web.config

<?xml version="1.0"?>
<configuration>

…………… < snipped > ………

<httpModules>
   <add name="NoMoreSVC" type="WCFServer.RestModule, WCFServer"/>
   <add name="ScriptModule" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
</httpModules>

…………… < snipped > ………

  <system.serviceModel>
    <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>
    <services>
      <service name="WCFServer.RESTService" behaviorConfiguration="WCFServer.RESTServiceBehavior">
        <!-- Service Endpoints -->
        <endpoint address="" binding="webHttpBinding" contract="WCFServer.IRESTService" behaviorConfiguration="web">
          <identity>
            <dns value="localhost"/>
          </identity>
        </endpoint>
        <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
      </service>
    </services>
    <behaviors>
      <serviceBehaviors>
        <behavior name="WCFServer.RESTServiceBehavior">
          <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
          <serviceMetadata httpGetEnabled="true"/>
          <!-- To receive exception details in faults for debugging purposes, set the value below to true.  Set to false before deployment to avoid disclosing exception information -->
          <serviceDebug includeExceptionDetailInFaults="false"/>
        </behavior>
      </serviceBehaviors>
      <endpointBehaviors>
        <behavior name="web">
          <webHttp/>
        </behavior>
      </endpointBehaviors>

    </behaviors>
  </system.serviceModel>
</configuration>

In the web.config I’ve highlighted some of the more interesting elements for implementing REST.  The endpoint definition sets the binding to “webHttpBinding”, which is required, and the behaviorConfiguration to “web”.  The endpointBehaviors section then defines “web” to use webHttp.

You’ll also notice that I’ve added another class called RestModule and registered it as an httpModule; this was only done to remove the need to access the URL with a .svc extension on the end.  It’s probably over the top for what I need, but it just looked better.

restmodule.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

namespace WCFServer
{
    public class RestModule : IHttpModule
    {
        public void Dispose()
        {
        }

        public void Init(HttpApplication app)
        {
            app.BeginRequest += delegate
            {
                HttpContext ctx = HttpContext.Current;
                string path = ctx.Request.AppRelativeCurrentExecutionFilePath;
                int i = path.IndexOf('/', 2);
                if (i > 0)
                {
                    string svc = path.Substring(0, i) + ".svc";
                    string rest = path.Substring(i, path.Length - i);
                    string qs = ctx.Request.QueryString.ToString();
                    ctx.RewritePath(svc, rest, qs, false);
                }
            };
        }
    }
}
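To see what the module is doing, the core rewrite logic can be pulled out and run on its own. This is just an illustrative sketch of the same substring arithmetic, not part of the project:

```csharp
using System;

class RestPathDemo
{
    // Mirrors the logic in RestModule.Init: given an app-relative path such as
    // "~/RESTService/GetRestData/Testing123", split it into the physical
    // .svc file and the remaining REST path info.
    public static string[] SplitPath(string path)
    {
        // Skip the leading "~/" and find the end of the service name.
        int i = path.IndexOf('/', 2);
        if (i <= 0)
            return new[] { path, "" };   // nothing to rewrite
        return new[] { path.Substring(0, i) + ".svc", path.Substring(i) };
    }

    static void Main()
    {
        string[] parts = SplitPath("~/RESTService/GetRestData/Testing123");
        Console.WriteLine(parts[0]); // ~/RESTService.svc
        Console.WriteLine(parts[1]); // /GetRestData/Testing123
    }
}
```

The rewritten pair is exactly what RewritePath receives, so the extensionless URL ends up served by RESTService.svc.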

Once you have all this up and running we can test it by manipulating the URL (in my case http://localhost:1286/RESTService.svc).  Accessing the service via the extensionless REST URL http://localhost:1286/RESTService/ will give you the following:

image

and http://localhost:1286/RESTService/GetRestData/Testing123 should return the following:

image

The Client

OK, now we need a client to access the REST server; for this I’ll use a simple UnitTest project.

ServiceTest.cs

using System;
using System.Text;
using NUnit.Framework;
using NMock2;
using System.Data;

namespace UnitTests
{
    [TestFixture]
    public class CompanyObjectTests
    {
        private static readonly log4net.ILog _log = log4net.LogManager.GetLogger(System.Reflection.MethodInfo.GetCurrentMethod().DeclaringType);
        private Mockery _mockSQLDatabse;

        [SetUp]
        public void TestSetup()
        {
            log4net.Config.XmlConfigurator.Configure();
            _log.Info("Starting up for testing");
        }

        [TearDown]
        public void TestShutDown()
        {
            _log.Info("Shutting down test");
        }

        [Test(Description = "Test the service")]
        public void ServiceTest()
        {
            IClientContract client = new ClientContract();
            string result = client.GetRestDataRoot();
            Assert.IsTrue(result == "Root", "The result should be 'Root'");
            _log.DebugFormat("result={0}", result);

            result = client.GetRestData("Testing");
            Assert.IsTrue(result == "You entered: Testing", "The result should be 'You entered: Testing'");
            _log.DebugFormat("result={0}", result);
        }
    }
}

Here I’ve created an interface (IClientContract) and a concrete class (ClientContract).

IClientContract

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.ServiceModel;
using System.ServiceModel.Web;

namespace UnitTests
{

    [ServiceContract]
    public interface IClientContract
    {
        [OperationContract]
        [WebGet(
            BodyStyle = WebMessageBodyStyle.Bare,
            ResponseFormat = WebMessageFormat.Xml,
            UriTemplate = ""
            )]
        string GetRestDataRoot();

        [OperationContract]
        [WebGet(
            BodyStyle = WebMessageBodyStyle.Bare,
            ResponseFormat = WebMessageFormat.Xml,
            UriTemplate = "/GetRestData/{value}"
            )]
        string GetRestData(string value);
    }
}

ClientContract

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.ServiceModel;

namespace UnitTests
{

    public class ClientContract : ClientBase<IClientContract>, IClientContract
    {
        public string GetRestDataRoot()
        {
            return this.Channel.GetRestDataRoot();
        }
        public string GetRestData(string symbol)
        {
            return this.Channel.GetRestData(symbol);
        }
    }
}

App.config

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"/>
  </configSections>
  <appSettings>
    <add key="ApplicationName" value="" />
  </appSettings>

  <log4net>
    <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
      <layout type="log4net.Layout.PatternLayout">
        <param name="Header" value="[Header]\r\n"/>
        <param name="Footer" value="[Footer]\r\n"/>
        <param name="ConversionPattern" value="%d [%t] %-5p %c %m%n"/>
      </layout>
    </appender>
    <root>
      <level value="ALL"/>
      <appender-ref ref="ConsoleAppender"/>
    </root>
  </log4net>

  <system.serviceModel>
    <bindings />
    <client>
      <endpoint address="http://localhost:1286/RESTService.svc"
                behaviorConfiguration="rest"
                binding="webHttpBinding"
                contract="UnitTests.IClientContract"/>
    </client>
    <behaviors>
      <endpointBehaviors>
        <behavior name="rest">
          <webHttp/>
        </behavior>
      </endpointBehaviors>
    </behaviors> 
  </system.serviceModel>
</configuration>

Running this in the NUnit GUI will give you green lights all the way.

image

Source Code

All the source code for this project can be found here.

Monday, March 1, 2010

Automating Accessibility testing

I’ve been working on a new website recently and one of the major requirements is W3C WAI-AA accessibility compliance. I figured I could just hand the problem over to the designers, but taking the Ronald Reagan line of “Trust, but verify” I figured I’d need to check the output regardless. So my question was: what would be the easiest way to check a reasonably large website in an automated way, so I could be notified if anything was found?

To my surprise most CMSs don’t offer this facility out of the box, and the online offerings need you to enter a URL on another site each day.  I wanted something simple and free that could be integrated into my existing continuous integration setup with Cruise Control. I’d already been successful using Selenium and NUnit, so I figured I could reuse the same technology stack.  But what fun would that be?  So I decided to move to WatiN.

The simple solution

I ended up with a solution using a combination of three main technologies.

NUnit – to hold the UnitTesting Code and having all the infrastructure.

WatiN – to interface with the Browser and get access to the HTML to test.

Tidy – A very interesting little utility that has all the embedded accessibility tests that I really did not want to write myself.

First off you need to create a Unit Test project within a continuous integration environment; for instructions on that, see my previous post.

One off page test

Below is a simple UnitTest that will check an HTML page for accessibility errors and warnings. I’ve highlighted a number of the more interesting lines.

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using NUnit.Framework;
using Tidy;
using WatiN.Core;

namespace HTMLTestExamples
{

    [TestFixture]
    public class TidyTest
    {
        private static readonly log4net.ILog _log = log4net.LogManager.GetLogger(System.Reflection.MethodInfo.GetCurrentMethod().DeclaringType);
        Tidy.Document _tdoc = new Tidy.Document();
        int _status = 0;
        string _ConfigFileName = @"Files\foo.tidy";

        [TestFixtureSetUp]
        public void TestFixtureSetup()
        {
            _tdoc.OnMessage += new ITidyDocumentEvents_OnMessageEventHandler(doc_OnMessage);
            log4net.Config.XmlConfigurator.Configure();
            _status = _tdoc.LoadConfig(_ConfigFileName);
            Assert.IsTrue(_status == 0, "Ensure no errors found in configuration");
            _log.Info("Starting up for testing");
        }
        /// <summary>
        /// Tests the file.
        /// </summary>
        [Test]
        public void TestHTMLPage()
        {
            String htmlResults = String.Empty;
            using (var browser = new IE("http://<whatever URL you want>"))
            {
                _status = _tdoc.ParseString(browser.Html);
                _status = _tdoc.RunDiagnostics();
                Assert.IsTrue(_status == 0, "There were errors found");
            }
        }

        /// <summary>
        /// Process messages from the Tidy parse process.
        /// </summary>
        /// <param name="level"></param>
        /// <param name="line"></param>
        /// <param name="col"></param>
        /// <param name="message"></param>
        void doc_OnMessage(Tidy.TidyReportLevel level, int line, int col, string message)
        {
            _log.InfoFormat("{3}:  {0}  Line: {1}  Col: {2}", message, line, col, level);
        }
    }
}

The first point of interest is the foo.tidy config file. Tidy can take its configuration settings programmatically or via a config file.  I’ve chosen the config-file route as it is easier to modify on the fly.

accessibility-check:  2
show-warnings:        no
show-errors:          6

The file I used only had three settings; accessibility-check is set to 2, which means it will warn at the WAI-AA level.   There is a whole range of different values you can use, and these are documented on the Tidy website.
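For reference, these are the accessibility-check levels Tidy supports (check the documentation for your Tidy build, as the wording varies slightly between versions):

```
accessibility-check: 0   # classic Tidy checks only
accessibility-check: 1   # WCAG Priority 1 checks (A)
accessibility-check: 2   # WCAG Priority 2 checks (AA)
accessibility-check: 3   # WCAG Priority 3 checks (AAA)
```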

The next point of interest is how we hook up to the OnMessage event raised by Tidy when it reads the HTML pages.  Here we pass the event over to our delegate method called “doc_OnMessage”.  At the moment we simply print the results on screen, but we can expand on this later.

The next few lines of code do all the real work.
     using (var browser = new IE("<URL>"))
Will tell WatiN to open a browser instance and load the URL into memory.
     _tdoc.ParseString(browser.Html);
Will take the raw HTML that has been read from the page and check it for any errors.  This runs basic HTML checks rather than the specific accessibility tests we want, but it’s important to load the data before we can look at the AA checks.
      _tdoc.RunDiagnostics();
This is the line where we actually run the tests we are looking to do.

Running this against my iGoogle page came up with the following:

***** HTMLTestExamples.TidyTest.TestHTMLPage
2010-03-01 22:13:26,101 [TestRunnerThread] DEBUG HTMLTestExamples.TidyTest TidyInfo:  Document content looks like HTML Proprietary  Line: 0  Col: 0
2010-03-01 22:13:26,114 [TestRunnerThread] DEBUG HTMLTestExamples.TidyTest TidyError:  [3.2.1.1]: <doctype> missing.  Line: 1  Col: 1
2010-03-01 22:13:26,115 [TestRunnerThread] DEBUG HTMLTestExamples.TidyTest TidyError:  [13.2.1.1]: Metadata missing.  Line: 2  Col: 1
2010-03-01 22:13:26,117 [TestRunnerThread] DEBUG HTMLTestExamples.TidyTest TidyError:  [1.1.10.1]: <script> missing <noscript> section.  Line: 7  Col: 2
2010-03-01 22:13:26,119 [TestRunnerThread] DEBUG HTMLTestExamples.TidyTest TidyError:  [11.2.1.10]: replace deprecated html <u>.  Line: 9  Col: 745
2010-03-01 22:13:26,120 [TestRunnerThread] DEBUG HTMLTestExamples.TidyTest TidyError:  [11.2.1.10]: replace deprecated html <u>.  Line: 13  Col: 395
2010-03-01 22:13:26,121 [TestRunnerThread] DEBUG HTMLTestExamples.TidyTest TidyError:  [1.1.10.1]: <script> missing <noscript> section.  Line: 17  Col: 1

Hmm… not too good for our friends at Google, but they can’t be good at everything.

One improvement

The first thing I did when moving on from the first example was to move away from the hard-coded URL.  By simply using the TestCase attribute we are able to add a whole bunch of URLs.

[Test]
[TestCase("http://www.google.com")]
[TestCase("http://www.abc.com")]
[TestCase("http://www.irishtimes.com")]
public void TestHTMLPage(string url)
{
    Tidy.Document tdoc = new Tidy.Document();
    tdoc.OnMessage += new ITidyDocumentEvents_OnMessageEventHandler(doc_OnMessage);
    log4net.Config.XmlConfigurator.Configure();
    _status = tdoc.LoadConfig(_ConfigFileName);

    String htmlResults = String.Empty;
    using (var browser = new IE(url))
    {
        _status = tdoc.ParseString(browser.Html);
        _status = tdoc.RunDiagnostics();
        Assert.IsTrue(_status == 0, "There were errors found");
    }
}

One thing I did find is that I had to move the Tidy Document from being defined at the fixture level to being created within the method itself.

Testing from root to leaf

The next thing I did was to set up a simple root-to-leaf test.  I used the built-in functionality in WatiN to build a generic list of the URLs on the page, then stored these in a class I could read back when needed.

// simple public property to hold all the pages.
List<Pages> _allPages = new List<Pages>(); 

/// <summary>
/// HTML Pages class
/// </summary>
private class Pages
{
    public string URL { get; set; }
    public string Title { get; set; }
    public ArrayList errors { get; set; }
    public ArrayList warnings { get; set; }
    public bool Tested { get; set; }       
}

// Use this code within your test
List<string> pageLinks = ExtractLinks(browser.Links);
_allPages.Add(currentPage); 
foreach (string link in pageLinks)
{
     if (!string.IsNullOrEmpty(link) && !PageOutsideSite(url, link) && !PageAlreadyTested(link))
         TestHTMLPage(link);
}

// This function will take all the links and build a list
private List<string> ExtractLinks(LinkCollection linkCollection)
{
     List<string> links = new List<string>();
     foreach (Link link in linkCollection)
         links.Add(link.Url);
     return links;
}

// Check to see if the page is outside of the main root URL
private bool PageOutsideSite(string urlRoot, string urlLink)
{
     if (!urlLink.Contains(urlRoot))
         return true;
     if (urlLink.Contains("?"))  // ignore param urls
         return true;
     if (urlLink.Split('/').Length > 5)  // two levels deep
         return true;
     return false;
}

// Check if the page has already been tested.
private bool PageAlreadyTested(string urlLink)
{
     foreach (Pages page in _allPages)
     {
         if (page.URL == urlLink && page.Tested)
             return true;
     }
     return false;
}
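The filtering heuristics are easy to sanity-check in isolation. Below is a self-contained sketch of the same checks, using a hypothetical root URL:

```csharp
using System;

class LinkFilterDemo
{
    // Standalone copy of the PageOutsideSite heuristics so this compiles on its own.
    public static bool PageOutsideSite(string urlRoot, string urlLink)
    {
        if (!urlLink.Contains(urlRoot))
            return true;                        // off-site link
        if (urlLink.Contains("?"))
            return true;                        // ignore parameterised URLs
        if (urlLink.Split('/').Length > 5)
            return true;                        // more than two levels deep
        return false;
    }

    static void Main()
    {
        string root = "http://www.example.com";  // hypothetical site root
        Console.WriteLine(PageOutsideSite(root, "http://www.example.com/about"));      // False
        Console.WriteLine(PageOutsideSite(root, "http://www.other.com/page"));         // True
        Console.WriteLine(PageOutsideSite(root, "http://www.example.com/find?q=x"));   // True
        Console.WriteLine(PageOutsideSite(root, "http://www.example.com/a/b/c"));      // True
    }
}
```

The depth check counts slashes, so `http://host/a/b` (five segments after Split) is the deepest page that still gets tested.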

A few words of warning

This method of testing is very slow and processor intensive; WatiN is really not the best tool for dealing with a large number of pages.  I’d recommend downloading the files beforehand on a nightly basis and then running the Tidy tests against those.
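That download-first approach could be sketched roughly as follows. This is only an outline under my own assumptions: the folder and file names are illustrative, and the Tidy call in the comments mirrors the ParseString usage from the earlier tests.

```csharp
using System;
using System.IO;
using System.Net;

class NightlyPageFetcher
{
    // Save one page to disk so Tidy can parse it later without driving a browser.
    public static string SavePage(string url, string folder, string fileName)
    {
        Directory.CreateDirectory(folder);
        string file = Path.Combine(folder, fileName);
        using (var client = new WebClient())
            File.WriteAllText(file, client.DownloadString(url));
        return file;
    }

    static void Main()
    {
        // In the nightly job this would be the real site, e.g.
        //   SavePage("http://www.irishtimes.com", "pages", "irishtimes.html");
        // The demo below uses a local file:// URL so it runs without network access.
        string sample = Path.Combine(Path.GetTempPath(), "sample.html");
        File.WriteAllText(sample, "<html><body>hello</body></html>");
        string saved = SavePage(new Uri(sample).AbsoluteUri, "pages", "sample.html");
        Console.WriteLine("Saved " + saved);
        // The test run can then call _tdoc.ParseString(File.ReadAllText(saved))
        // instead of ParseString(browser.Html).
    }
}
```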

Friday, February 5, 2010

Extender Methods in C#

Extension methods (which I’ll call extender methods here) were added in C# 3.0 with .NET Framework 3.5 and allow developers to add additional methods to existing types.  This is done using static methods whose first parameter carries the this modifier for the type being extended.

A simple example

You’ve probably seen this many times in code where we find the name of the user from the current Windows security principal.

string[] strDomainAndUsername = Thread.CurrentPrincipal.Identity.Name.Split('\\');
Employee employee = employeeSearch.VerifyAccess(strDomainAndUsername[1]);

Although this works, it’s not very readable; what extender methods give us is the ability to add new methods to the base .Net types, in this case “string”.

image

Normally, hitting “.” at the end of the Name attribute will bring up all the methods of the returned type, in this case a “string”, so you get .Split(), .Contains(), etc.  With an extender method we can add additional ones, for example “.Username()” and “.Domain()”, as shown below:

image

You know it’s an extender method by the icon next to the name.

Implementation

To do this you need to create a new static class, and within it a static method whose first parameter is marked with the this modifier on the type being extended (here, string).

namespace EnterpriseIreland.Common.Extensions
{

    //Extension methods must be defined in a static class
    public static class StringExtender
    {
        /// <summary>
        /// Usernames the specified identity.
        /// </summary>
        /// <param name="identity">The identity.</param>
        /// <returns></returns>
        public static string Username(this string identity)
        {
            string[] networkidentity = identity.Split('\\');
            if(networkidentity.Length >1)
                return networkidentity[1];
            return "Unknown";
        }

        /// <summary>
        /// Domain Name specified by the identity.
        /// </summary>
        /// <param name="identity">The identity.</param>
        /// <returns></returns>
        public static string Domain(this string identity)
        {
            string[] networkidentity = identity.Split('\\');
            if(!String.IsNullOrEmpty(networkidentity[0]))
                return networkidentity[0];
            return "Unknown";
        }
    }
}

 

Now when you want to use it in your own code all you need to do is add the namespace and it should appear for every string.

using EnterpriseIreland.Common.Extensions;

Employee employee = employeeSearch.VerifyAccess(Thread.CurrentPrincipal.Identity.Name.Username());

Which is far more readable.  Although this is a fairly trivial example, you can build fairly complex functions using this approach.  Imagine something like this for project information output as an HTML grid by year, or perhaps a generic currency conversion:

     string projectsToHTMLGrid =  project.FindAllByProperty("Year", 2010).ToCSSHTMLOutput();

     double US  =  project.TotalIncentive.ConvertCurrency("US");
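Putting the section together, here is a condensed, self-contained copy of the two extenders with a quick console check (the DOMAIN\user string is made up for illustration):

```csharp
using System;

// Condensed copy of the StringExtender class above, so this compiles standalone.
static class StringExtender
{
    public static string Username(this string identity)
    {
        string[] parts = identity.Split('\\');
        return parts.Length > 1 ? parts[1] : "Unknown";
    }

    public static string Domain(this string identity)
    {
        string[] parts = identity.Split('\\');
        return !String.IsNullOrEmpty(parts[0]) ? parts[0] : "Unknown";
    }
}

class ExtenderDemo
{
    static void Main()
    {
        string name = @"EIDOMAIN\jbloggs";         // hypothetical identity string
        Console.WriteLine(name.Username());        // jbloggs
        Console.WriteLine(name.Domain());          // EIDOMAIN
        Console.WriteLine("nodomain".Username());  // Unknown (no backslash present)
    }
}
```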

I’ve set out a standard in the Wiki for doing this type of thing in your projects, feel free to try it out if you see the need. 

Friday, January 29, 2010

Getting Mongo with MongoDB

Having a few hours to kill, I went off on a tangent and decided to investigate a schema-free, document-oriented database called MongoDB.  The reason: I’ve heard so much about it on FLOSS Weekly.

Getting it setup

First you’ll need the database software, which can be downloaded from the site. For Windows this comes down as a ZIP file which can simply be unpacked onto your hard drive.  Create a “data” directory within the MongoDB folder and create a simple batch file called “mongodb_start.bat” which starts the server and passes in a location for the database files.

@echo Off
echo -----------------------------------------
echo .    Starting the MongoDB instance
echo -----------------------------------------
.\bin\mongod --dbpath=./data

From the command line just type “mongodb_start” and a command window should appear.

image

The next step is to get the database driver for C#.  I found the one listed on the site worked fine.  You can get it from GitHub by clicking the Download Source link in the top right of the screen.  Once you have the source, open it in Visual Studio and compile the project to produce “MongoDB.Driver.dll”.

Finally you’ll need something that can read the JSON which comes back from database queries.  You can get a good Json.NET library from CodePlex. Again, download the ZIP file and extract the version you require.

Creating a project and querying the database

Open up a new solution in Visual Studio and create a new class library called “mongoTest”.  I’m going to create an NUnit project which will allow us to do basic queries.  Add the references you’re going to need; MongoDB.Driver, Newtonsoft.Json, nunit.framework and log4net.

Now create a new class file called “DevelopmentAdvisors” which has the following code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using NUnit.Framework;
using MongoDB.Driver;
using Newtonsoft.Json;

namespace mongoTest
{
    [TestFixture]
    public class DevelopmentAdvisors
    {

        private static readonly log4net.ILog _log = log4net.LogManager.GetLogger(System.Reflection.MethodInfo.GetCurrentMethod().DeclaringType);
        Database _mongoDB = null;
        IMongoCollection _daCollection = null;

        [TestFixtureSetUp]
        public void mongo_SetupDatabase()
        {
            log4net.Config.XmlConfigurator.Configure();
            var mongo = new Mongo();
            mongo.Connect();
            _mongoDB = mongo.getDB("MyDB");
            _daCollection = _mongoDB.GetCollection("DevelopmentAdvisors");
        }
    }
}

This code connects us to a database called “MyDB” and to a “collection” (the equivalent of a table in a normal database) called “DevelopmentAdvisors”.

You’ll also need to create an App.config file so you can see the log output. The contents are shown below:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"/>
  </configSections>

  <log4net>
    <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender">
      <layout type="log4net.Layout.PatternLayout">
        <param name="Header" value="[Header]\r\n"/>
        <param name="Footer" value="[Footer]\r\n"/>
        <param name="ConversionPattern" value="%d [%t] %-5p %c %m%n"/>
      </layout>
    </appender>
    <root>
      <level value="Debug"/>
      <appender-ref ref="ConsoleAppender"/>
    </root>
  </log4net>

</configuration>

Now that all the basic setup is out of the way we can do our first inserts.

[Test(Description="Insert a bunch of random records")]
[TestCase("Joe", "Smith", null, null)]
[TestCase("Joe", "Murphy", "Mr.", null)]
[TestCase("Paddy", "O'Brien", "Mr.", "Dublin")]
[TestCase("Fred", "Smith", "Mr.", "Eastwall, Dublin")]
[TestCase(null, "Martin", "Miss.", "Carlow")]
public void mongoTest_InsertNewDA(string firstName, string secondName, string title, string address)
{
    Document da = new Document();
    if(!String.IsNullOrEmpty(firstName))
        da["FirstName"] = firstName;
    if(!String.IsNullOrEmpty(secondName))
        da["SecondName"] = secondName;
    if(!String.IsNullOrEmpty(title))
        da["Title"] = title;
    if(!String.IsNullOrEmpty(address))
        da["Address"] = address;
    _daCollection.Insert(da);
    // find if the records were added
    ICursor cursor = _daCollection.FindAll();
    Assert.IsTrue(cursor.Documents.Count() > 0, "No records found");
}

OK, here we go.  Starting the project using the NUnit GUI will give us the following:

image

Click Run and we should get a bunch of Green lights!

image

You’ll notice your MongoDB command window now has a new connection listed.

image

OK, so that inserts records with different layouts into the collection. But there is no point doing this if we can’t get the data out, so we create a new test.  Using the code below, we use the built-in FindAll() function to extract all the information from the database.

[Test(Description = "Search collection for results")]
public void mongoTest_SearchForResults()
{
    ICursor cursor = _daCollection.FindAll();
    Assert.IsTrue(cursor.Documents.Count() > 0, "No values found!");
    foreach (Document doc in cursor.Documents)
        _log.DebugFormat("Record {0};", doc.ToString());
}

Clicking run will present the records we’ve added in the last run.

image

The console shows that the database is returning JSON strings for each record item.

Lets get more professional with this

Dealing with raw Document and var objects is not really good enough when we code, so the best thing to do is create a class which holds that information.  Create a new class called “DAClass” and paste in the following code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using MongoDB.Driver;
using Newtonsoft.Json;
using mongoTest.ExtensionMethods;

namespace mongoTest
{
    /// <summary>
    /// Interface for the Entity
    /// </summary>
    public interface IMongoEntity
    {
        Document InternalDocument { get; set; }
    }

    /// <summary>
    /// Class holding the DA information
    /// </summary>
    sealed class DAClass : IMongoEntity
    {
        /// <summary>
        /// Gets or sets the first name.
        /// </summary>
        /// <value>The first name.</value>
        public string FirstName
        {
            get { return InternalDocument.Field("FirstName"); }
            set { InternalDocument["FirstName"] = value; }
        }

        /// <summary>
        /// Gets or sets the name of the second.
        /// </summary>
        /// <value>The name of the second.</value>
        public string SecondName
        {
            get { return InternalDocument.Field("SecondName"); }
            set { InternalDocument["SecondName"] = value; }
        }

        /// <summary>
        /// Gets or sets the title.
        /// </summary>
        /// <value>The title.</value>
        public string Title
        {
            get { return InternalDocument.Field("Title"); }
            set { InternalDocument["Title"] = value; }
        }

        /// <summary>
        /// Gets or sets the address.
        /// </summary>
        /// <value>The address.</value>
        public string Address
        {
            get { return InternalDocument.Field("Address"); }
            set { InternalDocument["Address"] = value; }
        }

        /// <summary>
        /// Gets the list of items
        /// </summary>
        /// <typeparam name="TDocument">The type of the document.</typeparam>
        /// <param name="whereClause">The where clause.</param>
        /// <param name="fromCollection">From collection.</param>
        /// <returns></returns>
        public static IList<TDocument> GetListOf<TDocument>(Document whereClause, IMongoCollection fromCollection) where TDocument : IMongoEntity
        {
            var docs = fromCollection.Find(whereClause).Documents;    
            return DocsToCollection<TDocument>(docs);
        }

        /// <summary>
        /// Documents to collection.
        /// </summary>
        /// <typeparam name="TDocument">The type of the document.</typeparam>
        /// <param name="documents">The documents.</param>
        /// <returns></returns>
        public static IList<TDocument> DocsToCollection<TDocument>(IEnumerable<Document> documents) where TDocument : IMongoEntity
        {
            var list = new List<TDocument>();
            var settings = new JsonSerializerSettings();
             foreach (var document in documents)
             {
                 var docType = Activator.CreateInstance<TDocument>();
                 docType.InternalDocument = document;
                 list.Add(docType);
            }
            return list;
        }

        /// <summary>
        /// Gets or sets the internal document.
        /// </summary>
        /// <value>The internal document.</value>
        public Document InternalDocument { get; set; }
    }
}

Here we have a property for each data item, which is read from the underlying JSON document.  I’ve added a new extension method to the Document type called “Field”, which gets round the problem of nulls in the dataset, since a plain old ToString() would crash out.  The extension method lives in a new class file called “DocumentExtensions”; paste in the following code.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using MongoDB.Driver;

namespace mongoTest.ExtensionMethods
{
        //Extension methods must be defined in a static class
        public static class DocumentExtensions
        {
            /// <summary>
            /// Fields the specified in the document being passed
            /// </summary>
            /// <param name="doc">The document</param>
            /// <param name="fieldName">Name of the field to be found</param>
            /// <returns></returns>
            public static string Field(this Document doc, string fieldName)
            {
                return (doc[fieldName] == null ? null : doc[fieldName].ToString());
            }
        }
}

Now back in our DevelopmentAdvisors class file we’re going to do some searching.

[Test(Description = "Search the da records for results")]
public void mongoTest_SearchForOneDA()
{
    Document spec = new Document();
    spec["Title"] = "Mr.";
    IList<DAClass> das = DAClass.GetListOf<DAClass>(spec, _daCollection);
    Assert.IsTrue(das.Count > 0, "No values found!");
    _log.DebugFormat("Name: {0} {1}", das[0].FirstName, das[0].SecondName);
}

This method will find all DAs with a title of “Mr.” and return that information in the IList.

Finally here is a method that uses a bit of Linq to order the results.

[Test(Description = "Use Linq query")]
public void mongoTest_SearchForAllDAsAndOrder()
{
    ICursor cursor = _daCollection.FindAll();
    var orderedList = from da in DAClass.DocsToCollection<DAClass>(cursor.Documents)
                      orderby da.SecondName
                      select da;
    foreach (DAClass da in orderedList)
        _log.DebugFormat("Name: {0} {1}", da.FirstName, da.SecondName);
    Assert.IsTrue(cursor.Documents.Count() > 0, "No records in collection!");
}

The source code is available here.