Thursday, 14 October 2010

MVC1 to MVC2 caching gotcha

I’m presently helping a client refresh their solution to the latest technology in K2, ASP.NET MVC, SharePoint etc., and I’ve found a nasty difference in how caching works in MVC 2 compared to MVC 1.

The scenario: let’s say we have a single controller with InitialiseA(), InitialiseB() and Start(). The two init methods set some session variables or do something in the database and then send back a RedirectToAction to go off to Start. This is what I expect to happen;

The reality is significantly different though. The above is how it would work in MVC1, but in MVC2, the following happens;

The result of the first invocation of Start is being cached, so when the browser gets the 302 response it decides it already has the content for the redirected URL and just renders that. So what’s changed?

MVC1 used to send an Expires HTTP response header set to the current time, meaning the browser wouldn’t cache the result. MVC 2, however, doesn’t send this by default, so I found myself with a bunch of issues around the scenario described above. It just so happens that in this case all of my controllers descend from one common abstract base controller, so I was able to add the OutputCache(Location = OutputCacheLocation.None) attribute to this base class as follows;

[OutputCache(Location = OutputCacheLocation.None)]
public abstract class MyBaseController : Framework.Web.Mvc.ControllerBase

Sunday, 3 October 2010

Get executable full path?

Today I needed to know where a command line executable was running from – msbuild.exe to be precise. My path environment variable is extremely long on my dev box, so I just wanted a quick way to find out where the exe would be run from. The following does exactly what I needed;

for %f in (msbuild.exe) do echo %~$PATH:f

And the output?


Just what I needed :)

Wednesday, 29 September 2010

Attach and Detach VHD files from the command line

Very simple and quick way to attach or detach a VHD file on demand, from the command line – use diskpart;

A) Create a diskpart script to attach your VHD;

Enter the following into a new text document (for the sake of argument, let’s call this attach-script.txt)

select vdisk file="your.vhd"
attach vdisk

B) Create a batch file to execute the diskpart script;

Invoke diskpart with the /s parameter as follows in your batch file (again, let’s call this attach.bat)

diskpart /s attach-script.txt

C) Create a diskpart script to detach your VHD;

Should be as follows (calling this detach-script.txt)

select vdisk file="your.vhd"
detach vdisk

D) Create a batch file to execute the diskpart script;

As before, call diskpart with /s (detach.bat)

diskpart /s detach-script.txt

And you’re done – just execute attach.bat or detach.bat as needed to mount/unmount your VHD file.

Getting a free lunch? Bitbucket is now free.

I’ve always been told there’s no such thing as a free lunch, but an email I received this morning appears to contradict this mantra!

I switched to Mercurial for my own personal/project source control earlier this year, abandoning subversion after many a happy year. When I made the switch, rather than host mercurial myself and have to worry about back-ups and the like, I signed up for a 10 repository plan with bitbucket, which was a meagre £10 a month. I had to do some creative merging of repositories to get all my projects into 10, but I was more than happy with the platform.

This morning, however, I received a free lunch – an email telling me my plan is now free. They’ve teamed up with Atlassian and are now offering unlimited private and public repositories with 5 users for free! I’ve not been able to find a catch :)

More info here:

Thursday, 2 September 2010

MVC2 validation samples

I’ve posted a project on codeplex here: it demonstrates using standard out-of-the-box MVC2 validation with jQuery, both server and client side. It covers posting through a normal HTTP form with a full page post, and also loading the form into the page with AJAX and submitting it using jQuery, with full integration into the validation framework. This supports my recent posts on the topic;

Friday, 27 August 2010

Custom MVC 2 validation using jQuery – implementing client side validation in addition to server side

When I was looking for information on wiring up custom validation in MVC 2, I couldn’t find much on getting the client side working with jQuery that a) didn’t involve manually changing the MicrosoftMvcJQueryValidation.js file, or b) actually worked. So I set about working it out myself. Here is what I found – a walkthrough for getting server and client side validation working using jQuery.

See my earlier post on getting validation working with AJAX loaded forms too.

Ok, so first, go off and read this guide from Phil Haack; it’s the groundwork you need to get validation working, which I’ll briefly reiterate here before talking about getting a custom jQuery validator wired up.

Stage 1 – getting validation working on the server side first.

1A: your custom validation attribute

The following is a basic skeleton for a validator that will check a string’s minimum length (yes I know we have validators for min and max length, I’m trying to be concise here and show how to roll your own!);

public sealed class MinimumLengthAttribute : ValidationAttribute
{
    public int MinimumLength { get; private set; }

    public MinimumLengthAttribute( int minimumLength )
    {
        MinimumLength = minimumLength;
    }

    public override bool IsValid(object value)
    {
        if( value == null ) return false;

        string text = (string) value;
        if( text.Length < MinimumLength ) return false;

        return true;
    }
}

1B: Consume your validation attribute in your view model

Mark up your target model property with your validation attribute.

public class AddUserViewModel
{
    [MinimumLength(6, ErrorMessage="Password must be specified and be at least 6 characters long")]
    public string PasswordOne { get; set; }
}

1C: Output some validation messages

Use the MVC ValidationMessageFor helpers to output some validation messages.

<%=Html.LabelFor( m => m.PasswordOne) %>
<%=Html.EditorFor( m => m.PasswordOne )%>
<%=Html.ValidationMessageFor( m => m.PasswordOne) %>
1D: Check the model state in your controller

When your model is posted into your controller action, it will be automatically validated. You can check the model state and act accordingly, something like;

if (!ModelState.IsValid)
    return View(userModel);

That’s it for the server side; if you fire up your form, leave the field blank and submit it, the error message will appear. For the client side we need to go a little further;

Stage 2 – getting validation working on the client

2A: Include validation base scripts

Include the following in your page to pull in the jQuery validation scripts. You may be asking, “where is MicrosoftMvcJQueryValidation.js? I don’t seem to have it” – it’s presently part of the MvcFutures project – take a look on codeplex.

<script type="text/javascript" src="/Scripts/jquery.validate.min.js"></script>
<script type="text/javascript" src="/Scripts/MicrosoftMvcJQueryValidation.js"></script>

2B: Tell your form to output client validation information

Html.EnableClientValidation() must be called BEFORE your Html.BeginForm – it tells the view context to output validation information in a script block when the form is disposed. This doesn’t actually DO any validation; it just outputs the javascript data telling your chosen engine what rules need to be implemented. The validation itself is wired up by a piece of javascript attached to the document.ready event in the MicrosoftMvcJQueryValidation.js file – again, if you’re loading your form using AJAX, your validation won’t get wired up and you need to take extra steps….

2C: Wiring up some client side code to the custom validation attribute

We now need a class that outputs the appropriate javascript data (at the end of the form) for the custom client-side validator we’ll write shortly.

public class MinimumLengthValidator : DataAnnotationsModelValidator<MinimumLengthAttribute>
{
    private readonly int _minimumLength;
    private readonly string _message;

    public MinimumLengthValidator( ModelMetadata metadata, ControllerContext context, MinimumLengthAttribute attribute )
        : base(metadata, context, attribute)
    {
        _minimumLength = attribute.MinimumLength;
        _message = attribute.ErrorMessage;
    }

    public override IEnumerable<ModelClientValidationRule> GetClientValidationRules()
    {
        var rule = new ModelClientValidationRule
        {
            ErrorMessage = _message,
            ValidationType = "tj-custom"
        };
        rule.ValidationParameters.Add("minparam", _minimumLength);

        return new[] { rule };
    }
}

Notice we don’t add this code to the validation attribute itself. That’s because the validation attributes aren’t MVC specific – you can use them in other technologies too, so adding MVC-specific guff to them would be quite a pollution. Instead, the wrapper above takes an instance of the MinimumLengthAttribute we’ve defined in its constructor and copies into local members the values we want to use on the client. The GetClientValidationRules() override then specifies what will be output in the javascript validation rules on the client – the basics are the ErrorMessage, which we pass through from the attribute, and the ValidationType, which tells the client-side framework which validator to execute (in this case our custom one, tj-custom, which we still need to set up). ValidationParameters is then used to build up any parameters we want to pass into our validator.

2D: Telling MVC that the above validator is the client side adaptor for our minimum length attribute;

We now need to tell MVC that when it comes across our validator (our MinimumLengthAttribute) it should use the MinimumLengthValidator class to generate the javascript rules for the client. We do this during Application_Start with the following code;

DataAnnotationsModelValidatorProvider.RegisterAdapter(typeof(MinimumLengthAttribute), typeof(MinimumLengthValidator));
2E: Registering our new client validation function

The final step is to actually write our new jQuery validator and register it with jQuery. (Now I think about it, I guess all the above steps apply to any client side validation technology you want to use and only this last step would be different!).

Remember in the ModelClientValidationRule we’re returning from our validator adapter above, we specified a validation type of “tj-custom”. We register a handler for this as follows;

jQuery.validator.addMethod("tj-custom", function (value, element, params) {
    if (value == null) return false;
    if (value.length < params.minparam) return false;
    return true;
});

Notice the params structure is a mirror of the parameters we returned in ValidationParameters from the adapter? All we do is check the value against the params and return true if validation passes or false if it fails. Simple as that – no need to start messing around with the jQuery in the MicrosoftMvcJQueryValidation file. Basically, if the MVC built-in stuff doesn’t recognise the validation type, it passes it through to the __MVC_ApplyValidator_Unknown method, which just passes the data through to our code using the pattern above.
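Stripped of the jQuery registration, the rule body is plain logic you can exercise on its own – the standalone function name below is mine, purely for illustration;

```javascript
// The tj-custom rule minus the jQuery plumbing, so the logic can be
// tested in isolation. params mirrors the ValidationParameters dictionary
// built by the server-side adapter (here just { minparam: n }).
function tjCustomRule(value, params) {
    if (value == null) return false;               // null/undefined always fails
    if (value.length < params.minparam) return false;
    return true;
}
```

So tjCustomRule('abc', { minparam: 6 }) fails and a six-character value passes, exactly as the server-side MinimumLengthAttribute behaves.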

Disclaimer :)

I’ve unpicked this from the code I’m working on, so I may have missed something minor – leave a comment if you have any questions. Enjoy……

Wednesday, 25 August 2010

MVC OOTB Validation when pulling in forms using AJAX and jQuery

I’m working on an MVC2 application that makes extensive use of forms being sucked into the current page using AJAX like this – the call issues the request, gets back HTML representing the form, and presents it in a jQuery UI modal dialog;

$.ajax({
    type: postType,
    url: url,
    data: data,
    dataType: "html",
    async: true,
    cache: false,
    success: function (data, text) {
        dialogContent(title, data, width);
    },
    error: function (request, textStatus, errorThrown) {
        handleStandardErrors(null, request);
    }
});

I wanted to use the new out of the box validation toolset with data annotations, which on the face of it looks pretty cool, so I followed the guide on getting this working using jQuery validator. Namely, I got myself the MicrosoftMvcJQueryValidation.js from the MvcFutures project and then added data annotations to my view model, eg;

[Required(ErrorMessage="User email address is required")]
public string Email{ get; set; }

That’s it to get server side validation working, which works a treat. To get the client side working, I then added the following script includes to my master page;

<script type="text/javascript" src="/Scripts/jquery.validate.min.js"></script>
<script type="text/javascript" src="/Scripts/MicrosoftMvcJQueryValidation.js"></script>

Enabled client validation in my form and added some validation messages;


<% Html.EnableClientValidation(); %>
<%using( Html.BeginForm("AddUser", "Users", FormMethod.Post, null)){ %>

<%=Html.LabelFor( m => m.Email) %>
<%=Html.EditorFor( m => m.Email )%>
<%=Html.ValidationMessageFor( m => m.Email) %>

<%} %>

Ran the app, and…..nothing… nada, not a thing. So I started digging and tracing through the MVC source. All appeared to be working as it should. EnableClientValidation was setting a flag in the form context to tell the framework to output validation code. The dispose method of MvcForm (which is instantiated by the BeginForm using block) was invoking the code to output a javascript structure describing what to validate and how, but nothing seemed to be using it anywhere. I soon worked out why…

This little snippet of code is in MicrosoftMvcJQueryValidation.js, which remember we included in our master page (which is rendered in the host page, NOT our partial form view we’re getting using ajax).

$(document).ready(function() {
    var allFormOptions = window.mvcClientValidationMetadata;
    if (allFormOptions) {
        while (allFormOptions.length > 0) {
            var thisFormOptions = allFormOptions.pop();
            // ... wires up validation for the form described by thisFormOptions ...
        }
    }
});

That won’t be fired, so to get the validation working we just need to do the same thing, right? Not quite. I added the above code to a jQuery startup function in my partial view; it gets called successfully but….nothing. It still didn’t work, because window.mvcClientValidationMetadata was undefined. The reason this time is different – the jQuery startup function is actually invoked before the inline <script></script> block that sets window.mvcClientValidationMetadata!

The way the window.mvcClientValidationMetadata is used can help us though – the inline script pushes the latest validation data for the form onto this variable and the code above pops it back off. As such, we can just interrogate the length of the array when we start up and if there is no data there yet, retry after a short delay. If we keep doing that until it’s been processed all should be well with the world. So, my modified startup script is as follows;

$(function () {
    setupMvcValidation();

    function setupMvcValidation() {
        if (window.mvcClientValidationMetadata == undefined || window.mvcClientValidationMetadata.length < 1) {
            setTimeout(setupMvcValidation, 100);
            return;
        }

        var allFormOptions = window.mvcClientValidationMetadata;
        if (allFormOptions) {
            while (allFormOptions.length > 0) {
                var thisFormOptions = allFormOptions.pop();
                // ... same wire-up as the document.ready handler in MicrosoftMvcJQueryValidation.js ...
            }
        }
    }
});

and all is indeed well with the world. I think this should even cover the edge case where you have multiple forms in your partial view, each with its own validation, but I’ve yet to test it more thoroughly.
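The retry idea above boils down to a small, general helper – this is a sketch of the pattern only, with illustrative names, not code from MVC or jQuery;

```javascript
// Retry-until-ready: poll until the producer has pushed data into the
// array, then hand it to the consumer. Mirrors the startup script above,
// where the inline <script> block is the producer.
function whenPopulated(getArray, onReady, delayMs) {
    function check() {
        var arr = getArray();
        if (!arr || arr.length < 1) {
            setTimeout(check, delayMs);   // nothing there yet - try again shortly
            return;
        }
        onReady(arr);
    }
    check();
}
```

In the post’s terms, getArray would return window.mvcClientValidationMetadata and onReady would run the pop-and-wire-up loop.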

Thursday, 15 July 2010

Undoing early morning/late night stupidity with Mercurial

So laaaate this morning (around 3am) I was putting some finishing touches to a refactor I was working on for my own project, and I opened TortoiseHg to look at the status of my files. Loads were marked as modified, many for addition, and several that I’d removed were ready for removal.

Unfortunately for me, with all these changes selected in TortoiseHg’s file status dialog, I managed to hit the “Forget” button on the toolbar. Immediately my additions went back to unknown status and all my modifications were marked as removals (which I would assume is a revert, not actually a remove!).

Anyway, completely NOT what I was looking for, so now I needed to work out how to get my files added back into the repository and how to make sure my modifications would be taken into account. You can’t just click on a modification that is now a removal in TortoiseHg and select add again – it doesn’t allow it, not sure why – so it’s time to go back to the command line.

First off, dealing with the adds that have now been forgotten is simple; we can just re-run hg add and they will get pulled back in.

hg add

Dealing with the modifications which are now removals is more difficult. You could hg add each individual file back one by one, but that’s a massive pain in the ass, so instead use some clever old skool DOS;

for /f %f in ('hg stat -rn') do hg add %f

the /f says we’re looking to process a file list, each file being pushed into %f. The command to execute is in the brackets and quotes (-r lists only things tagged for removal and -n skips outputting the initial status character). The do clause then specifies what to do for each one of the files returned… in this case, hg add it. And voila, problem solved, and I can go to bed with my repository still in order….
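If you’d rather not remember the for /f incantation, the same filtering is easy to sketch in a few lines run under Node – this parses the default hg status output (with status prefixes) and keeps the paths flagged R; the helper name is mine, purely illustrative;

```javascript
// Pull the removed-file paths ("R <path>") out of plain `hg status` output.
function removedFiles(hgStatusOutput) {
    return hgStatusOutput.split('\n')
        .filter(function (line) { return line.indexOf('R ') === 0; })
        .map(function (line) { return line.slice(2); });
}
```

You’d feed this the captured output of hg status and then hg add each returned path.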

Friday, 9 July 2010

Nugget of the week.... sleep at a given time;

Got a long running task that you want to let finish, but need to remember to sleep your machine?

at [time] "rundll32 powrprof.dll,SetSuspendState"


Should do the trick.


I've got a small, cheap, linux server that I use purely to host my SVN repository. Just noticed something;



Wednesday, 30 June 2010

This is ridiculous!


Ever heard of atomic operations? If one bit fails, the entire thing should fail – don’t take my money and then not book my tickets! If you MAY have taken my money then you’d better damn well get in touch with ME, especially as it’s 10:01pm and your lines are closed. Ridiculous!!

Tuesday, 29 June 2010

Overriding the style of a jQuery UI datepicker

Recently, in my spare time, I’ve been working on a simple booking system for a friend of mine. I had the need to present a date selector using jQuery UI’s date picker, but I wanted to change the default behaviour as follows;

  • Dates in the past can’t be selected and must be stylised
  • Dates in the future can be selected, but the calendar should highlight any dates that are not available
  • I don’t want to go hunting and changing my jQuery UI theme styles, nor do I want to modify any jQuery UI code

The starting point – vanilla datepicker

Adding the following code;

// '#bookingDate' is illustrative - use your own input's selector
$('#bookingDate').datepicker({
    numberOfMonths: 3,
    showButtonPanel: true,
    dateFormat: 'dd MM yy'
});

Provided the following default behaviour;


Disabling dates in the past and dates that have bookings already

So this part is pretty simple; the component provides us with two hooks of note. The first, onChangeMonthYear, is fired when the user navigates the calendar between months or years (but not on first display). The second, beforeShowDay, is called before each day is rendered into the control and allows you to specify whether the date is available, any extra CSS to apply, and a tooltip to show.

I could use the onChangeMonthYear event to load my known events via a JSON call and then check the result in the beforeShowDay event, but to be honest I’m only interested in future bookings and there isn’t going to be a massive volume. As such, I can afford to load all the events with a single AJAX call during page start-up and then interrogate the result in beforeShowDay.

So, my code to implement the datepicker now looks like this;

// '#bookingDate' is illustrative - use your own input's selector
$('#bookingDate').datepicker({
    numberOfMonths: 3,
    showButtonPanel: true,
    dateFormat: 'dd MM yy',
    beforeShowDay: calendarDayShow
});

with the following function;

  1. function calendarDayShow(targetDate)
  2. {
  3.     var availableResult = [true, '', ''];
  4.     var bookedResult = [false, '', ''];
  6.     var now = new Date();
  7.     if (targetDate < now) return bookedResult;
  9.     var targetYear = targetDate.getFullYear();
  10.     var targetMonth = targetDate.getMonth();
  11.     var targetDay = targetDate.getDate();
  12.     if (typeof (availability[targetYear]) == "undefined") return availableResult;
  13.     if (typeof (availability[targetYear][targetMonth]) == "undefined") return availableResult;
  14.     if (typeof (availability[targetYear][targetMonth][targetDay]) == "undefined") return availableResult;
  15.     return bookedResult;
  16. }

The component expects this function to return an array in the format of [<<availability>>, <<css>>, <<tooltip>>]. It’s worth noting that my availability data is stored as a JSON object in this hierarchy; (If there is an entry, that date is booked).

  • YYYY
    • MM
      • DD = true
      • DD = true
  • eg: 2010
    • 6
      • 15 = true
      • 16 = true
So, in the above code, lines 3 and 4 define the available and booked responses. Lines 6-7 check whether the date being rendered is in the past, and if it is, return a booked result to prevent it from being selected. This now results in the following;


Notice that 1st Aug and 12 July are booked in this example.

Making it look how I want it.

This is all well and good, but I want past dates to appear as disabled and booked dates to appear with a red X through them, like this end result:


I’m pretty sure you already know the answer – the beforeShowDay event expects us to pass back availability, any extra CSS classes to apply, plus any tooltip we want. So we change our function thus;

function calendarDayShow(targetDate)
{
    var availableResult = [true, '', ''];
    var bookedResult = [false, 'bookedDayCalendar', 'Booked'];

    var now = new Date();
    if (targetDate < now) return [false, 'pastDayCalendar', 'Can\'t make bookings in the past!'];

    var targetYear = targetDate.getFullYear();
    var targetMonth = targetDate.getMonth();
    var targetDay = targetDate.getDate();
    if (typeof (availability[targetYear]) == "undefined") return availableResult;
    if (typeof (availability[targetYear][targetMonth]) == "undefined") return availableResult;
    if (typeof (availability[targetYear][targetMonth][targetDay]) == "undefined") return availableResult;
    return bookedResult;
}

So we’re now returning the extra CSS classes. We define these in our site’s stylesheet;

/* Overrides for jQuery UI */
.bookedDayCalendar { opacity: 1; }
.bookedDayCalendar span
{
    background-color: Black;
    background-position: center center;
    background-repeat: no-repeat;
    background-image: url(booked.png);
    color: #aaa;
    border: none;
}
.pastDayCalendar { opacity: 0.85; }
.pastDayCalendar span
{
    background-color: Black;
    background-image: none;
    text-decoration: line-through;
}

And voila, we get….


hmmm – not quite the desired result! The problem here is that many of the styles are inherited from jQuery UI, our reset.css and so on, and some of those styles are taking precedence over our new ones. As we are confident we want these attributes on this particular class, we can ensure they are applied with preference by adding !important to them in the CSS;

.bookedDayCalendar { opacity: 1 !important; }
.bookedDayCalendar span
{
    background-color: Black !important;
    background-position: center center !important;
    background-repeat: no-repeat !important;
    background-image: url(booked.png) !important;
    color: #aaa !important;
    border: none !important;
}
.pastDayCalendar { opacity: 0.85 !important; }
.pastDayCalendar span
{
    background-color: Black !important;
    background-image: none !important;
    text-decoration: line-through !important;
}

And we get the desired result.



Of course, you’ll want to use filter: Alpha(Opacity=…) as well for Internet Explorer;

.bookedDayCalendar { opacity: 1 !important; filter: Alpha(Opacity=100) !important; }
.bookedDayCalendar span
{
    background-color: Black !important;
    background-position: center center !important;
    background-repeat: no-repeat !important;
    background-image: url(booked.png) !important;
    color: #aaa !important;
    border: none !important;
}
.pastDayCalendar { opacity: 0.85 !important; filter: Alpha(Opacity=85) !important; }
.pastDayCalendar span
{
    background-color: Black !important;
    background-image: none !important;
    text-decoration: line-through !important;
}

Monday, 28 June 2010

6 week sprints? You’re not doing SCRUM!

I come across this quite a lot: “if you’re doing more than 30-day sprints, you’re doing scrum all wrong”. Whilst I agree with the sentiment, there are some projects where a longer sprint cycle can be beneficial – it all depends on your team structure and the work being delivered.

To set the background, for my current project, my dev team consists of;

  • 1 x architect (me)
  • 1 x onshore architect / team lead
  • 2 x onshore developers
  • 2 x offshore developers
  • 1 x sharepoint architect

We’re implementing a reasonably large custom-built system for our client and each piece of work is reasonably lengthy, with each epic consisting of a complex UI along with substantial work to implement a workflow spine process, which is then supported by several common sub-processes (with complementary UIs).

Obviously we break the sub-processes out into their own backlog items, and then break the epics down into use cases (formal stories) that can be implemented in each iteration, but we found that with 2-4 week iterations it wasn’t possible to cut the work any other way than to deliver a demonstrable product every other iteration. Literally, at the end of iteration A we would be in an unfinished, unusable state, whilst half way through iteration B we would finish that piece of work and start the next.

The reason for this is that we would implement 2 or 3 of the main use cases per iteration, essentially running multiple concurrent work streams. Wrong? Not in this case.

Many would argue that this was the wrong way to go about it – that we should have concentrated the entire sprint’s effort on one use case in order to complete it within the 4-week sprint – but I argue differently. Each use case is quite narrowly focussed, making it difficult for more than 2 people to work on the same use case at the same time. As such, Brooks’s law applies: too many cooks spoil the broth. We’d have developers standing on each other’s toes left and right, so we can’t add more resource to a use case, and the implementation of that one piece of work is too substantial to be delivered in a 2-4 week iteration.

So, we broke the project down into 6 week sprints. More regimented scrum masters would argue that we aren’t doing scrum as a result of this decision, but I’d argue that such a comment is ridiculous. We still adhere to the very principles of scrum: we work from a backlog, we plan sprints into a sprint backlog, we deliver burn-downs, we hold daily scrum meetings, we have a scrum master, a product owner and cross-functional teams, we give regular demos to the business, and we run retrospectives. The only discernible difference is an extra 2 weeks on the sprint length to cope pragmatically with the implementation of complex, atomic use cases.

We’ve been running this way for 12 months now, and the results have been great: open, honest transparency between the project team and the business, and, just as importantly, we delivered a month early. We’ve now rolled into another 6-8 months’ worth of work on the project and will continue to use this methodology.

Thursday, 24 June 2010

Display issues with the new editor in VS2010

I’m having major issues with the new editor in VS2010. The source window corrupts so often that it’s sometimes unusable. This morning, for instance, I’m opening .js files and getting the following corruption;


In addition, if I have two files open and I CTRL-TAB to switch between them, the display doesn’t refresh, so it appears that file 2 has the same source as file 1. I have the latest display drivers, and my laptop is a 6GB Lenovo W500, so performance shouldn’t really be an issue.

The only way I’ve been able to work around this is by zooming the editor in and back out again to force it to redraw its contents. Other than that, this is just a rant… I have no real solution, sorry.

Friday, 7 May 2010

T4 template to exclude LINQ to SQL generated classes from code analysis

I often use the sqlmetal tool to generate a L2SQL model from a database as follows;

sqlmetal /server:(local) /database:myDatabaseName 
         /views /functions /sprocs
         /language:C# /namespace:myProject.UI.Models.LinqSql
         /context:myDatabaseContext /pluralize

This then generates all of my database structure out to LINQ to SQL classes, eg;

[global::System.Data.Linq.Mapping.TableAttribute(Name="dbo.Bills")]
public partial class Bill : INotifyPropertyChanging, INotifyPropertyChanged

However, with FxCop code analysis turned on (you DO have it turned on and set to error, don’t you!?), the generated code produces a bunch of errors, such as CA2227 about making collections read-only;


This is generated code, but sqlmetal by default doesn’t add the attribute that excludes it from code analysis (the GeneratedCodeAttribute). You could manually edit the generated code each time you run sqlmetal to add this attribute, or manually maintain a file of partial classes with the attributes added there. I wanted something a bit more elegant though, so went off looking at T4 templates.

T4 templates allow us to write template-based code that generates the actual source code either when you save the template, when you manually “Run Custom Tool”, or as part of your build process. T4 templates use an ASP.NET-like syntax, but slightly different. First of all, I heartily recommend you get the T4 template editor from these guys: the free version is fine and gives you syntax highlighting and intellisense, which are missing in VS 2010.

Simply adding a .TT file to your solution, then writing the template code results in a code file being generated for you, nested within the .TT file;


Within this template, I am able to get a handle on the containing project, inspect the code model and generate partial classes with the necessary GeneratedCodeAttribute applied. This results in the following class being generated.

using System.CodeDom.Compiler;

namespace myProject.UI.Models.LinqSql
{
    [GeneratedCode("T4 LinqSqlFixer", "1.0")]
    public partial class EntityA {}

    [GeneratedCode("T4 LinqSqlFixer", "1.0")]
    public partial class EntityB {}

    [GeneratedCode("T4 LinqSqlFixer", "1.0")]
    public partial class EntityC {}

    [GeneratedCode("T4 LinqSqlFixer", "1.0")]
    public partial class Bill {}
}

With this in place, code analysis now ignores the generated LINQ to SQL code and I don’t have to muck around suppressing errors or, God forbid, turning off code analysis altogether.

My full T4 template is as follows;

<#@ template hostSpecific="true" #>
<#@ assembly name="System.Core.dll" #>
<#@ assembly name="EnvDTE.dll" #>
<#@ import namespace="EnvDTE" #>
<#@ import namespace="System.Collections.Generic" #>
// This file was generated by a T4 template. It attempts to add the GeneratedCode
// attribute to any classes generated by sqlmetal - provided your sqlmetal classes
// are in an isolated namespace.
//
// The reason being, if the GeneratedCode attribute is not associated with the
// generated classes, they are analysed by the code analysis tool.
//
// Don't modify the generated .cs file directly, instead edit the .tt file.
using System.CodeDom.Compiler;

<#
    // HERE: Specify the namespaces to fix up
    FixClassNameSpaces("myProject.UI.Models.LinqSql");
#>
<#+
void FixClassNameSpaces(string fixNamespace)
{
    WriteLine("namespace {0}\r\n{{", fixNamespace);
    string [] classes = GetClassesInNamespace(fixNamespace);
    foreach( string className in classes )
    {
        WriteLine("\t[GeneratedCode(\"T4 LinqSqlFixer\", \"1.0\")]");
        WriteLine("\tpublic partial class {0} {{}}\r\n", className);
    }
    WriteLine("}");
}

string [] GetClassesInNamespace(string ns)
{
    List<string> results = new List<string>();
    IServiceProvider host = (IServiceProvider) Host;
    DTE dte = (DTE) host.GetService(typeof(DTE));
    ProjectItem containingProject = dte.Solution.FindProjectItem(Host.TemplateFile);
    Project project = containingProject.ContainingProject;
    CodeNamespace nsElement = SearchForNamespaceElement(project.CodeModel.CodeElements, ns);
    if( nsElement != null )
    {
        foreach( CodeElement code in nsElement.Members )
        {
            if( code.Kind == vsCMElement.vsCMElementClass )
            {
                CodeClass codeClass = (CodeClass) code;
                results.Add( codeClass.Name );
            }
        }
    }
    return results.ToArray();
}

CodeNamespace SearchForNamespaceElement( CodeElements elements, string ns )
{
    foreach( CodeElement code in elements )
    {
        if( code.Kind == vsCMElement.vsCMElementNamespace )
        {
            CodeNamespace codeNamespace = (CodeNamespace)code;

            if( ns.Equals(code.FullName) )
            {
                // This is the namespace we're looking for
                return codeNamespace;
            }
            else if( ns.StartsWith(code.FullName) )
            {
                // Going in the right direction, descend into the namespace
                return SearchForNamespaceElement(codeNamespace.Members, ns);
            }
        }
    }
    return null;
}
#>

Friday, 16 April 2010

Programmatically clear down K2 process information?

In K2, your development environment can become very cluttered, very quickly. I searched for a tool to clear down all of the process instance data for processes that were still running, and the archive/log data for processes that had completed, to give me a clear, pristine and virginal K2 workspace ready to muck up again with more work-in-progress processes :)

That tool didn’t exist, so I wrote my own (and it was surprisingly simple!). Here are the key components.

First of all, to clear all currently running, active or errored process instances;

WorkflowManagementServer server = new WorkflowManagementServer();
try
{
    K2Connection.CreateConnection(server);

    ProcessInstanceCriteriaFilter filter = new ProcessInstanceCriteriaFilter();

    foreach (ProcessInstance instance in server.GetProcessInstancesAll(filter))
        server.DeleteProcessInstances(instance.ID, true);
}
catch (Exception ex)
{
    Program.Error(ex);
}
finally
{
    server.Connection.Close();
}

And secondly, the log data – this comes from a separate database which needs to be archived out.

// Create archive temp db
try
{
    CreateSqlTempDb();
}
catch (Exception)
{
    // ... code elided for clarity ...
    return;
}

WorkflowManagementServer server = new WorkflowManagementServer();
try
{
    K2Connection.CreateConnection(server);
    server.Archive(ArchiveConnectionString, "K2ServerLog", "_Archive", DateTime.Now.AddMonths(-24), DateTime.Now);
}
catch (Exception)
{
    // ... code elided for clarity ...
}
finally
{
    server.Connection.Close();
}

try
{
    DropSqlTempDb();
}
catch (Exception ex)
{
    // ... code elided for clarity ...
}

CreateSqlTempDb and DropSqlTempDb simply create and drop an empty database in SQL, which the archive tool then moves the data to. The ArchiveConnectionString is a standard connection string to the archive db you create.
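The bodies of CreateSqlTempDb and DropSqlTempDb aren’t shown above; a minimal sketch using plain ADO.NET might look like the following. The database name "K2Archive" and the connection string are illustrative assumptions, not values from the original tool:

```csharp
using System.Data.SqlClient;

// Sketch only: the "K2Archive" database name and the connection string
// are assumptions - substitute whatever your archive setup uses.
static class ArchiveDb
{
    private const string MasterConnectionString =
        "Data Source=.;Initial Catalog=master;Integrated Security=True";

    public static void CreateSqlTempDb()
    {
        ExecuteOnMaster("CREATE DATABASE [K2Archive]");
    }

    public static void DropSqlTempDb()
    {
        ExecuteOnMaster("DROP DATABASE [K2Archive]");
    }

    private static void ExecuteOnMaster(string sql)
    {
        using (SqlConnection connection = new SqlConnection(MasterConnectionString))
        {
            connection.Open();
            using (SqlCommand command = new SqlCommand(sql, connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }
}
```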

Obviously you don’t want to be running this on ANY production environments!!!!

Update existing SharePoint content types when deploying using a feature

I’ve been a little quiet recently, mainly because I’ve had my head down with a large SharePoint and K2 blackpearl project in Manchester; I’ve got lots to blog about but just haven’t had the time. However, I think this is pretty important - today I solved a little problem that was bugging me.

My scenario is this;

We have SharePoint sites, created from STP files as part of our line-of-business application (I know, I know, the STP bit is a bit stupid and it’s hard to update, but we are where we are), and the document libraries within these created sites use content types deployed in the root site, which are defined and deployed with a feature.

I needed to add a new field to one of the content types and remove a field from another. Sounds easy enough – I updated the XML for the definition, redeployed, and the definition in the site content type library updated as expected. What I didn’t expect, though, was that the sites already provisioned didn’t get those changes applied and, even worse, when I provisioned a new site (from the STP) the document library in that site also didn’t have the changes. It was almost as if the site document libraries had their own copy of the content types.

Of course, that’s EXACTLY what the problem is. Solution 1 is not to deploy changes through the feature at all, but instead to use the UI to add or remove the columns as necessary and select the option to also update anything that uses that content type. For me this is a non-starter – I’d have to re-export all my STPs so that future provisioned sites get the changes, as well as make the manual changes to the content type and remember to propagate them down – something that doesn’t sit well with my build and deploy strategy.

So, I found another solution. By attaching a feature receiver I can programmatically change content types and have those changes propagated down through everything that uses them. It’s still not ideal, but it works well. I keep my XML definition static and apply additive changes through the feature receiver in code, which looks like this;

public class ContentTypeFeatureReceiver : SPFeatureReceiver
{
    /// <remarks/>
    [SharePointPermission(SecurityAction.LinkDemand, ObjectModel = true)]
    public override void FeatureInstalled(SPFeatureReceiverProperties properties)
    {
    }

    /// <remarks/>
    [SharePointPermission(SecurityAction.LinkDemand, ObjectModel = true)]
    public override void FeatureUninstalling(SPFeatureReceiverProperties properties)
    {
    }

    /// <remarks/>
    [SharePointPermission(SecurityAction.LinkDemand, ObjectModel = true)]
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        SPSite site = properties.Feature.Parent as SPSite;
        if (site == null)
            return;

        SPWeb rootWeb = site.RootWeb;

        // Here, modify any of the existing content types
        rootWeb.AddFieldToContentType("FieldToAdd", "Content Type To Add To");
        rootWeb.RemoveFieldFromContentType("FieldToRemove", "Content Type To Remove From");
    }

    /// <remarks/>
    [SharePointPermission(SecurityAction.LinkDemand, ObjectModel = true)]
    public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
    {
    }
}

The AddFieldToContentType and RemoveFieldFromContentType methods are extension methods as follows;

public static class SPWebExtensions
{
    /// <remarks/>
    public static void AddFieldToContentType(this SPWeb site, string fieldName, string contentTypeName)
    {
        if (!site.Fields.ContainsField(fieldName))
        {
            Console.WriteLine("Could not add {0} to {1} - the field {0} does not exist!", fieldName, contentTypeName);
            return;
        }

        SPField field = site.Fields[fieldName];
        SPContentType contentType = site.ContentTypes[contentTypeName];
        if (contentType == null)
        {
            Console.WriteLine("Could not add {0} to {1} - the content type {1} does not exist!", fieldName, contentTypeName);
            return;
        }

        if (contentType.FieldLinks[field.InternalName] != null) return;
        contentType.FieldLinks.Add(new SPFieldLink(field));
        contentType.Update(true);
    }

    /// <remarks/>
    public static void RemoveFieldFromContentType(this SPWeb site, string fieldName, string contentTypeName)
    {
        if (!site.Fields.ContainsField(fieldName))
        {
            Console.WriteLine("Could not remove {0} from {1} - the field {0} does not exist!", fieldName, contentTypeName);
            return;
        }

        SPField field = site.Fields[fieldName];
        SPContentType contentType = site.ContentTypes[contentTypeName];
        if (contentType == null)
        {
            Console.WriteLine("Could not remove {0} from {1} - the content type {1} does not exist!", fieldName, contentTypeName);
            return;
        }

        if (contentType.FieldLinks[field.InternalName] == null) return;
        contentType.FieldLinks.Delete(field.InternalName);
        contentType.Update(true);
    }
}

Finally, you still have to define the site column before you can attach it.
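Defining the site column can be done in the feature XML, or in the same feature receiver. A hedged sketch of the programmatic route (the "FieldToAdd" name is the illustrative one used above, and SPFieldType.Text is just an example type):

```csharp
// Ensure the site column exists before attaching it to a content type.
// "FieldToAdd" and the Text field type are illustrative choices only.
if (!rootWeb.Fields.ContainsField("FieldToAdd"))
{
    rootWeb.Fields.Add("FieldToAdd", SPFieldType.Text, false);
    rootWeb.Update();
}
```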

Friday, 12 February 2010

Unexpected property persistence in a web part

I’m writing a web part at the moment, which, to cut a long story short, tracks a bunch of properties between post-backs using view state as you might expect;
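The kind of property I mean is a standard ViewState-backed one, sketched below. This is a minimal reconstruction (the CurrentPage name comes from the example discussed in the next paragraph); note there are deliberately no persistence attributes on it:

```csharp
// A typical ViewState-backed property - no Personalizable or similar
// attributes applied, which is what makes the persistence surprising.
public int CurrentPage
{
    get { return ViewState["CurrentPage"] == null ? 0 : (int)ViewState["CurrentPage"]; }
    set { ViewState["CurrentPage"] = value; }
}
```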


What I’ve found, though, is that if I add one of these web parts to my page and use it so that CurrentPage is set to, say, 5, and then edit the web part configuration, this property gets persisted too - coming back to the page later defaults the current page to 5! Is this a bug, or does SharePoint insist on persisting all public properties during configuration, even though I haven’t added any attributes to tell it to do so?

All I know is the only way I could get rid of this behaviour is to make the property protected or private, which isn’t a big deal, but it’s confusing that this happens at all.

Wednesday, 10 February 2010

Balsamiq Mockups - Review

Since the dawn of time, I’ve been producing user interface mock-ups for my projects, using them not only to specify what I’m building, but also to workshop ideas and concepts with users. They ultimately form documentation passed to developers along with the use cases and data model etc. I’ve always found the process of creating UI mocks quite tedious (cutting and pasting in something like Fireworks, for instance) and find that the more fidelity I put into these diagrams, the less valuable feedback users provide. They either;

  • Become afraid to comment, fearing that I will be offended by their input, or
  • Concentrate too much on trivial issues like the exact spacing between fields in the image, thinking it’s the final UI

High-fidelity diagrams offer an illusion of accuracy and therefore trigger these states of mind, and the cumbersome nature of the tools means I wouldn’t work with the users directly on the designs in workshops. So overall I tend not to share the screen annotations with the users directly, preferring instead to use just a whiteboard and keep the formal annotations for the technical team.

However, this week I got my hands on a tool called Balsamiq Mockups, an easy-to-use tool that offers a whiteboard-style interface and low-fidelity design elements. I decided to use it to mock up a new SharePoint data grid control I needed to build:


I produced this in about 15 minutes and it offers a good, low-fidelity idea of how the UI should function that I can share with both the technical team and the users. The users will feed back honest ideas, as they won’t be afraid to insult the low-fidelity diagram and won’t get bogged down with the details and colours of the UI. At the same time, I can share this with the developers and they won’t lose their initiative on how the forms should be implemented, as it’s clearly not that prescriptive.

The tool is very simple to use, is extensible with new control sets from Mockups to Go, exports to a variety of formats, and the different mockups can be linked together. Overall you can use this tool for just about anything. I’ve just bought 3 licences for the analysts and architects on my current project (it’s reasonably priced too!).

The one thing I won’t do, though, is start using this as a replacement for the whiteboard sessions with the users. Sure, you could easily sit with them and design UIs on a projector using this toolset, but I just find the whole whiteboard, scribbling and chatting method far more tactile. I’ll document those workshops using Balsamiq, though, and use that as part of the documentation sets for the technical and business teams respectively.