System.IO.FileNotFoundException : Filename: redirection.config

Written by Troy on November 15, 2016

A quick note for anyone who is tripped up by this. I am working on a project that uses some rather groovy SpecFlow.Net system tests to perform an end-to-end test of a web service. I was trying to set this up on our TeamCity build server, and decided to leverage Microsoft.Web.Administration (the ServerManager class) to manage the IIS setup.
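
For context, this is the sort of thing I'm doing with ServerManager (the site name, path and port below are made up for illustration):

    // requires a reference to Microsoft.Web.Administration.dll
    using (ServerManager manager = new ServerManager())
    {
        // Create a site for the system tests if it does not already exist.
        if (manager.Sites["SystemTestSite"] == null)
        {
            manager.Sites.Add("SystemTestSite", @"C:\inetpub\SystemTestSite", 8080);
            manager.CommitChanges();
        }
    }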

Got everything working wonderfully on my local machine, checked in and… build fail.

The rather ambiguous exception was:

System.IO.FileNotFoundException : Filename: redirection.config
Error: Cannot read configuration file

Huh?

This error is caused by the IIS Express version of Microsoft.Web.Administration trying to find a local configuration file. Why is this even happening? I’m trying to use proper IIS! On the build server, my assembly was referencing the wrong version of Microsoft.Web.Administration, which it found in the GAC. I think Microsoft are trying to make life easy here. I get that IIS Express should be a transparent surrogate for IIS, but I think there are some flaws with this system. At the very least, please use some meaningful exception text! (Even the file path property of the exception is empty, which is weak. It would have provided a meaningful clue, since ‘IISExpress’ is in the path.)

To fix this issue, when you reference C:\Windows\system32\inetsrv\Microsoft.Web.Administration.dll, make sure you set Specific Version to True on the reference. You can check your built assembly (with ILSpy or similar): the reference should be to assembly version 7.0.0.0 (file version 6.1.7601.17514). If your reference to Microsoft.Web.Administration is version 7.9 (file version 8.0.8418.0), then you’ve got the wrong (IIS Express) one.
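
For reference, this is roughly what the corrected reference looks like in the csproj (the exact attributes may vary with your project; treat this as a sketch):

  <Reference Include="Microsoft.Web.Administration, Version=7.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
    <SpecificVersion>True</SpecificVersion>
    <HintPath>C:\Windows\system32\inetsrv\Microsoft.Web.Administration.dll</HintPath>
  </Reference>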


Fixing ?disco URL ref and docRef attributes for asmx service

Written by Troy on August 23, 2016

So I had to touch an old .asmx service (byarhg), making minimal changes. So that I could keep the old service running (in case of breakage), the new one is hosted on a different, non-standard port number. This triggers that old issue: when the host sits behind a load balancer, poor old ASP.Net is none the wiser and sticks the backend port number into the addresses in the generated WSDL (e.g. when requesting http://loadbalancer.url/myService.asmx?wsdl):

  <wsdl:service name="Service">
    <wsdl:port name="ServiceSoap" binding="tns:ServiceSoap">
      <soap:address location="http://loadbalancer.url:1234/myService.asmx" />
    </wsdl:port>
    <wsdl:port name="ServiceSoap12" binding="tns:ServiceSoap12">
      <soap12:address location="https://loadbalancer.url:1234/myService.asmx" />
    </wsdl:port>
  </wsdl:service>

But that is easy to solve using the <soapExtensionReflectorTypes> configuration (under <webServices>) and providing an implementation of a SoapExtensionReflector. There are examples online (though most are very dated).
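
For context, a minimal sketch of such a reflector (the type and assembly names here are my own placeholders, not the original code):

    // requires System.Web.Services.Description and System.Text.RegularExpressions
    public class WsdlAddressFixReflector : SoapExtensionReflector
    {
        public override void ReflectMethod()
        {
            // Nothing to do per method; we only rewrite the finished description.
        }

        public override void ReflectDescription()
        {
            ServiceDescription description = ReflectionContext.ServiceDescription;
            foreach (Service service in description.Services)
            {
                foreach (Port port in service.Ports)
                {
                    foreach (object extension in port.Extensions)
                    {
                        // Soap12AddressBinding derives from SoapAddressBinding, so this catches both.
                        SoapAddressBinding address = extension as SoapAddressBinding;
                        if (address != null)
                        {
                            // Strip the backend port number, e.g. ":1234".
                            address.Location = Regex.Replace(address.Location, @":\d+", string.Empty);
                        }
                    }
                }
            }
        }
    }

Registered in web.config with something like:

  <system.web>
    <webServices>
      <soapExtensionReflectorTypes>
        <add type="Foo.Bar.WsdlAddressFixReflector, Foo.Bar"/>
      </soapExtensionReflectorTypes>
    </webServices>
  </system.web>

That fixes the ?wsdl URL, but the ?disco output is left in a half-fixed state: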

<?xml version="1.0" encoding="utf-8"?>
<discovery xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.xmlsoap.org/disco/">
  <contractRef ref="https://loadbalancer.uri:1234/Service.asmx?wsdl" docRef="https://loadbalancer.uri:1234/Service.asmx" xmlns="http://schemas.xmlsoap.org/disco/scl/" />
  <soap address="http://loadbalancer.uri/myService.asmx" xmlns:q1="http://foo.com/bar/" binding="q1:ServiceSoap" xmlns="http://schemas.xmlsoap.org/disco/soap/" />
  <soap address="http://loadbalancer.uri/myService.asmx" xmlns:q2="http://foo.com/bar/" binding="q2:ServiceSoap12" xmlns="http://schemas.xmlsoap.org/disco/soap/" />
</discovery>

Note that in the <contractRef> element, the ref and docRef attributes still carry the port number of the backend web service. If you have tools (or people) that simply put in the URL of the service (that is, without ?wsdl on the end), this will cause issues (for instance, with the New-WebServiceProxy Powershell cmdlet, or even in Visual Studio): they will attempt to request the WSDL at the wrong URL. This was the case for me, meaning this was a breaking change! How to fix it? I searched for ways to do it with the SoapExtensionReflector or otherwise, including using ILSpy to inspect the framework classes looking for hacks. But alas, there is no way to do it.

To solve it, I instead wrote an HTTP module that adjusts the generated discovery file using a ‘Filter’ stream. The filter stream can rewrite data as it is being written to the response, a nifty feature I had not used until now.

HTTP Module:

    public class DiscoFixHttpModule : IHttpModule
    {
        private static bool IsDisco(HttpApplication context)
        {
            return IsDisco(context.Request.Url);
        }

        private static bool IsDisco(Uri uri)
        {
            return string.Equals(uri.Query, "?disco", StringComparison.OrdinalIgnoreCase);
        }

        public void Init(HttpApplication context)
        {
            context.BeginRequest += HandleBeginRequest;
        }

        void HandleBeginRequest(object sender, EventArgs e)
        {
            HttpApplication context = (HttpApplication)sender;
            if (!IsDisco(context))
            {
                return;
            }
            context.Response.Filter = new DiscoFixFilter(context.Response.Filter);
        }

        public void Dispose()
        {
            // Nothing to clean up.
        }
    }

My ‘filter’ (stream):

    public class DiscoFixFilter : Stream
    {
        private readonly Stream output;
        private static readonly string trimPattern = ConfigurationManager.AppSettings["TrimWsdlPortUriPattern"];

        public DiscoFixFilter(Stream outputStream)
        {
            output = outputStream;
        }

        // Strips any ":port" portion from a matched URL.
        private static string RemovePortEvaluator(Match match)
        {
            return Regex.Replace(match.Value, @":\d+", string.Empty);
        }

        public override void Write(byte[] buffer, int offset, int count)
        {
            string data = Encoding.UTF8.GetString(buffer, offset, count);
            data = Regex.Replace(data, trimPattern, RemovePortEvaluator);
            byte[] rewritten = Encoding.UTF8.GetBytes(data);
            output.Write(rewritten, 0, rewritten.Length);
        }

        // The remaining Stream members delegate to, or describe, the wrapped response stream.
        public override void Flush() { output.Flush(); }
        public override bool CanRead { get { return false; } }
        public override bool CanSeek { get { return false; } }
        public override bool CanWrite { get { return true; } }
        public override long Length { get { return output.Length; } }
        public override long Position
        {
            get { return output.Position; }
            set { output.Position = value; }
        }
        public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
        public override long Seek(long offset, SeekOrigin origin) { return output.Seek(offset, origin); }
        public override void SetLength(long value) { output.SetLength(value); }
    }

Add the entry for the module in web.config. I keep the regular expression in an appSetting so it can accommodate different URLs without a recompile:

...
  <system.webServer>
    <modules>
      <add name="DiscoFix" type="Foo.Bar.DiscoFixHttpModule, Foo.Bar"/>
    </modules>
  </system.webServer>
...
  <appSettings>
    <add key="TrimWsdlPortUriPattern" value="\bhttp:\/\/loadbalancer(?:qa|test)?\.uri\:\d+\/"/>
  </appSettings>

And that is enough to fix my ?disco URL:

<?xml version="1.0" encoding="utf-8"?>
<discovery xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.xmlsoap.org/disco/">
  <contractRef ref="https://loadbalancer.uri/myService.asmx?wsdl" docRef="https://loadbalancer.uri/myService.asmx" xmlns="http://schemas.xmlsoap.org/disco/scl/" />
  <soap address="http://loadbalancer.uri/myService.asmx" xmlns:q1="http://foo.com/bar/" binding="q1:ServiceSoap" xmlns="http://schemas.xmlsoap.org/disco/soap/" />
  <soap address="http://loadbalancer.uri/myService.asmx" xmlns:q2="http://foo.com/bar/" binding="q2:ServiceSoap12" xmlns="http://schemas.xmlsoap.org/disco/soap/" />
</discovery>

No more port numbers.


Creating a CSR with subject alternate names using BouncyCastle(.Net)

Written by Troy on August 4, 2016

I’ll firstly proclaim that I am out of my depth and don’t know what I’m doing. I’m not familiar with the structure of certificates or CSRs, but internet searches did not reveal much information for what I was trying to do. So here are the results of my hacking, in case they prove useful to you :)

To create a simple CSR with SANs (subject alternative names), do the following:

private string GenerateRequest(string subjectDn, ICollection<string> sans)
{
    X509Name name = new X509Name(subjectDn);

    RsaKeyPairGenerator kpg = new RsaKeyPairGenerator();
    kpg.Init(new KeyGenerationParameters(new SecureRandom(), 2048));
    AsymmetricCipherKeyPair kp = kpg.GenerateKeyPair();

    ISignatureFactory sigFactory = new Asn1SignatureFactory("SHA256WITHRSA", kp.Private);

    Asn1Set attributes = null;
    if (sans.Count > 0)
    {
        GeneralNames names = new GeneralNames(
            sans.Select(n => new GeneralName(GeneralName.DnsName, n)).ToArray()
            );

        Asn1Sequence sanSequence = new DerSequence(X509Extensions.SubjectAlternativeName, new DerOctetString(names));
        Asn1Sequence container = new DerSequence(sanSequence);
        Asn1Set extensionSet = new DerSet(container);
        Asn1Sequence extensionRequest = new DerSequence(PkcsObjectIdentifiers.Pkcs9AtExtensionRequest, extensionSet);
        attributes = new DerSet(extensionRequest);
    }

    Pkcs10CertificationRequest csr = new Pkcs10CertificationRequest(sigFactory, name, kp.Public, attributes, kp.Private);

    StringBuilder pemString = new StringBuilder();
    PemWriter pemWriter = new PemWriter(new StringWriter(pemString));
    pemWriter.WriteObject(csr);
    pemWriter.Writer.Flush();

    return pemString.ToString();
}
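
Calling it looks something like this (the subject and SAN values are just illustrative):

string pem = GenerateRequest(
    "C=AU, O=Foo, CN=foo-na.com",
    new[] { "foo-na.com", "foo-eu.com" });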

Now, I can’t account for the structure. If you know how the internal structure of a CSR should look, well I guess you’re not reading this :) Anyway, I used the following code to dump out a CSR that I created with openssl, and built up my sets and sequences based on that dump. Good luck!

private string GetIndent(int indentLevel)
{
    return new string(' ', indentLevel * 2);
}

private void WriteIndent(int indentLevel, string format, params object[] args)
{
    string detail = string.Format(format, args);
    Console.WriteLine("{0}- {1}", GetIndent(indentLevel), detail);
}

private void DumpItem(Pkcs10CertificationRequest certificate)
{
    Console.WriteLine("algorithm: {0}", certificate.SignatureAlgorithm.Algorithm);
    CertificationRequestInfo requestInfo = certificate.GetCertificationRequestInfo();
    Console.WriteLine("subject: {0}", requestInfo.Subject);
    DumpItem(0, requestInfo.Attributes);
}

private void DumpItem(int indentLevel, Asn1Object item)
{
    if (item == null)
    {
        return;
    }
    DerObjectIdentifier identifier = item as DerObjectIdentifier;
    if (identifier != null)
    {
        DumpItem(indentLevel, identifier);
        return;
    }
    DerSequence sequence = item as DerSequence;
    if (sequence != null)
    {
        DumpItem(indentLevel, sequence);
        return;
    }
    DerSet set = item as DerSet;
    if (set != null)
    {
        DumpItem(indentLevel, set);
        return;
    }
    DerOctetString octet = item as DerOctetString;
    if (octet != null)
    {
        DumpItem(indentLevel, octet);
        return;
    }
    Assert.Fail("Can't yet handle a '{0}'", item.GetType()); // Yeah I'm using NUnit
}

private void DumpItem(int indentLevel, DerObjectIdentifier identifier)
{
    WriteIndent(indentLevel, "identifier {0}", identifier.Id);
}

private void DumpItem(int indentLevel, DerOctetString octet)
{
    StringBuilder asciiString = new StringBuilder();
    foreach (byte b in octet.GetOctets())
    {
        if (b < ' ' || b > 126)
        {
            asciiString.Append('_');
        }
        else
        {
            asciiString.Append((char) b);
        }
    }
    WriteIndent(indentLevel, "octet string: {0}", octet);
    WriteIndent(indentLevel, "\"{0}\"", asciiString.ToString());
}

private void DumpItem(int indentLevel, DerSet set)
{
    WriteIndent(indentLevel, "set {0} with {1} items", set.GetType(), set.Count);
    foreach (Asn1Object entry in set)
    {
        DumpItem(indentLevel + 1, entry);
    }
}

private void DumpItem(int indentLevel, DerSequence sequence)
{
    WriteIndent(indentLevel, "sequence {0}, with {1} items", sequence.GetType(), sequence.Count);
    foreach (Asn1Object entry in sequence)
    {
        DumpItem(indentLevel + 1, entry);
    }
}

private Pkcs10CertificationRequest ParseString(string request)
{
    using (StringReader stringReader = new StringReader(request))
    {
        PemReader reader = new PemReader(stringReader);
        Pkcs10CertificationRequest result = (Pkcs10CertificationRequest) reader.ReadObject();
        Assert.IsNotNull(result);
        return result;
    }
}

public void Inspect_Cert()
{
    string csr = @"-----BEGIN CERTIFICATE REQUEST----- blah blha...";
    Pkcs10CertificationRequest inspectThis = ParseString(csr);
    DumpItem(inspectThis);
}

/*
The output will look something like:
algorithm: 1.2.840.113549.1.1.5
subject: C=...
- set Org.BouncyCastle.Asn1.DerSet with 1 items
  - sequence Org.BouncyCastle.Asn1.DerSequence, with 2 items
    - identifier 1.2.840.113549.1.9.14
    - set Org.BouncyCastle.Asn1.DerSet with 1 items
      - sequence Org.BouncyCastle.Asn1.DerSequence, with 1 items
        - sequence Org.BouncyCastle.Asn1.DerSequence, with 2 items
          - identifier 2.5.29.17
          - octet string: #30368...
          - "0___foo-na.com__foo-eu.com..."

*/

Setting the proxy used for PowerShell Gallery

Written by Troy on July 18, 2016

> Register-PSRepository -Name PSGallery -SourceLocation 'https://www.powershellgallery.com/api/v2/' -InstallationPolicy Trusted

Register-PSRepository : The specified Uri 'https://www.powershellgallery.com/api/v2/' for parameter 'SourceLocation' is an invalid Web Uri. Please ensure that it meets the Web Uri requirements.
At line:1 char:1
+ Register-PSRepository -Name PSGallery -SourceLocation 'https://www.po ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (https://www.pow...ery.com/api/v2/:String) [Register-PSRepository], ArgumentException
    + FullyQualifiedErrorId : InvalidWebUri,Register-PSRepository

Wow… what a frustrating error. I’m sure I’ve done this before… What – is – wrong…?

The delay before the error felt indicative of a web timeout, and my corporate life lives behind a team of proxies. Normally I need to set a proxy on System.Net.WebClient before retrieving anything external. A look under the hood shows that the PowerShellGet module leverages System.Net.WebRequest, which luckily means we can fix our repository registration by setting the default proxy:

> [system.net.webrequest]::defaultwebproxy = new-object system.net.webproxy('http://foo-bar-baz:8080')

> [system.net.webrequest]::defaultwebproxy


Address               : http://foo-bar-baz:8080/
BypassProxyOnLocal    : False
BypassList            : {}
Credentials           :
UseDefaultCredentials : False
BypassArrayList       : {}


> Register-PSRepository -Name PSGallery -SourceLocation 'https://www.powershellgallery.com/api/v2/' -InstallationPolicy Trusted

> find-module xNetworking

Version    Name            Type       Repository           Description
-------    ----            ----       ----------           -----------
2.10.0.0   xNetworking     Module     PSGallery            Module with DSC Resources for Networking area

Okay… we’re on our way again :)
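
As an aside, if your proxy also requires authentication (mine did not), you can probably get away with also passing along your default credentials, something like:

> [system.net.webrequest]::defaultwebproxy.credentials = [system.net.credentialcache]::defaultnetworkcredentials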


Re-mapping Lenovo Thinkpad Yoga Keyboard – [end] and [insert] keys

Written by Troy on February 2, 2015

I recently purchased a new Lenovo Thinkpad Yoga. I bought it to be a workhorse, and chose a Lenovo due to the good quality keyboards (sorry, Surface Pro). Like many modern machines, the function keys double up to control screen brightness, volume and so on, and this is the default behaviour. To access the actual function keys (e.g. [F5], run!) you need to hold the [Fn] key. Helpfully, function lock (‘FnLk’) can be enabled full time by pressing [Fn]+[Esc]. The dilemma as a programmer is that [end] and [insert] share a key, and by enabling function lock, you lose the [end] key and get the cursed [insert] key by default. Productivity killer!

[Image: The layout of a compact Lenovo keyboard, as used on the Thinkpad Yoga. Note the repurposed function keys.]

[Image: The home, end, delete and backspace keys, and their unloved cousin, insert.]

I found information on the Windows registry value that allows you to re-map keys (search for ‘Scan code mapper for keyboards’). That’s all well and good… but what on earth are the hexadecimal scan codes for [insert] and [end]? Some page on Geocities? :)

Using this information, here is the key mapping block I want:

00000000 <-- header version
00000000 <-- header flags
00000003 <-- number of entries (3, including null terminator)
e04fe052 <-- map [end] to [insert]
e052e04f <-- map [insert] to [end]
00000000 <-- null terminator

Or, as exported from the registry (the values are little-endian):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout]
"Scancode Map"=hex:00,00,00,00,00,00,00,00,03,00,00,00,52,e0,4f,e0,4f,e0,52,e0,\
00,00,00,00
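
Alternatively, the same value can be written from an elevated PowerShell prompt (a sketch of the same bytes as above; you still need to log off or reboot before the mapping takes effect):

$bytes = [byte[]](0x00,0x00,0x00,0x00, 0x00,0x00,0x00,0x00, 0x03,0x00,0x00,0x00, 0x52,0xE0,0x4F,0xE0, 0x4F,0xE0,0x52,0xE0, 0x00,0x00,0x00,0x00)
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Keyboard Layout' -Name 'Scancode Map' -PropertyType Binary -Value $bytes -Force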

Now that the mapping is reversed, I can keep function lock enabled to access the function keys, and keep [end] at hand for getting to line/document endings. Happily, I typed this post with a handy [end] key. Hurrah!

You can download the registry file here. Remove the .txt extension and you can merge it into your registry.


Managing IIS configSections programmatically via script

Written by Troy on October 17, 2012

Looking for a way to add configSections to the IIS schema, I came across this useful post by Kanwaljeet Singla of the IIS team, about using the Microsoft.ApplicationHost.WritableAdminManager class to manage IIS configuration. I’m not aware of another API that allows you to do so.

Based on that, I created the following handy script to add and remove configuration sections from the IIS schema.

var action;
var path;
var overrideMode = 'Deny';
var allowDef = 'MachineToApplication';

function addSection(sectionGroup, sectionName) {
	var section = null;
	for (var i = 0; i < sectionGroup.Sections.Count; i++) {
		var candidate = sectionGroup.Sections.Item(i);
		if (candidate.Name == sectionName) {
			section = candidate;
			WScript.Echo('The section "' + sectionName + '" is already installed...');
			if (section.OverrideModeDefault == overrideMode && section.AllowDefinition == allowDef) {
				WScript.Echo(' ... and configured correctly');
				return false;
			}
			break;
		}
	}

	if (section == null) {
		WScript.Echo('Creating new section, "' + sectionName + '".');
		section = sectionGroup.Sections.AddSection(sectionName);
	}
	else {
		WScript.Echo(' ... updating section.');
	}
	section.OverrideModeDefault = overrideMode;
	section.AllowDefinition = allowDef;
	return true;
}


function checkArgs() {
	if (action == null || path == null) {
		printUsage();
		WScript.Quit(1);
	}
}


function doWork() {
	var admin = new ActiveXObject("Microsoft.ApplicationHost.WritableAdminManager");

	path.match(/^(.*)\/(\S+?)$/);
	var parentPath = RegExp.$1;
	var sectionName = RegExp.$2;
	var sectionGroup = getSectionGroup(admin, parentPath);
	var saveRequired = false;
	
	switch (action) {
		case 'add':
			saveRequired = addSection(sectionGroup, sectionName);
			break;
		case 'remove':
			saveRequired = removeSection(sectionGroup, sectionName);
			break;
		default:
			WScript.Echo('The action is unknown: ' + action);
			printUsage();
			WScript.Quit(2);
	}
	
	if (saveRequired) {
		WScript.Echo('Saving changes.');
		admin.CommitChanges();
	}
	else {
		WScript.Echo('No save required.');
	}
	WScript.Echo('Done.');
}


function getSectionGroup(admin, path) {
	var configManager = admin.ConfigManager;
	var appHostConfig = configManager.GetConfigFile("MACHINE/WEBROOT/APPHOST");
	var sectionGroup = appHostConfig.RootSectionGroup;

	var pathSegments = path.split('/');
	while (pathSegments.length > 0) {
		var sectionGroupName = pathSegments.shift();
		sectionGroup = sectionGroup.Item(sectionGroupName);
		WScript.Echo(" ... section group: " + sectionGroup.Name);
	}
	return sectionGroup;
}


function printUsage() {
	WScript.Echo('usage: cscript configSections.js action:add|remove path:system.webServer/something/or/other');
	WScript.Echo('                                 allowDefinition:Everywhere|MachineOnly|MachineToWebRoot|MachineToApplication|AppHostOnly');
	WScript.Echo('                                 overrideMode:Allow|Deny');
	WScript.Echo();
}


function processArgs() {
	var args = WScript.Arguments;

	for (var i = 0; i < args.length; i++) {
		var arg = args(i);
		if (arg.match(/^action:(add|remove)$/)) {
			action = RegExp.$1;
			continue;
		}
		if (arg.match(/^path:([\S]+)$/)) {
			path = RegExp.$1;
			continue;
		}
		if (arg.match(/^allow[dD]efinition:(Everywhere|MachineOnly|MachineToWebRoot|MachineToApplication|AppHostOnly)$/)) {
			allowDef = RegExp.$1;
			continue;
		}
		if (arg.match(/^override[mM]ode:(Allow|Deny)$/)) {
			overrideMode = RegExp.$1;
			continue;
		}
		WScript.Echo('*** Unknown arg: ' + arg);
		printUsage();
		WScript.Quit(1);
	}
}


function removeSection(sectionGroup, sectionName) {
	for (var i = 0; i < sectionGroup.Sections.Count; i++) {
		var section = sectionGroup.Sections.Item(i);
		if (section.Name == sectionName) {
			WScript.Echo('Removing section, "' + sectionName + '".');
			sectionGroup.Sections.DeleteSection(sectionName);
			return true;
		}
	}

	WScript.Echo('The section "' + sectionName + '" does not exist.');
	return false;
}


processArgs();
checkArgs();
doWork();

You can use it as follows:

cscript //nologo configSections.js action:add path:system.webServer/security/authentication/superDuperAuthentication

Normally I'd have a wrapper .bat script. Also, make sure you copy your schema file into the inetsrv folder.
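
For example (the schema file name here is made up), the schema copy and a later removal might look like:

copy MySection_schema.xml %windir%\System32\inetsrv\config\schema\
cscript //nologo configSections.js action:remove path:system.webServer/security/authentication/superDuperAuthentication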

Interestingly I could not get this working with Powershell. Can anyone tell me why? Is this a limitation of Powershell and COM?


Mocking your functions in Powershell unit tests

Written by Troy on September 13, 2012

I’m currently using PSUnit to unit test my Powershell scripts. It can be tricky to unit test a ‘glue’ language, since you may be calling many legacy scripts and console applications. However, the beauty of Powershell’s syntactic style is that it leads you to cloak underlying operations behind meaningful ‘verb-noun’ function names. For example:

function Install-MyThing
{
   if ( !(Test-MyThing) )
   {
      Invoke-Expression -Command "\\foo\bar\thing.exe install /wibble"
   }
}

function Test-MyThing
{
   $output = \\foo\bar\thing.exe validate
   return ($output -ne $null) -and ($output -like "*things are good*")
}

Which then allows us to think about mocking. My first forays led me to use aliases to mock calls to external commands, which is entirely useful. However, I came up with a simple mechanism that allows me to mock functions:

function script:Backup-Function($name)
{
    $tempName = "function:script:pre-mock-$name"
    if ( !(Test-Path $tempName) )
    {
        copy function:\$name $tempName
    }
}

function script:New-MockFunction($name, $scriptBlock)
{
    Backup-Function $name
    Set-Item -Path function:\$name -Value $scriptBlock
}

function script:Remove-MockFunctions
{
    dir function:pre-mock-* | % { Restore-Function $_.Name.substring(9) }    
}

function script:Restore-Function($name)
{
    copy function:\pre-mock-$name function:script:$name
    del function:\pre-mock-$name
}
...

function Get-FooBar
{
    if (Test-Wibble)
    {
         return "baz"
    }
    else { ... }
}

function Test-Wibble
{
    # something terribly complex...
}

function Test.Get-FooBar_ReturnsBaz_IfWibble
{
    SetUp
    New-MockFunction -Name Test-Wibble -ScriptBlock { return $true }

    $foo = Get-FooBar

    Assert-That -ActualValue $foo -Constraint { $actualValue -eq "baz" }
    Remove-MockFunctions # or put in SetUp or TearDown etc
}

The Powershell function: provider is a tricky thing!

  • Backup-Function simply duplicates the function under a new name, if a backup does not already exist.
  • New-MockFunction backs the function up before replacing its definition with our mocked script block.
  • Restore-Function copies the backed-up body over the redefined (mock) function, and is called for every backed-up function by Remove-MockFunctions.

Take note of your scopes here: this sample only applies to script: scope, but that seemed appropriate given I was unit testing a Powershell script file.


Fixing csproj builds that reference vcxproj files using AssignProjectConfiguration task

Written by Troy on August 25, 2012

The problem with the solution architecture of Visual Studio is that you cannot customize it, since a solution is not an MSBuild file (I hope the MSBuild team changes this). This means that for any non-trivial build, you are likely to implement your own root build file in order to support custom targets (for testing, deploying, etc.). This subjects you to the vagaries and nuances of MSBuild that you’re normally shielded from by Visual Studio.

A problem I ran into was with C# projects (.csproj) that reference a C++ project (.vcxproj). For 32-bit builds, a csproj uses the platform ‘x86’, while a vcxproj uses ‘Win32’. Hence if you attempt to build your csproj file on its own, you will get an error like:

C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\Microsoft.Cpp.InvalidPlatform.Targets(23,7): error MSB8007: The Platform for project 'blah.vcxproj' is invalid. Platform='x86'. You may be seeing this message because you are trying to build a project without a solution file, and have specified a non-default Platform that doesn't exist for this project. [C:\dev\spike\blah.vcxproj]

So you need a way to map x86 to Win32 for the vcxproj references only. Luckily, there is the somewhat cryptic and barely documented AssignProjectConfiguration task. I’m not going to pretend I really understand how it works, but from the docs, and some peeking under the hood, I kludged together the following working solution:

<Target Name="PrepProjectConfiguration" BeforeTargets="PrepareForBuild" Condition="'$(Platform)' == 'x86'">
   <AssignProjectConfiguration
         CurrentProjectConfiguration="$(Configuration)"
         CurrentProjectPlatform="$(Platform)"
         ProjectReferences="@(ProjectReference)"
         ResolveConfigurationPlatformUsingMappings="true">
      <Output TaskParameter="AssignedProjects" ItemName="ProjectReferenceWithConfiguration" />
   </AssignProjectConfiguration>
   <ItemGroup>
      <ProjectReference Remove="@(ProjectReferenceWithConfiguration)" />
   </ItemGroup>
   <Message Text="  regular reference %(ProjectReference.Identity)" />
   <Message Text="re-mapped reference %(ProjectReferenceWithConfiguration.Identity) - %(ProjectReferenceWithConfiguration.Configuration)|%(ProjectReferenceWithConfiguration.Platform)" />
</Target>

To explain this:

  • Why wire up before the target PrepareForBuild? Running msbuild with detailed output indicated this was a decent enough place to plop this in.
  • Why the condition for x86? I’m at a loss as to why, but this task behaves differently for x64. Anyway, we only need to solve this problem for x86.
  • You’ll notice I have to remove the output of AssignedProjects from the ProjectReference item group. Without this, the csproj will attempt to build the vcxproj twice, once for Win32 and again for x86.
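
With that in place, a 32-bit build from the command line is just the usual invocation (project name illustrative):

msbuild MyApp.csproj /t:Build /p:Configuration=Release /p:Platform=x86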

Happy building!


CI Builds for WCF/ASP.Net apps using MSBuild

Written by Troy on July 11, 2012

MSBuild can be a bit of a bear, and having a build procedure around .Net web apps has never been neat or easy. Apparently Hotmail and Bing are deployed by some guy at Microsoft clicking the ‘publish’ button in Visual Studio. But for those of us creating mission-critical web apps, this is not the thing to do (to quote Green Velvet).

We use a set of centralized imports in all our projects to standardize the behaviour of all builds with a minimum of fuss. These excerpts are the hacks you can use to get your build server churning out your web apps. You may need to play with the ordering in your *proj file, since some of the default MSBuild targets will override some properties (WebProjectOutputDir, for instance).

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

<!-- ... -->

<PropertyGroup>
	<!-- path where you want your builds -->
	<BuildRoot>c:\build\foo\bar</BuildRoot>

	<!-- Can be a handy property, here we use it to name our output path -->
	<ProjectFolderName>$( [System.Text.RegularExpressions.Regex]::Match( $(MSBuildProjectDirectory), "[^\\]+$").Value )</ProjectFolderName>

	<!-- Makes our scripts a little neater, you will find this GUID in your web app csproj -->
	<WebProjectGuid>349c5851-65df-11da-9384-00065b846f21</WebProjectGuid>
</PropertyGroup>


<!-- Properties we set/override if it's a web project.  Notice the condition? -->
<PropertyGroup Condition="$(ProjectTypeGuids.Contains($(WebProjectGuid)))">
	<!-- Just for our use, not used by any standard msbuild targets -->
	<WebBuildRoot>$(BuildRoot)\web</WebBuildRoot>

	<!-- Content gets copied here by _WPPCopyWebApplication with transforms, etc -->
	<WebProjectOutputDir>$(WebBuildRoot)\$(ProjectFolderName)</WebProjectOutputDir>

	<!-- This is where assemblies end up for a csproj -->
	<OutputPath>$(WebProjectOutputDir)\bin\</OutputPath>
</PropertyGroup>

<!-- ... -->

<!-- This target does nothing, but causes the '_WPPCopyWebApplication' to get invoked after
the build target, neatly putting our web project in our build folder, yay! -->
<Target Name="PostWebBuildContentCopy" AfterTargets="Build"
	Condition="$(ProjectTypeGuids.Contains($(WebProjectGuid)))"
	DependsOnTargets="_WPPCopyWebApplication">
</Target>

<!-- ... -->
</Project>

Now, invoking msbuild (or msbuild /t:Build if ‘build’ is not your default target) will cause your assemblies to be built to c:\build\foo\bar\web\WcfAppName\bin, with the appropriate web content placed in c:\build\foo\bar\web\WcfAppName.
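
For example, overriding the build root from the command line (project name illustrative; command-line properties take precedence over those defined in the project):

msbuild MyWcfApp.csproj /t:Build /p:Configuration=Release /p:BuildRoot=c:\build\foo\bar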



Quick bit – How to remove isapi filters and script mappings from IIS7 with Powershell

Written by Troy on June 27, 2012

The documentation for the WebAdministration module, used to administer IIS7 with Powershell, has simple examples of how to add many things. Removing items in the pipeline is a little less clear and requires a bit of trial and error. Here’s a quick example that hopefully saves you a bit of digging:

$isapiName = "fooBarIsapi.dll"

# Remove the ISAPI filter from restriction list
Get-WebConfiguration -PSPath IIS:\Sites -Filter "/system.webServer/security/isapiCgiRestriction/add" `
	| where { $_.Path -ilike "*$isapiName" } `
	| % { Clear-WebConfiguration -PSPath $_.PSPath -Filter $_.ItemXPath -Location $_.Location }

# Remove all instances of the ISAPI filter
Get-WebConfiguration -PSPath IIS:\Sites -Recurse -Filter "/system.webServer/isapiFilters/filter" `
	| where { $_.Path -ilike "*$isapiName" } `
	| % { Clear-WebConfiguration -PSPath $_.PSPath -Filter $_.ItemXPath -Location $_.Location }

# Remove all script handlers leveraging the ISAPI filter
Get-WebConfiguration -PSPath IIS:\Sites -Recurse -Filter "/system.webServer/handlers/add" `
	| where { $_.scriptProcessor -ilike "*$isapiName" } `
	| % { Clear-WebConfiguration -PSPath $_.PSPath -Filter $_.ItemXPath -Location $_.Location }

You may even like to encapsulate these into functions to make your pipeline a little more manageable.

function Get-WebIsapiFilters
{
	Get-WebConfiguration -PSPath IIS:\Sites -Recurse -Filter "/system.webServer/isapiFilters/filter"
}

function Get-WebScriptHandlers
{
	Get-WebConfiguration -PSPath IIS:\Sites -Recurse -Filter "/system.webServer/handlers/add"
}

function Clear-WebItem
{
	Clear-WebConfiguration -PSPath $_.PSPath -Filter $_.ItemXPath -Location $_.Location
}

# Do I have any 'foo' script handlers?
Get-WebScriptHandlers | where { $_.scriptProcessor -ilike "*foobar.dll" }

# Delete some ISAPI filters...
Get-WebIsapiFilters | where { $_.Path -ilike "*fooIisapi.dll" } | % { Clear-WebItem }
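
If you prefer, Clear-WebItem could instead accept pipeline input directly (a variation on the above, not what I originally ran):

function Clear-WebItem
{
	process
	{
		Clear-WebConfiguration -PSPath $_.PSPath -Filter $_.ItemXPath -Location $_.Location
	}
}

Get-WebIsapiFilters | where { $_.Path -ilike "*fooIisapi.dll" } | Clear-WebItem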