Updating Visual Studio Extensions

I spent some time this week researching what would be needed to update some of my Visual Studio 2010 extensions to support Visual Studio 2012. I’ve now managed to do so, and would like to share what I found in case anyone else finds it useful.

Warning: This post was written and tested with the Visual Studio 2012 release candidate and I have no clue how well it will work on the final release. I’ll update it or post again once it comes out if needed.

The first thing to try was, of course, to take the existing code, migrate it into a VS2012 project, and update all references to Visual Studio assemblies. That worked as far as building goes, but the extensions still would not work. It’s not that they caused any errors; they just didn’t do anything.

In this particular case, I focused on two of my extensions: KeywordClassifier and BetterXml. Both rely on the same mechanism: layering an ITagger&lt;ClassificationTag&gt; on top of the tags produced by the original provider, through the use of an ITagAggregator&lt;ClassificationTag&gt;.

After a bit of debugging, I discovered the reason the original code was no longer working: the ITagAggregator&lt;ClassificationTag&gt; instance returned by Visual Studio would simply return an empty list when GetTags() was called.

With some experimentation, I realized that while asking for an ITagAggregator&lt;ClassificationTag&gt; no longer worked, asking for an ITagAggregator&lt;IClassificationTag&gt; (that is, asking for the interface instead of the concrete type) would indeed work. Plus, the same code works just as well in VS2010!

return new KeywordTagger(
  ClassificationRegistry,
  Aggregator.CreateTagAggregator<IClassificationTag>(buffer)
) as ITagger<T>;
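
For context, here is a sketch of the provider that snippet lives in. The MEF plumbing shown is the standard tagger-provider pattern; the exact attribute values and member names in KeywordClassifier may differ:

using System.ComponentModel.Composition;
using Microsoft.VisualStudio.Text;
using Microsoft.VisualStudio.Text.Classification;
using Microsoft.VisualStudio.Text.Tagging;
using Microsoft.VisualStudio.Utilities;

[Export(typeof(ITaggerProvider))]
[ContentType("code")]
[TagType(typeof(ClassificationTag))]
internal sealed class KeywordTaggerProvider : ITaggerProvider {
  [Import]
  internal IClassificationTypeRegistryService ClassificationRegistry { get; set; }

  [Import]
  internal IBufferTagAggregatorFactoryService Aggregator { get; set; }

  public ITagger<T> CreateTagger<T>(ITextBuffer buffer) where T : ITag {
    // Ask for the interface (IClassificationTag) rather than the concrete
    // ClassificationTag, so the same binary works in VS2010 and VS2012.
    return new KeywordTagger(
      ClassificationRegistry,
      Aggregator.CreateTagAggregator<IClassificationTag>(buffer)
    ) as ITagger<T>;
  }
}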

I was still not terribly thrilled about having to keep separate branches of the extensions with different project files and manifests to support both Visual Studio versions, so I started digging a bit more to see what other options there were. After a bunch of tests, I came up with something that works and allows me to keep a single VSIX file that works across both versions:

  • Modify the extension manifest to make it installable under VS2012. I did this by modifying the &lt;SupportedProducts&gt; tag in the .vsixmanifest file to add an entry for VS2012, like this:
    <VisualStudio Version="11.0">
      <Edition>Ultimate</Edition>
      <Edition>Premium</Edition>
      <Edition>Pro</Edition>
      <Edition>IntegratedShell</Edition>
      <Edition>VST_All</Edition>
    </VisualStudio>
    

    Now, I do not know if these are the correct edition strings, but they work with the VS2012 Release Candidate Ultimate edition that is on MSDN. If anyone knows what the right strings should be, let me know and I’ll fix it. A combined example covering both versions is sketched right after this list.

  • I changed the MaxVersion attribute of the SupportedFrameworkRuntimeEdition tag to specify .NET 4.5. I don’t know if that is needed (or useful), but it probably wouldn’t hurt :)
    <SupportedFrameworkRuntimeEdition MinVersion="4.0" MaxVersion="4.5" />
    
  • Build the extension and package using VS2010, without changing the existing (and VS2010-specific) assembly references.
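
For reference, a sketch of what the combined &lt;SupportedProducts&gt; section might look like with entries for both versions (the 10.0 entry and the edition lists are illustrative; keep whatever your existing manifest already declares):

<SupportedProducts>
  <VisualStudio Version="10.0">
    <Edition>Pro</Edition>
  </VisualStudio>
  <VisualStudio Version="11.0">
    <Edition>Ultimate</Edition>
    <Edition>Premium</Edition>
    <Edition>Pro</Edition>
    <Edition>IntegratedShell</Edition>
    <Edition>VST_All</Edition>
  </VisualStudio>
</SupportedProducts>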

After trying this, the two extensions would load and run just fine in both VS2010 and VS2012, even on machines with only one of the two versions installed. I guess VS2012 might be doing some assembly redirection when the extension is loaded, to ensure references resolve correctly despite the version changes in 2012.

I’ve updated the code of KeywordClassifier and BetterXml on GitHub with the necessary changes. A big ‘Thank you!’ goes out to Oren Novotny for the help.

On the plus side, I discovered that my XAML Classifier Fix extension is no longer needed in Visual Studio 2012, now that the team introduced an explicit XAML Text classification.

Using PowerShell with Clustered MSMQ

My good friend Lars Wilhelmsen asked me a couple of days ago if I knew of a way to automate creating Queues on a clustered instance of MSMQ. Normally, a cluster of machines using Microsoft Cluster Service (MSCS) with MSMQ in Windows Server 2008 and up will have several MSMQ instances installed. For example, a 2-node cluster will typically have 3 MSMQ instances: 2 local ones (one for each node) and the clustered instance, which will be configured/running on a single node at a time.

So what usually happens is that if you try using System.Messaging from PowerShell on an MSMQ cluster node, you’ll end up creating the queues in the local instance, not the clustered instance. And MessageQueue.Create() doesn’t allow you to specify a machine name, either.

The trick to getting this to work lies in KB198893 (Effects of checking “Use Network Name for Computer Name” in MSCS). Basically all you have to do is set the _CLUSTER_NETWORK_NAME_ environment variable to the name of the Virtual Server that hosts the clustered MSMQ resource:

# Set the env var before anything touches System.Messaging
$env:_CLUSTER_NETWORK_NAME_ = 'myclusterMSMQ'
Add-Type -AssemblyName System.Messaging
[System.Messaging.MessageQueue]::Create('.\Private$\MyQueue')

One thing to keep in mind, though: You have to set this environment variable before you attempt to use any System.Messaging feature; otherwise it will simply be ignored because the MSMQ runtime will already be bound to the local MSMQ instance.

MSMQ and External Certificates

I’ve spent some time lately playing around with MSMQ, Authentication and the use of External Certificates. This is an interesting scenario, but one that I found to be documented in relatively unclear terms, with somewhat conflicting information all over the place. Plus, the whole process isn’t very transparent at first.

Normally, when you’re using MSMQ Authentication, you’ll have MSMQ configured with Active Directory Integration. With this setup, MSMQ will normally create an internal, self-signed certificate for each user/machine pair and register it in Active Directory. The sender then requests that the message be sent authenticated, which causes the local MSMQ runtime to sign the message body plus a set of properties using this certificate, and to include the certificate’s public key and the signature (encrypted with the certificate’s private key) alongside the message.

The MSMQ service where the authenticated queue is defined would then verify the signature using the certificate’s public key to ensure the message wasn’t tampered with. If that check passed, it could then look up a matching certificate in AD, allowing it to identify the user sending the message for authentication (there are a few more checks in there, but this is the basic process).

This basic setup runs fairly well and is easy to test. Normally all you have to do here is:

  • Mark the target queue with the Authenticated property:
    (screenshot: queue properties showing the Authenticated option)
  • Have the sender make sure to request authentication. For System.Messaging, this means setting the Message.UseAuthentication property (as the later code sample shows), while using the C/C++ API it would mean setting the PROPID_M_AUTH_LEVEL property to an appropriate value (normally MQMSG_AUTH_LEVEL_ALWAYS). Both halves are put together in the sketch below.
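
A minimal System.Messaging version of this setup might look like the following sketch (the queue path and message body are just examples):

// Queue side: only accept authenticated messages
MessageQueue queue = MessageQueue.Create(@".\Private$\AuthQueue");
queue.Authenticate = true;

// Sender side: request that the message be signed
Message msg = new Message("hello");
msg.UseAuthentication = true;
queue.Send(msg);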

External Certificates

So what if you didn’t want to use the MSMQ internal certificate, for some reason? Well, MSMQ also allows you to use your own certificates, which could be, for example, generated using your Certificate Server CA installation trusted by your domain, or some other well known Certification Authority. To do this you need to:

  1. Register your certificate + private key on the sender machine for the user that is going to be sending the messages. The docs all talk about registering the certificate in the “Microsoft Internet Explorer Certificate Store”, which confused me at first, but it turns out that just means it normally goes in the CurrentUser\MY store.
  2. Register the certificate (public key only) on the corresponding AD object so that the receiving MSMQ server can find it. You can do this using the MSMQ admin console from the MSMQ server properties window, or using the MQRegisterCertificate() API.
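
For the second step, here is a sketch of what calling MQRegisterCertificate() from .NET might look like. The P/Invoke signature targets mqrt.dll, and MQCERT_REGISTER_ALWAYS = 1 is the flag value from mq.h; treat the wrapper as illustrative:

using System;
using System.Runtime.InteropServices;
using System.Security.Cryptography.X509Certificates;

static class MsmqCertHelper {
  const int MQCERT_REGISTER_ALWAYS = 1;

  [DllImport("mqrt.dll")]
  static extern int MQRegisterCertificate(int dwFlags, byte[] lpCertBuffer, int dwCertBufferLength);

  // Registers the public part of the certificate in AD for the current user
  public static void Register(X509Certificate2 cert) {
    byte[] raw = cert.Export(X509ContentType.Cert); // DER-encoded public cert
    int hr = MQRegisterCertificate(MQCERT_REGISTER_ALWAYS, raw, raw.Length);
    Marshal.ThrowExceptionForHR(hr);
  }
}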

I decided to try this using a simple self-signed certificate generated with MakeCert (a tool I hate having to use because I can never remember the right arguments!). I created my certificate in CurrentUser\MY, then registered the public part of the cert using the MSMQ management console.

The next part was changing my sender application code to use the external certificate, which is done by setting MessageQueue.SenderCertificate (or the PROPID_M_SENDER_CERT property). It wasn’t immediately obvious in what format the certificate should be provided, but it turns out it’s just the DER-encoded representation of the X.509 certificate. In .NET, this means you can use X509Certificate.GetRawCertData()/X509Certificate2.RawData, or the equivalent X509Certificate.Export() with the X509ContentType.Cert value. Here’s the basic code:

X509Certificate cert = ...;  // certificate registered for the sending user

Message msg = new Message();
msg.Body = data;
msg.UseDeadLetterQueue = true;
msg.UseAuthentication = true;                   // ask MSMQ to sign the message
msg.SenderCertificate = cert.GetRawCertData();  // DER-encoded X.509 certificate
queue.Send(msg, MessageQueueTransactionType.Single);

I ran this on my Windows 7 machine and all I got was this error:

System.Messaging.MessageQueueException (0x80004005): Cryptographic function has failed.
   at System.Messaging.MessageQueue.SendInternal(Object obj, MessageQueueTransaction internalTransaction, MessageQueueTransactionType transactionType)
   at System.Messaging.MessageQueue.Send(Object obj, MessageQueueTransactionType transactionType)
   at MsmqTest1.Program.Main(String[] args) in C:\tomasr\tests\MSMQ\MsmqTest1\Program.cs:line 73

What follows is a nice chase down the rabbit hole in order to figure it out.

The Problem

After bashing my head against this error for a few hours, I have to admit I was stumped and had no clue what was going on. My first impression was that I had been specifying the certificate incorrectly (false) or that I was deploying my certificate to the wrong stores. That wasn’t it, either. Then I thought perhaps there was something in my self-signed certificate that was incorrect, so I started comparing its properties with those of the original MSMQ internal certificate used in the first test. As far as I could see, the only meaningful difference was that the MSMQ one had 2048-bit keys while mine had 1024-bit ones, but that was hardly relevant at this point.

After some spelunking and lots of searching I ran into the MQTrace script [1]. With that in hand, I enabled MSMQ tracing and noticed a 0x80090008 error code, which means NTE_BAD_ALGID ("Invalid algorithm specified"). So obviously MSMQ was somehow using a "wrong" algorithm, but which one, and why?

In the middle of all this I decided to try something: I switched my sender application to delivering the message over HTTP instead of the native MSMQ TCP-based protocol; all it required was changing the Format Name used to reference the queue. This failed with the same error, but the MSMQ log gave me another bit of information: The NTE_BAD_ALGID error was being returned by a CryptCreateHash() function call!

Armed with this dangerous knowledge, I whipped out trusty WinDBG, set up a breakpoint in CryptCreateHash() and ran into this:

003ee174 50b6453f 005e14d0 0000800e 00000000 CRYPTSP!CryptCreateHash

0x800e is CALG_SHA_512, so MSMQ was trying to use the SHA-512 algorithm for the hash; good information! Logically, the next thing I tried was to force MSMQ to use the more common SHA-1 algorithm instead by setting the appropriate property in the message (PROPID_M_HASH_ALG):

msg.HashAlgorithm = HashAlgorithm.Sha;

And it worked! Well, for HTTP anyhow, as it broke again as soon as I switched the sender app back to the native transport. It turns out that the PROPID_M_HASH_ALG property is ignored when using the native transport.

Changes in MSMQ 4.0 and 5.0

Around this time I ran into a couple of interesting documents covering the security changes made in MSMQ 4.0 and 5.0.

The key part that came up from both of these is that MSMQ 4.0 and 5.0 made security changes around the hash algorithm used for signing messages. In MSMQ 4.0, support for SHA-2 was added and MAC/MD2/MD4/MD5 were disabled by default, while SHA-1 was kept as the default. In MSMQ 5.0, however, SHA-1 was disabled by default as well and SHA-2 (specifically SHA-512) was made the default [2]. This readily explains why my test case was using SHA-512 as the hashing algorithm.

At this point I went back to using internal MSMQ certificates and checked what Hash algorithm was being used in that case and, sure enough, it was using SHA-512 as well. Obviously then something in the certificate I was providing was triggering the problem, but what?

And then it hit me that the problem had nothing to do with the certificate itself, but with the private/public key pair associated with it. When you create/request a certificate, one aspect you choose is which Cryptographic (CAPI) Provider to use to generate (and store) the key pair. I had been using the default, the "Microsoft Strong Cryptographic Provider", which according to this page apparently isn’t quite strong enough to support SHA-2 [3].

So I created a new certificate, this time explicitly using a cryptographic provider that supports SHA-2, by providing the following arguments to my MakeCert.exe call:

-sp "Microsoft Enhanced RSA and AES Cryptographic Provider" -sy 12

And sure enough, sending authenticated messages over the native protocol succeeded now. Sending over HTTP also worked with the updated certificate without having to set the HashAlgorithm property.

Exporting and Using Certificates

Something to watch out for that might not be entirely obvious: If you export a certificate + private key from a system store into a PFX (PKCS#12) file, your original choice of cryptographic service provider is stored alongside the key in the file:

(screenshot: PFX file contents, showing the cryptographic service provider name stored alongside the private key)

Once you import the PFX file into a second machine, it will use the same crypto provider as the original one. This might be important if you’re generating certificates on one machine for use on another. The lesson so far seems to be:

When using external certificates for use with MSMQ 5.0, ensure you choose a cryptographic provider that supports SHA-2 when generating the public/private key pair of the certificate, like the "Microsoft Enhanced RSA and AES Cryptographic Provider".

At this point I’m not entirely sure how much control you have over this part if you’re generating the certificates in, say, Unix machines where this concept doesn’t apply (or how exactly it might work with external certification authorities).

External Certificates in Workgroup Mode

Another aspect I was interested in was the use of External Certificates when MSMQ is in Workgroup mode. In this mode, authenticated queues aren’t all that useful (at least as far as I understand things right now), because without AD the receiving MSMQ server has no way to match the certificate used to sign the message with a Windows identity to use for the access check on the queue. In this scenario messages appear to be rejected when they reach the queue with a "The signature is invalid" error.

However, if the queue does not have the Authenticated option checked, then messages signed with external certificates will reach the queue successfully. The receiving application can then check that the message was indeed signed, because the Message.DigitalSignature (PROPID_M_SIGNATURE) property will contain the hash encrypted with the certificate’s private key, as expected. The application could then simply retrieve the public key and look it up in whatever application-specific store it has, to check it against known certificates.

My understanding here is that even though it cannot look up the certificate in AD, the receiving MSMQ server will still verify that the signature is valid according to the certificate attached to the message. That’s only half the work, though, which is why the application should then verify that the certificate is known and trusted.
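
A minimal sketch of that application-side check, assuming System.Messaging and a thumbprint-based trust list (the queue path and trust logic are illustrative):

using System;
using System.Messaging;
using System.Security.Cryptography.X509Certificates;

class Receiver {
  static void Main() {
    MessageQueue queue = new MessageQueue(@".\Private$\MyQueue");
    // Ask MSMQ to populate the certificate/signature properties on receive
    queue.MessageReadPropertyFilter.SenderCertificate = true;
    queue.MessageReadPropertyFilter.DigitalSignature = true;

    using ( Message msg = queue.Receive(TimeSpan.FromSeconds(30)) ) {
      X509Certificate2 cert = new X509Certificate2(msg.SenderCertificate);
      Console.WriteLine("Signed by {0} ({1})", cert.Subject, cert.Thumbprint);
      // Compare cert.Thumbprint against the application's trusted set here
    }
  }
}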

[1] John Breakwell’s MSMQ blog on MSDN is a godsend when troubleshooting and understanding MSMQ. Glad he continued posting about this stuff on his alternate blog after he left MS.

[2] The System.Messaging stuff does not provide a way to specify SHA-2 algorithms in the message properties in .NET 4.0. No idea if this will be improved in the future.

[3] I always thought the CAPI providers seemed to be named by someone strongly bent on confusing his/her enemies…

Having Fun with WinDBG

I’ve been spending lots of quality time with WinDBG and the rest of the Windows Debugging Tools, and ran into something I thought was fun to do.

For the sake of keeping it simple, let’s say I have a sample console application that looks like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Runtime.CompilerServices;

class Program {
  static void Main(string[] args) {
    Program p = new Program();
    for ( int i = 0; i < 10; i++ ) {
      p.RunTest("Test Run No. " + i, i);
    }
  }
  [MethodImpl(MethodImplOptions.NoInlining)]
  public void RunTest(String msg, int executionNumber) {
    Console.WriteLine("Executing test");
  }
}

Now, imagine I’m debugging such an application and I’d like to figure out what is passed as parameters to the RunTest() method, seeing as how the application doesn’t actually print those values directly. This may seem contrived, but a classic real-world case is a method that throws an ArgumentException because of a bad input, where the exception message doesn’t say what the parameter value actually was.

For the purposes of this post, I’ll be compiling using release x86 as the target and running on 32-bit Windows. Now, let’s start a debug session on this sample application. Right after running it in the debugger, it will break right at the unmanaged entry point:

Microsoft (R) Windows Debugger Version 6.11.0001.404 X86
Copyright (c) Microsoft Corporation. All rights reserved.

CommandLine: .\DbgTest.exe
Symbol search path is: *** Invalid ***
****************************************************************************
* Symbol loading may be unreliable without a symbol search path.           *
* Use .symfix to have the debugger choose a symbol path.                   *
* After setting your symbol path, use .reload to refresh symbol locations. *
****************************************************************************
Executable search path is:
ModLoad: 012f0000 012f8000   DbgTest.exe
ModLoad: 777f0000 77917000   ntdll.dll
ModLoad: 73cf0000 73d3a000   C:\Windows\system32\mscoree.dll
ModLoad: 77970000 77a4c000   C:\Windows\system32\KERNEL32.dll
(f9c.d94): Break instruction exception - code 80000003 (first chance)
eax=00000000 ebx=00000000 ecx=001cf478 edx=77855e74 esi=fffffffe edi=7783c19e
eip=77838b2e esp=001cf490 ebp=001cf4c0 iopl=0         nv up ei pl zr na pe nc
cs=001b  ss=0023  ds=0023  es=0023  fs=003b  gs=0000             efl=00000246
*** ERROR: Symbol file could not be found.  Defaulted to export symbols for ntdll.dll -
ntdll!DbgBreakPoint:
77838b2e cc              int     3

Now let’s fix the symbol path and also make sure SOS is loaded at the right time:

0:000> .sympath srv*C:\symbols*http://msdl.microsoft.com/download/symbols
Symbol search path is: srv*C:\symbols*http://msdl.microsoft.com/download/symbols
Expanded Symbol search path is: srv*c:\symbols*http://msdl.microsoft.com/download/symbols
0:000> .reload
Reloading current modules
......
0:000> sxe -c ".loadby sos clr" ld:mscorlib
0:000> g
(1020.12c8): Unknown exception - code 04242420 (first chance)
ModLoad: 70b80000 71943000   C:\Windows\assembly\NativeImages_v4.0.30319_32\mscorlib\246f1a5abb686b9dcdf22d3505b08cea\mscorlib.ni.dll
eax=00000001 ebx=00000000 ecx=0014e601 edx=00000000 esi=7ffdf000 edi=20000000
eip=77855e74 esp=0014e5dc ebp=0014e630 iopl=0         nv up ei pl zr na pe nc
cs=001b  ss=0023  ds=0023  es=0023  fs=003b  gs=0000             efl=00000246
ntdll!KiFastSystemCallRet:
77855e74 c3              ret

At this point, managed code is not executing yet, but we’ve got SOS loaded. Now, what I’d like to do is set an initial breakpoint in the RunTest() method. Because it’s a managed method, we’d need to wait until it is jitted before we can grab the generated code entry point. Instead of doing all that work, I’ll just use the !BPMD command included in SOS to set a pending breakpoint [1] on it, then resume execution:

0:000> !BPMD DbgTest.exe Program.RunTest
Adding pending breakpoints...
0:000> g
(110c.121c): CLR notification exception - code e0444143 (first chance)
(110c.121c): CLR notification exception - code e0444143 (first chance)
(110c.121c): CLR notification exception - code e0444143 (first chance)
JITTED DbgTest!Program.RunTest(System.String, Int32)
Setting breakpoint: bp 001600D0 [Program.RunTest(System.String, Int32)]
Breakpoint 0 hit
eax=000d37fc ebx=0216b180 ecx=0216b180 edx=0216b814 esi=0216b18c edi=00000000
eip=001600d0 esp=002fece0 ebp=002fecf4 iopl=0         nv up ei pl nz na po nc
cs=001b  ss=0023  ds=0023  es=0023  fs=003b  gs=0000             efl=00000202
001600d0 55              push    ebp

Now the debugger has stopped execution on the first call to RunTest, so we can actually examine the values of the method arguments:

0:000> !CLRStack -p
OS Thread Id: 0x121c (0)
Child SP IP       Call Site
002fece0 001600d0 Program.RunTest(System.String, Int32)
    PARAMETERS:
        this () = 0x0216b180
        msg () = 0x0216b814
        executionNumber (0x002fece4) = 0x00000000

So the first parameter is the this pointer, as this is an instance method call. The msg parameter is a string, so let’s examine that as well:

0:000> !dumpobj -nofields 0x0216b814
Name:        System.String
MethodTable: 70e9f9ac
EEClass:     70bd8bb0
Size:        42(0x2a) bytes
File:        C:\Windows\Microsoft.Net\assembly\GAC_32\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.dll
String:      Test Run No. 0

Now let’s look at this at a slightly lower level:

0:000> kbn3
 # ChildEBP RetAddr  Args to Child
WARNING: Frame IP not in any known module. Following frames may be wrong.
00 002fecdc 001600ab 00000000 00000000 004aa100 0x1600d0
01 002fecf4 727221db 002fed14 7272e021 002fed80 0x1600ab
02 002fed04 72744a2a 002fedd0 00000000 002feda0 clr!CallDescrWorker+0x33
0:000> !IP2MD 0x1600d0
MethodDesc:   000d37fc
Method Name:  Program.RunTest(System.String, Int32)
Class:        000d1410
MethodTable:  000d3810
mdToken:      06000002
Module:       000d2e9c
IsJitted:     yes
CodeAddr:     001600d0
Transparency: Critical
0:000> !IP2MD 0x1600ab
MethodDesc:   000d37f0
Method Name:  Program.Main(System.String[])
Class:        000d1410
MethodTable:  000d3810
mdToken:      06000001
Module:       000d2e9c
IsJitted:     yes
CodeAddr:     00160070
Transparency: Critical

Here we see the top 3 stack frames including the first 3 parameters to the call, and from the !IP2MD calls you can see the first 2 are the calls to RunTest() and Main(), just as we would expect.

The parameters displayed by the kb command, however, seem a bit weird for the RunTest call: 00000000 00000000 004aa100. These are, literally, the values on the stack:

0:000> dd esp L8
002fece0  001600ab 00000000 00000000 004aa100
002fecf0  002fed20 002fed04 727221db 002fed14

Notice that at the top of the stack we have the return address to the place in Main() where the method call happened, followed by the “3 parameters” displayed by kb. However, this isn’t actually correct.

The CLR uses a calling convention that resembles the FASTCALL convention a bit. That means that in this case, the left-most parameter would be passed in the ECX register, the next one in EDX and the rest on the stack. In our case, this means that the value of the this pointer will go in ECX:

0:000> r ecx
ecx=0216b180
0:000> !dumpobj ecx
Name:        Program
MethodTable: 000d3810
EEClass:     000d1410
Size:        12(0xc) bytes
File:        C:\temp\DbgTest\bin\release\DbgTest.exe
Fields:
None

It also means that the msg argument will go in EDX:

0:000> r edx
edx=0216b814
0:000> !dumpobj -nofields edx
Name:        System.String
MethodTable: 70e9f9ac
EEClass:     70bd8bb0
Size:        42(0x2a) bytes
File:        C:\Windows\Microsoft.Net\assembly\GAC_32\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.dll
String:      Test Run No. 0

So the executionNumber argument goes on the stack, and we’ll find it at [esp+4]:

0:000> dd [esp+4] L1
002fece4  00000000

We could even disassemble the small piece of code in Main that calls RunTest by backing up a bit from the current return address. You’ll see how the value of i is pushed onto the stack from the edi register, and how ecx and edx are likewise prepared for the call:

0:000> u 001600ab-12 L8
0016009b e87076b870      call    mscorlib_ni+0x2b7710 (70e37710)
001600a0 57              push    edi
001600a1 8bd0            mov     edx,eax
001600a3 8bcb            mov     ecx,ebx
001600a5 ff1504380d00    call    dword ptr ds:[0D3804h]
001600ab 47              inc     edi
001600ac 83ff0a          cmp     edi,0Ah

Knowing all this, if we wanted to print out the values of the msg and executionNumber parameters on all remaining calls to RunTest, we could replace the breakpoint setup by the !BPMD command with a regular breakpoint that executes a command and then continues execution. This would look something like this:

0:000> * remove existing breakpoint
0:000> bc 0
0:000> * check start address of RunTest
0:000> !name2ee DbgTest.exe Program.RunTest
Module:      000d2e9c
Assembly:    DbgTest.exe
Token:       06000002
MethodDesc:  000d37fc
Name:        Program.RunTest(System.String, Int32)
JITTED Code Address: 001600d0
0:000> * set breakpoint
0:000> bp 001600d0 "!dumpobj -nofields edx; dd [esp+4] L1; g"
0:000> g
Name:        System.String
MethodTable: 70e9f9ac
EEClass:     70bd8bb0
Size:        42(0x2a) bytes
File:        C:\Windows\Microsoft.Net\assembly\GAC_32\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.dll
String:      Test Run No. 1
002fece4  00000001
Executing test
Name:        System.String
MethodTable: 70e9f9ac
EEClass:     70bd8bb0
Size:        42(0x2a) bytes
File:        C:\Windows\Microsoft.Net\assembly\GAC_32\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.dll
String:      Test Run No. 2
002fece4  00000002
Executing test
Name:        System.String
MethodTable: 70e9f9ac
EEClass:     70bd8bb0
Size:        42(0x2a) bytes
File:        C:\Windows\Microsoft.Net\assembly\GAC_32\mscorlib\v4.0_4.0.0.0__b77a5c561934e089\mscorlib.dll
String:      Test Run No. 3
002fece4  00000003
...

As you can see, we’re indeed getting the values of both arguments without problems in the debugger log (which we could easily write to a file using the .logopen command). This is a simple scenario, but it can still prove useful sometimes. Of course, you could argue that going through all these contortions is over the top, given that the !ClrStack -p command can give you the parameters to each function in the call stack. The answer is that !ClrStack doesn’t make it easy to dump just the first frame, nor does it combine with other commands so that you can easily use !DumpObj on the parameter values.

[1] If !BPMD doesn’t seem to work, it’s likely because the CLR debugger notifications are disabled. See this post on how to fix it (for .NET 4.0, just remember to replace mscorwks with clr).

Github Repo for Molokai

After a couple of requests, I’ve created a separate git repository on github for my Molokai color scheme for Vim. Enjoy!

BizTalk Send Handlers/SSO Bug?

My good friend Carlos, while working with one of his clients, ran into a situation that triggers what looks like a bug in BizTalk Server 2010. The specific case came up when changing the Windows User Group associated with a BizTalk Host while that host is already associated as a Send Handler for an adapter.

Apparently, when making the change, BizTalk will correctly update the information stored in the Enterprise Single Sign-On (ENTSSO) database (SSODB) for Receive Handlers, but not for Send Handlers, which leaves the system generating errors when the host later tries to read the stored adapter settings from the SSODB.

Here are the repro instructions:

  1. Create a new BizTalk Host for the test. Let’s name it BTTestHost. Assign to it the default BizTalk Application Users group, and then create a new Host instance.
  2. Add the BTTestHost as a send handler for an adapter; we’ll use the FILE adapter in this example.
  3. At this point, if you check the ssodb.dbo.SSOX_ApplicationInfo table, you should see an entry for FILE_TL_BTTestHost; that’s our newly created send handler. You’ll notice that the ai_admin_group_name column has the value “BizTalk Application Users”, as expected.
  4. Now create a send port associated to the new Send Handler and test to verify everything is working correctly.
  5. Stop and delete the existing host instance.
  6. Now let’s create a new Windows Users Group; let’s name it “BizTalk Test Group”. We’ll also create a new Windows user that we’ll use to create a new host instance in a bit, and we’ll make that user a member of BizTalk Test Group. Make sure the user is not a member of “BizTalk Application Users”.
  7. Open the properties for BTTestHost and change the associated Windows User Group to the new “BizTalk Test Group”.
  8. Go ahead and create a new Host Instance, add a new Send Port associated with our Send Handler, and run a test. You’ll get an error along the lines of “Access denied. The client user must be a member of one of the following accounts to perform this function.” (followed by a list of Windows user groups).
  9. At this point, if you go back and check the SSOX_ApplicationInfo table again, you’ll notice that the FILE_TL_BTTestHost row still has the original group in the ai_admin_group_name column, not the new group we assigned to the host. This is what causes the access denied error: the host instance user is not considered to have access to the SSODB to read the adapter settings.

The ugly part is that working around this problem requires deleting the original Send Handler and recreating it, which in turn requires moving all send ports already associated with it, one by one.

BizTalk 2010 Config Tool Hanging

Last week I was installing BizTalk Server 2010 on my development Virtual Machine, which previously had 2009 installed. Installation went fine, but when I started the BizTalk Configuration tool, it started hanging for minutes at a time.

Strangely enough, the tool let me configure Enterprise Single Sign-On (ENTSSO) without any problems, but would hang every time I tried to configure a new BizTalk Group. After some tests, it became obvious that it was hanging while trying to connect to SQL Server to check whether the group databases could be created, waiting until the connection timeout expired.

It would indeed eventually respond again, but trying to do anything would only cause it to hang once more. It was really weird that it would hang there on the database connection, when the SQL Server instance was local and ENTSSO had been configured straight away without any problems.

Fortunately, I managed to figure it out: The problem was that the SQL Server instance had TCP/IP connectivity disabled (only shared memory was enabled). Enabling TCP/IP and restarting the SQL Server service fixed the problem.

WCF Messages Not Getting Closed

I spent some time yesterday evening tracking down an issue with a custom WCF transport channel. This particular channel would support the IInputChannel channel shape, but with a twist: it would return a custom implementation of System.ServiceModel.Channels.Message instead of one of the built-in WCF implementations.

There’s nothing wrong with that and it was working just fine in most cases, until I ran into a few scenarios where the OnClose() method of my custom Message class wasn’t being called at all.

After some digging, I discovered that the specific messages this was happening to were not being processed normally by the ServiceHost infrastructure. In particular, they were not being dispatched because no method in the service contract matched the action in the messages, so instead the UnknownMessageReceived event in the ServiceHost instance was being raised.

Our UnknownMessageReceived implementation wasn’t closing messages explicitly, but that was easily corrected, so no problems, right? Wrong. It turns out that in WCF 3.0/3.5 (I haven’t checked 4.0 yet), if you do not have an event handler attached to UnknownMessageReceived, the messages won’t get closed either.
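
The corrected handler is a sketch like this one (assuming host is an already-created ServiceHost):

// Explicitly close messages that end up in UnknownMessageReceived,
// since WCF won't close them for you on this path.
host.UnknownMessageReceived += delegate(object sender, UnknownMessageReceivedEventArgs e) {
  try {
    // Inspect or log e.Message here if needed
  }
  finally {
    e.Message.Close(); // release the underlying resources promptly
  }
};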

This seems like a bad bug to me: since Message implements IDisposable, there’s obviously the expectation that it will hold resources that should be released as soon as possible, so not calling Dispose() or Close() will leak resources and potentially cause trouble.

Attaching an event handler to UnknownMessageReceived isn’t always an option, and even if it were, there’s no reason why WCF itself shouldn’t guarantee that messages are closed as soon as they aren’t needed.

BetterXml

BetterXml is a Visual Studio 2010 extension I’ve been working on recently in an attempt to improve the experience of the built-in XML editor in VS. It’s still in its early stages, so it doesn’t add much yet, but I hope to improve it as I find new things I’d like to add.

What does it do? BetterXml has two main features right now: syntax highlighting extensions and namespace tooltips.

Syntax Highlighting

BetterXml provides two new classification format definitions: XML Prefix and XML Closing Tag.

  • XML Prefix will change the color/format used to highlight prefixes in XML names (the ‘x’ in x:name).
  • XML Closing Tag will change the color/format used to highlight closing element tags. This is one feature that some color schemes use in Vim that I always missed in VS, and it’s pretty cool that the extensibility model in VS2010 allows me to provide this; it makes reading long documents a lot easier.
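
Under the covers, these are standard editor classification format definitions. A minimal sketch of what one of these exports looks like (the display name and color here are illustrative, not necessarily what BetterXml ships):

using System.ComponentModel.Composition;
using System.Windows.Media;
using Microsoft.VisualStudio.Text.Classification;
using Microsoft.VisualStudio.Utilities;

[Export(typeof(EditorFormatDefinition))]
[ClassificationType(ClassificationTypeNames = "XML Prefix")]
[Name("XML Prefix")]
[UserVisible(true)]
internal sealed class XmlPrefixFormat : ClassificationFormatDefinition {
  public XmlPrefixFormat() {
    DisplayName = "XML Prefix";         // shows up under Fonts and Colors
    ForegroundColor = Colors.DarkCyan;  // default color for prefixes
  }
}

A matching ClassificationTypeDefinition export is also needed to define the classification type itself, and a classifier or tagger has to produce spans of that type.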

Here’s a screenshot showing both of these:

(screenshot: XML prefix and closing tag highlighting)

This is supported on regular XML documents (including XSD) as well as XAML and HTML documents.

Namespace Tooltips

If you hover the mouse pointer over a prefix in an XML document, BetterXml will try to figure out the URI of the namespace that prefix maps to, and present a tooltip with that information:

(screenshot: a namespace tooltip shown over an XML prefix)

I haven’t done much tweaking of this feature yet so it will probably be a bit slow on large documents, since it requires partially parsing the document. This feature is only supported on XML and XAML documents.

Other Plans

I’ve been looking into other improvements I’d like to add to BetterXml. One I really wanted to provide was extending IntelliSense completion based on previously used element/attribute names, which would be pretty useful for XML documents without a schema.

VS2010 does provide ways to extend completion, and while it requires a lot of boilerplate code, it works. Unfortunately, after much trial and error I’ve been unable to make it work correctly, and certainly could never get it to behave the same way the built-in completion works.

While VS does seem to support multiple concurrent completion providers on the same buffer and will display the completion sets for all of them, I could not figure out the magic incantations to make it work reliably and predictably. Probably my own fault, but without clear documentation on how multiple providers are supposed to work together (if that’s even supported at all), it’s not trivial to do.

Source

Source code for BetterXml is available as usual on github.

Trying out Resharper 5.0

There are many .NET developers who can’t live without JetBrains’ R# product. I’m not really one of them. Don’t get me wrong, I like the idea of R# and some of the features it offers; it’s just that it could be so much better if it didn’t keep getting in my way!

Let’s get something out of the way before I dive into what I’d like to see improved in R#: I bought my R# license with my own money. I do think it’s a fairly reasonably priced product for all the features it offers.

Improvements I’d like to see

  • More modularity: I work in a fairly specific environment where some of the tools R# provides don’t help at all, and in fact, I don’t need them or want them in my way.
    Hence, I’d love to see top-level options to turn off broad features. Right now pretty much only two features have this: IntelliSense and Code Analysis. I’d love to see the same for things like code formatting and the like.
  • Better options for saving/sharing settings: It’s a pain in the neck to configure R# after installing it (or keeping it configured after that). The Options dialog is a mess, and there are way too many settings and no easy way (that I can find) to export your settings to a file I can save and keep around for later installations.
    Yes, I’m aware that R# saves its settings in an external XML file you can save, but it’s stored some place I can never remember, gets overwritten and messed up all the time when running multiple VS instances at the same time, and I always forget to fish it out before dumping a VM or something, so it’s as if it were never there.
    Just give me a way to export/import settings; that’s a lot easier to live with.
  • Unit Testing: I’m rather happy using the test runner in R#, use it all the time. However, it would be really nice if there was a way to set it up so that the Debug option worked for mixed managed/unmanaged solutions.
    Right now, the Debug option only enables the managed debugger, but I also work with lots of code that includes managed tests over a mixture of managed and unmanaged C++ code and to debug that I always have to use the NUnit test runner instead.

Things that just piss me off

There are a few things that I can’t stand when using R#, however; to the point where I have to disable it just to get any work done without losing my mind:

  • Crashes: R# 4.5 was notoriously unstable; it would crash all the freaking time for me. R# 5.0 seems better on VS2010, but not on VS2008, to the point where the simplest reproducible crash that R# 4.5 would cause is still present in 5.0. Here’s just an example: launch a new, empty VS2008 instance (with no auto-project opening or anything), then drag a C# file from Explorer into VS. Watch VS crash and burn.
  • Stop messing with ViEmu: I’m a long-time user of ViEmu and absolutely love it. However, R# often messes with ViEmu on VS2008, particularly with the keyboard handling. The one that always causes me trouble: fire up a new, empty VS2008 instance, load a solution that contains only unmanaged C++ projects (where R# shouldn’t even get involved), and watch the keyboard go all crazy on you.
    The only way to work when this happens is to disable the R# add-in temporarily, which is really annoying. I know people love R#, but for me ViEmu >>> R#. Mess with ViEmu and it’s you that’s getting canned.
  • Smarter Settings: Given my concerns about making it easier to export/import my R# settings, and my desire to disable code formatting, it’s not surprising this is an issue for me.
    See, I already have all my VS code formatting rules properly configured and persisted in my .VSSETTINGS file. Install VS again and in seconds it’s there. Install R#, and instead of defaulting its own code formatting rules to my existing VS settings, it applies its own coding conventions. That’s just disrespectful and annoying, to be honest.

Anyway, let’s see how life with R# 5.0 goes. So far, the VS2010 integration seems nice, but I will still be primarily a VS2008 user for a long time, so it sucks a lot that stability there hasn’t improved.