Viasfora v1.2 Released

Viasfora version 1.2 has been released! Most of the work on this release went into significantly improving the performance of the extension, particularly around Rainbow Parentheses and the Keyword Classifier.

One of the issues I ran into was that when both features were enabled, they could interact in ways that easily killed performance, notably causing the C# editor to blink the highlight on some words, which was not a good thing. Another big challenge was the JavaScript editor, which, because of how it parses JS (in VS2010 in particular), is very sensitive to anything that forces tag updates.

Much of the extra performance work went into improving the lexing code that extracts braces when the document changes. I'm still not terribly happy with the current code, but it seems to be holding up for now. The two changes that helped the most were keeping a list of all the braces found and doing only partial invalidation when the document changes, and providing a quick way to look up ranges of braces based on document lines (sketched below). Still lots of room for improvement here!
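
To illustrate the idea (this is just a sketch with made-up names, not the actual Viasfora code): if the braces are kept sorted by position, a document change only discards the entries past the edit point, and per-line lookups become binary searches.

using System.Collections.Generic;

// Sketch only; names and details are illustrative, not the real code.
struct BracePos {
  public char Brace;    // one of (){}[]
  public int Position;  // absolute offset in the text snapshot
  public int Depth;     // nesting level, used to pick the color
}

class BraceCache {
  private readonly List<BracePos> braces = new List<BracePos>();

  // Partial invalidation: discard everything at or after the edit
  // and re-lex only from that point forward.
  public void InvalidateFrom(int changeStart) {
    int index = FindIndexForPosition(changeStart);
    braces.RemoveRange(index, braces.Count - index);
    // ... re-lex the buffer from changeStart onward, appending here
  }

  // Quick lookup of the braces overlapping a span, such as the
  // lines currently visible in the editor.
  public IEnumerable<BracePos> BracesInSpan(int start, int end) {
    for ( int i = FindIndexForPosition(start); i < braces.Count; i++ ) {
      if ( braces[i].Position >= end ) yield break;
      yield return braces[i];
    }
  }

  // Binary search for the first brace at or after position.
  private int FindIndexForPosition(int position) {
    int lo = 0, hi = braces.Count;
    while ( lo < hi ) {
      int mid = (lo + hi) / 2;
      if ( braces[mid].Position < position ) lo = mid + 1;
      else hi = mid;
    }
    return lo;
  }
}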

As for new features? Rainbow Parentheses is now supported in VB (often asked about!). Initial support for F# is also included, but I am sure the default set of F# keywords I chose for this version could use some improvement. If you're an F# fan, let me know if you have any suggestions!

Viasfora v1.1 Published

Tonight I pushed a new version of my Viasfora extension for Visual Studio. In this version, I fixed some features that were not working on Visual Studio 2013 due to the introduction of a new HTML editor, particularly highlighting closing HTML tags.

What’s exciting to me, however, is the new feature: Rainbow Parentheses:

Rainbow Parentheses

This is a Visual Studio take on one of my favorite Vim plugins. Features:

  • Supported for C#, C/C++ and JavaScript files.
  • Highlights {}, [] and () braces.
  • Supports 4 different nesting levels; the format for each one can be customized through the Tools -> Options dialog (Rainbow Parentheses 1-4).

Comments, bug reports and feature requests are always welcome. Enjoy!

Introducing Viasfora

A couple of days ago, I unveiled Viasfora, my latest attempt at building a decently packaged extension for Visual Studio 2010, 2012, and 2013. I had already published a few VS extensions before (Keyword Classifier, BetterXml, Line Adornments, and Xaml Classifier Fix), but they were not overly successful, for several reasons:

  • I originally published those extensions not really as something useful in their own right, but rather as samples on how to implement VS extensions. They were successful in that regard, but none were ever very widely used.
  • While useful, the extensions weren’t very polished. They were not easy to customize, and the code needed some cleanup to make it easier to maintain.
  • The names, frankly, sucked.
  • I’m terrible at promoting stuff, so I never did much about them other than a few posts on this blog. I was even so absentminded that I uploaded BetterXml to the Visual Studio Gallery, only to forget to publish it. No wonder no one used it!

What is Viasfora?

Viasfora is a combination of my 3 most significant previous extensions: Keyword Classifier, BetterXml, and Line Adornments. It puts them all in a single, nice package that is fully customizable through the Tools -> Options dialog in Visual Studio, including the ability to turn individual features on/off.

So what does Viasfora offer? Check the website for the full details, but here are some highlights:

  • Current Line Highlighting: a native feature in VS2013 that Viasfora also brings to VS2010 and VS2012.
  • Custom highlighting of Control Flow keywords, LINQ-related Keywords and Visibility keywords for C#, C/C++, JavaScript and Visual Basic (new!).
  • Highlighting of character escape sequences in C# strings, which makes them really easy to spot!
  • Custom highlighting of XML namespace prefixes in XML/XAML/HTML documents.
  • Highlighting closing element tags in XML/XAML/HTML documents in a different color than the opening element tag. This is one of my favorite features and one I often miss from Vim.
  • Matching (through highlight) of opening/closing element tags in XML documents (new!).
  • Tooltips for easy lookup of XML namespace prefixes.

Hopefully having a nice (if simple) website for the extension with all the information makes it easier for people to find and get interested in it. As with my previous extensions, complete source is available on the github repository: https://github.com/tomasr/viasfora/

Where does the name come from?

Sorry, I still suck at naming :). The name Viasfora comes from my attempt at mixing the Greek word "Diáfora" with, well, obviously VS for Visual Studio. It sounds catchy, I think!

Please let me know if you run into any problems, and send along any bug reports or feature requests.

SPListAnalysis: A sample DebugDiag v2.0 Analysis Rule

DebugDiag is a tool for collecting and analyzing memory dumps. Version 2.0 was just released, and it makes it extremely easy to write your own custom analysis rules using any language that targets the CLR.

SPListAnalysis is a sample analysis rule I implemented to show the basic steps involved in the process. The rule does the following:

  • Iterates over all threads found in the process dump to identify any that are querying an SPList object (or causing it to load the items contained therein).
  • Identifies the CAML query associated with the operation.
  • Generates a warning if the query contains more than 10 fields, loads all the fields in the list, or if the query fails to specify a RowFilter.
  • Prints the stack traces and query details for each matching thread.

The code for this sample can be found here.

This particular sample is very simple, consisting of a single class called SPListAnalysis, which implements the IHangDumpRule interface to process one dump at a time. It also implements the IAnalysisRuleMetadata interface to define the category and description for this analysis.
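
The overall shape of the class is roughly this (a sketch from memory; the exact DebugDiag interface member names may differ slightly, so check the sample source for the authoritative version):

public class SPListAnalysis : IHangDumpRule, IAnalysisRuleMetadata
{
  // IAnalysisRuleMetadata: controls how the rule is listed in the UI.
  // Both strings here are hypothetical.
  public string Category
  {
    get { return "SharePoint"; }
  }
  public string Description
  {
    get { return "Analyzes threads executing expensive SPList queries"; }
  }

  // IHangDumpRule: invoked once for each dump being analyzed
  public void RunAnalysisRule(NetProgress progress, NetDbgObj debugger)
  {
    RunSPListAnalysis(debugger);
  }

  // ... analysis and reporting methods shown below
}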

DebugDiag Analysis

The basis for the analysis is looking at all threads that contain a significant function in the call stack, and then looking for a matching object in the stack frames (similar to doing a !DumpStackObjects command):

private void RunSPListAnalysis(NetDbgObj debugger)
{
  // initialize report
  foreach ( var thread in debugger.Threads )
  {
    AnalyzeThread(thread);
  }
  // report findings
}

private void AnalyzeThread(NetDbgThread thread)
{
  // ...
  if ( ContainsFrame(thread, SPLIST_FILL_FRAME) )
  {
    //...
    dynamic obj = thread.FindFirstStackObject("Microsoft.SharePoint.SPListItemCollection");
    if ( obj != null )
    {
      String viewXml = (string)obj.m_Query.m_strViewXml;
      //...
      XDocument doc = XDocument.Parse(viewXml);
      AnalyzeViewXml(thread, doc);
      //...
    }
    PrintThreadStack(thread);
  }
}

Reporting the results is mostly a matter of generating HTML. However, we can also generate warnings that are included at the top of the report, allowing us to quickly alert the user that something of interest was found in the dump:

private void ReportThreadsWithLargeSPQueries()
{
  if ( threadsWithLargeSPQueries.Count > 0 )
  {
    StringBuilder sb = new StringBuilder();
    sb.Append("The following threads appear to be executing SPList queries requesting many fields.");
    sb.Append("<br/><ul>");
    foreach ( int threadId in threadsWithLargeSPQueries.Keys )
    {
      sb.AppendFormat("<li><a href='#thread{0}'>{0}</a> ({1} fields)</li>",
          threadId, threadsWithLargeSPQueries[threadId]);
    }
    sb.Append("</ul>");
    this.manager.ReportWarning(sb.ToString(), "");
  }
}

The resulting report will look like this:

DebugDiag Warnings

Enjoy!

Keyword Classifier v1.4

Published another minor update to my KeywordClassifier Visual Studio extension yesterday. The new version adds support for the Visual Studio 2013 preview, plus a new feature: a custom classification tag to highlight escape sequences within strings:

Screenshot of string escape sequence

Use the "String Escape Sequence" classification tag to change the appearance of these. Enjoy!

ClrMD: Fetching DAC Libraries From Symbol Servers

Last week, the first public beta of the Microsoft.Diagnostics.Runtime library was released. This is a very cool library that can be used to write automated dump analysis of processes hosting the CLR.

One of the first things you will need in order to use ClrMD is to get a hold of the DAC library for the specific version of the CLR that your dump/live process is using. If this is a local dump/process, then you’ll have the DAC handy, as it will be part of your .NET Framework installation (example: c:\windows\Microsoft.NET\Framework64\v2.0.50727\mscordacwks.dll).

If you’re inspecting a dump from another machine, you could also copy mscordacwks.dll from the right folder on the remote machine. A more interesting option, however, is to dynamically fetch the right DAC library from the public Microsoft Symbol Server. ClrMD does not have built-in code to do this, and it can be a bit tricky, but it’s possible to implement relatively easily in many scenarios.

In order to do this, you first need to find two native libraries and copy them to the same directory where your application executable is located:

  • dbghelp.dll
  • symsrv.dll

The right place to pick these up is the Debugging Tools For Windows package. Remember that you need to pick them for the right bitness (x64 or x86) for your process architecture, which in turn needs to match the architecture of the dump/process you’re going to be inspecting. For this case, I picked the debugger tools package that comes with the Windows 8 SDK. When you install the tools from the SDK installer, they get installed to c:\Program Files (x86)\Windows Kits\8.0\Debuggers.

For this sample, you need to put the x64 and x86 versions of the libraries in the corresponding folder under .\dbglibs in the project folder. A custom build action will then copy the right version over to the output directory:

copy "$(ProjectDir)dbglibs\$(PlatformName)\*.dll" "$(TargetDir)"

The relevant code can be found in the DacLocator class. This class will load the dbghelp.dll library and initialize it. Using it is relatively simple:

dacloc = DacLocator.FromPublicSymbolServer(localCachePathTextBox.Text);
DataTarget dt = DataTarget.LoadCrashDump(dumpFileTextBox.Text);
String dac = dacloc.FindDac(dt.ClrVersions[0]);

Here we’re just initializing the library to use the public Symbol Server with the specified path as the local cache, and then attempting to locate the required DAC library. Finding the DAC itself is done using the SymFindFileInPath() function:

StringBuilder symbolFile = new StringBuilder(2048);
if ( SymFindFileInPath(ourProcess.Handle, searchPath, dacname,
    timestamp, fileSize, 0, 0x02, symbolFile, IntPtr.Zero, IntPtr.Zero) ) {
  return symbolFile.ToString();
} else {
  throw new Win32Exception(String.Format("SymFindFileInPath() failed: {0}", Marshal.GetLastWin32Error()));
}
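
For reference, the P/Invoke declaration behind this call looks something like the following (the marshaling details are my reconstruction; the sample’s DacLocator class has the authoritative version). The flags value 0x02 used above is SSRVOPT_DWORD, which tells dbghelp that the id argument is a plain DWORD:

// Reconstructed declaration; for PE files like mscordacwks.dll, the
// symbol server lookup key is (TimeDateStamp, SizeOfImage).
[DllImport("dbghelp.dll", CharSet = CharSet.Unicode, SetLastError = true)]
static extern bool SymFindFileInPath(IntPtr hProcess, string searchPath,
    string fileName, uint id, uint two, uint three, uint flags,
    StringBuilder filePath, IntPtr callback, IntPtr context);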

The rest of the sample is pretty straightforward: it just iterates through all objects in the heap, looking for HttpContext items and then printing out some basic details of each one:

private IEnumerable<HttpCtxtInfo> FindHttpContexts(ClrRuntime clr) {
  ClrHeap heap = clr.GetHeap();
  foreach ( ulong addr in heap.EnumerateObjects() ) {
    ClrType type = heap.GetObjectType(addr);
    if ( type == null ) continue;
    if ( type.Name != "System.Web.HttpContext" ) continue;

    yield return GetHttpContextInfo(heap, addr, type);
  }
}

private HttpCtxtInfo GetHttpContextInfo(ClrHeap heap, ulong addr, ClrType type) {
  HttpCtxtInfo info = new HttpCtxtInfo {
    Address = addr
  };
  ulong reqAddr = (ulong)type.GetFieldByName("_request").GetFieldValue(addr);
  ClrType reqType = heap.GetObjectType(reqAddr);
  info.Method = (string)reqType.GetFieldByName("_httpMethod").GetFieldValue(reqAddr);
  info.Url = (string)reqType.GetFieldByName("_rawUrl").GetFieldValue(reqAddr);
  return info;
}
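
Driving these two methods is then just a matter of creating the runtime from the DAC located earlier and printing whatever the enumeration returns; something along these lines (a sketch, reusing the dt and dac variables from the earlier snippet):

// Sketch: create the runtime from the located DAC, then
// print each HttpContext found in the heap.
ClrRuntime clr = dt.CreateRuntime(dac);
foreach ( HttpCtxtInfo info in FindHttpContexts(clr) ) {
  Console.WriteLine("{0:x16} {1} {2}", info.Address, info.Method, info.Url);
}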

Running the sample app will look something like this:

DacSample Screenshot

The sample code can be downloaded here. Enjoy!

Updating Visual Studio Extensions

I spent some time this week researching what would be needed to update some of my Visual Studio 2010 extensions to support Visual Studio 2012. I’ve now managed to do so, and would like to share what I found in case anyone else finds it useful.

Warning: This post was written and tested with the Visual Studio 2012 release candidate and I have no clue how well it will work on the final release. I’ll update it or post again once it comes out if needed.

The first thing to try was, of course, to take the existing code, migrate it into a VS2012 project, and update all references to Visual Studio assemblies. That worked, as far as building goes, but the extensions would still not work. It is not that they would cause any errors; they just didn’t do anything.

In this particular case, I focused on two of my extensions: KeywordClassifier and BetterXml. Both of them rely on the same mechanism, which was layering an ITagger<ClassificationTag> on top of the tags produced by the original provider. This was done through the use of an ITagAggregator<ClassificationTag>.

After a bit of debugging, I discovered that the original code was no longer working because the ITagAggregator<ClassificationTag> instance returned by Visual Studio would simply return an empty list when GetTags() was called.

With some experimentation, I realized that while asking for an ITagAggregator<ClassificationTag> no longer worked, asking for an ITagAggregator<IClassificationTag> (that is, use the interface instead of the specific type) would indeed work. Plus, the same code would work just as well in VS2010!

return new KeywordTagger(
  ClassificationRegistry,
  Aggregator.CreateTagAggregator<IClassificationTag>(buffer)
) as ITagger<T>;
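
For context, that snippet lives in the MEF provider that hands out the tagger; simplified, it looks something like this (the attribute values here are illustrative, not necessarily the ones the real extension uses):

[Export(typeof(ITaggerProvider))]
[ContentType("text")]
[TagType(typeof(ClassificationTag))]
public class KeywordTaggerProvider : ITaggerProvider {
  [Import]
  internal IClassificationTypeRegistryService ClassificationRegistry = null;
  [Import]
  internal IBufferTagAggregatorFactoryService Aggregator = null;

  public ITagger<T> CreateTagger<T>(ITextBuffer buffer) where T : ITag {
    // Ask for the interface, not the concrete tag type, so the same
    // code works in both VS2010 and VS2012.
    return new KeywordTagger(
      ClassificationRegistry,
      Aggregator.CreateTagAggregator<IClassificationTag>(buffer)
    ) as ITagger<T>;
  }
}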

I was still not terribly thrilled about having to keep separate branches of the extensions with different project files and manifests to support both Visual Studio versions, so I started digging a bit more to see what other options there were. After a bunch of tests, I came up with something that works and allows me to keep a single VSIX file that works across both versions:

  • Modify the extension manifest to make it installable under VS2012. I did this by modifying the <SupportedProducts> tag in the .vsixmanifest file to add an entry for VS2012, like this:
    <VisualStudio Version="11.0">
      <Edition>Ultimate</Edition>
      <Edition>Premium</Edition>
      <Edition>Pro</Edition>
      <Edition>IntegratedShell</Edition>
      <Edition>VST_All</Edition>
    </VisualStudio>
    

    I do not know if these are the correct edition strings, but they work with the VS2012 Release Candidate Ultimate edition that is on MSDN. If anyone knows what the right strings should be, let me know and I’ll fix it.

  • I changed the MaxVersion attribute of the <SupportedFrameworkRuntimeEdition> tag to specify .NET 4.5. I don’t know if that is needed (or useful), but it probably wouldn’t hurt :)
    <SupportedFrameworkRuntimeEdition MinVersion="4.0" MaxVersion="4.5" />
    
  • Build the extension and package using VS2010, without changing the existing (and VS2010-specific) assembly references.

After trying this, the two extensions would load and run just fine in both VS2010 and VS2012, even if just one of them had been installed. I guess that VS2012 might be doing some assembly redirection when the extension is loaded, to ensure references are loaded correctly despite the fact that they have changed versions for 2012.

I’ve updated the code of KeywordClassifier and BetterXml on GitHub with the necessary changes. A big ‘Thank you!’ goes out to Oren Novotny for the help.

On the plus side, I discovered that my XAML Classifier Fix extension is no longer needed in Visual Studio 2012, now that the team introduced an explicit XAML Text classification.

Using PowerShell with Clustered MSMQ

My good friend Lars Wilhelmsen asked me a couple of days ago if I knew of a way to automate creating Queues on a clustered instance of MSMQ. Normally, a cluster of machines using Microsoft Cluster Service (MSCS) with MSMQ in Windows Server 2008 and up will have several MSMQ instances installed. For example, a 2-node cluster will typically have 3 MSMQ instances: 2 local ones (one for each node) and the clustered instance, which will be configured/running on a single node at a time.

So what usually happens is that if you try using System.Messaging from PowerShell on an MSMQ cluster, you’ll end up creating the queues in the local instance, not the clustered instance. And MessageQueue.Create() doesn’t allow you to specify a machine name, either.

The trick to getting this to work lies in KB198893 (Effects of checking "Use Network Name for Computer Name" in MSCS). Basically, all you have to do is set the _CLUSTER_NETWORK_NAME_ environment variable to the name of the Virtual Server that hosts the clustered MSMQ resource:

# Set the variable first, then load System.Messaging (see note below)
$env:_CLUSTER_NETWORK_NAME_ = 'myclusterMSMQ'
Add-Type -AssemblyName System.Messaging
[System.Messaging.MessageQueue]::Create('.\Private$\MyQueue')

One thing to keep in mind, though: You have to set this environment variable before you attempt to use any System.Messaging feature; otherwise it will simply be ignored because the MSMQ runtime will already be bound to the local MSMQ instance.

MSMQ and External Certificates

I’ve spent some time lately playing around with MSMQ, Authentication and the use of External Certificates. This is an interesting scenario, but one that I found to be documented in relatively unclear terms, with somewhat conflicting information all over the place. Plus, the whole process isn’t very transparent at first.

Normally, when you’re using MSMQ authentication, you’ll have MSMQ configured with Active Directory integration. With this setup, MSMQ will normally create an internal, self-signed certificate for each user/machine pair and register it in Active Directory. The sender can then request that a message be sent authenticated, which will cause the local MSMQ runtime to sign the message body plus a set of properties using this certificate, and then include the certificate’s public key and the signature (encrypted with the certificate’s private key) alongside the message.

The MSMQ service where the authenticated queue is defined will then verify the signature using the certificate’s public key to ensure the message wasn’t tampered with. If that check passes, it can then look up a matching certificate in AD, allowing it to identify the user sending the message for authentication (there are a few more checks in there, but this is the basic process).

This basic setup runs fairly well and is easy to test. Normally all you have to do here is:

  • Mark the target queue with the Authenticated property:
    msmq_auth
  • Have the sender request authentication. For System.Messaging, this means setting the Message.UseAuthentication property, while using the C/C++ API it would mean setting the PROPID_M_AUTH_LEVEL property to an appropriate value (normally MQMSG_AUTH_LEVEL_ALWAYS). A minimal sender sketch follows this list.
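
In code, the sender side of the basic setup is just a property on the message (a minimal sketch; the queue path is hypothetical):

using System.Messaging;

MessageQueue queue = new MessageQueue(@".\Private$\MyQueue");
Message msg = new Message();
msg.Body = "some data";
// Sign the message with the internal MSMQ certificate
msg.UseAuthentication = true;
queue.Send(msg);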

External Certificates

So what if you didn’t want to use the internal MSMQ certificate, for some reason? Well, MSMQ also allows you to use your own certificates, which could be generated, for example, by a Certificate Server CA installation trusted by your domain, or by some other well-known Certification Authority. To do this you need to:

  1. Register your certificate + private key on the sender machine, for the user that is going to be sending the messages. The docs all talk about registering the certificate in the "Microsoft Internet Explorer Certificate Store", which confused me at first, but it turns out that just means the CurrentUser\MY store.
  2. Register the certificate (public key only) on the corresponding AD object so that the receiving MSMQ server can find it. You can do this using the MSMQ admin console from the MSMQ server properties window, or using the MQRegisterCertificate() API.

I decided to try this using a simple self-signed certificate generated with MakeCert (a tool I hate having to use because I can never remember the right arguments!). I created my certificate in CurrentUser\MY, then registered the public part of the cert using the MSMQ management console.

The next part was changing my sender application code to use the external certificate, which is done by setting MessageQueue.SenderCertificate (or the PROPID_M_SENDER_CERT property). It wasn’t immediately obvious what format the certificate needed to be provided in, but it turns out it’s just the DER-encoded representation of the X.509 certificate. In .NET, this means you can just use X509Certificate.GetRawCertData()/X509Certificate2.RawData, or the equivalent X509Certificate.Export() with the X509ContentType.Cert value. Here’s the basic code:

X509Certificate cert = ...;

Message msg = new Message();
msg.Body = data;
msg.UseDeadLetterQueue = true;
msg.UseAuthentication = true;
msg.SenderCertificate = cert.GetRawCertData();
queue.Send(msg, MessageQueueTransactionType.Single);

I ran this on my Windows 7 machine and all I got was this error:

System.Messaging.MessageQueueException (0x80004005): Cryptographic function has failed.
   at System.Messaging.MessageQueue.SendInternal(Object obj, MessageQueueTransaction internalTransaction, MessageQueueTransactionType transactionType)
   at System.Messaging.MessageQueue.Send(Object obj, MessageQueueTransactionType transactionType)
   at MsmqTest1.Program.Main(String[] args) in C:\tomasr\tests\MSMQ\MsmqTest1\Program.cs:line 73

What follows is a nice chase down the rabbit hole in order to figure it out.

The Problem

After bashing my head against this error for a few hours, I have to admit I was stumped and had no clue what was going on. My first impression was that I had been specifying the certificate incorrectly (false) or that I was deploying my certificate to the wrong stores. That wasn’t it, either. Then I thought perhaps there was something in my self-signed certificate that was incorrect, so I started comparing its properties with those of the original MSMQ internal certificate used in the first test. As far as I could see, the only meaningful difference was that the MSMQ one had 2048-bit keys while mine had 1024-bit ones, but that was hardly relevant at this point.

After some spelunking and lots of searching, I ran into the MQTrace script [1]. With that in hand, I enabled MSMQ tracing and noticed a 0x80090008 error code, which means NTE_BAD_ALGID ("Invalid algorithm specified"). So obviously MSMQ was somehow using the "wrong" algorithm, but which one, and why?

In the middle of all this, I decided to try something: I switched my sender application to deliver the message over HTTP instead of the native MSMQ TCP-based protocol; all it required was changing the Format Name used to reference the queue. This failed with the same error, but the MSMQ log gave me another bit of information: the NTE_BAD_ALGID error was being returned by a CryptCreateHash() function call!

Armed with this dangerous knowledge, I whipped out trusty WinDBG, set up a breakpoint in CryptCreateHash() and ran into this:

003ee174 50b6453f 005e14d0 0000800e 00000000 CRYPTSP!CryptCreateHash

0x800e is CALG_SHA512. So MSMQ was trying to use the SHA-512 algorithm for the hash; good information! Logically, the next thing I tried was to force MSMQ to use the more common SHA-1 algorithm instead by setting the appropriate property in the message (PROPID_M_HASH_ALG):

msg.HashAlgorithm = HashAlgorithm.Sha;

And it worked! Well, for HTTP anyhow, as it broke again as soon as I switched the sender app back to the native transport. It turns out that the PROPID_M_HASH_ALG property is ignored when using the native transport.

Changes in MSMQ 4.0 and 5.0

Around this time I ran into a couple of interesting documents:

The key part that came up from both of these is that MSMQ 4.0 and 5.0 introduced security changes around the hash algorithm used for signing messages. In MSMQ 4.0, support for SHA-2 was added and MAC/MD2/MD4/MD5 were disabled by default, while SHA-1 remained the default. In MSMQ 5.0, SHA-1 was disabled by default as well, and SHA-2 (specifically SHA-512) became the default [2]. This readily explains why my test case was using SHA-512 as the hashing algorithm.

At this point I went back to using internal MSMQ certificates and checked what Hash algorithm was being used in that case and, sure enough, it was using SHA-512 as well. Obviously then something in the certificate I was providing was triggering the problem, but what?

And then it hit me: the problem had nothing to do with the certificate itself, but with the private/public key pair associated with it. When you create/request a certificate, one decision is which Cryptographic (CAPI) provider to use to generate (and, presumably, store) the key pair. I had been using the default, the "Microsoft Strong Cryptographic Provider", which according to this page apparently isn’t quite strong enough to support SHA-2 [3].

So I created a new certificate, this time explicitly using a cryptographic provider that supports SHA-2, by providing the following arguments to my MakeCert.exe call:

-sp "Microsoft Enhanced RSA and AES Cryptographic Provider" -sy 12

And sure enough, sending authenticated messages over the native protocol succeeded now. Sending over HTTP also worked with the updated certificate without having to set the HashAlgorithm property.

Exporting and Using Certificates

Something to watch out for that might not be entirely obvious: If you export a certificate + private key from a system store into a PFX (PKCS#12) file, your original choice of cryptographic service provider is stored alongside the key in the file:

pfx

Once you import the PFX file on a second machine, it will use the same crypto provider as the original one. This might be important if you’re generating certificates on one machine for use on another. The lesson so far seems to be:

When using external certificates for use with MSMQ 5.0, ensure you choose a cryptographic provider that supports SHA-2 when generating the public/private key pair of the certificate, like the "Microsoft Enhanced RSA and AES Cryptographic Provider".

At this point I’m not entirely sure how much control you have over this part if you’re generating the certificates in, say, Unix machines where this concept doesn’t apply (or how exactly it might work with external certification authorities).

External Certificates in Workgroup Mode

Another aspect I was interested in was the use of external certificates when MSMQ is in Workgroup mode. In this mode, authenticated queues aren’t all that useful (at least as far as I understand things right now), because without AD the receiving MSMQ server has no way to match the certificate used to sign the message to a Windows identity to use for the access check on the queue. In this scenario, messages appear to be rejected when they reach the queue, with a "The signature is invalid" error.

However, if the queue does not have the Authenticated option checked, then messages signed with external certificates will reach the queue successfully. The receiving application can then check that the message was indeed signed, because the Message.DigitalSignature (PROPID_M_SIGNATURE) property will contain the hash encrypted with the certificate’s private key, as expected. The application could then simply retrieve the public key and look it up in whatever application-specific store it has, to check it against known certificates.
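
A receiving application doing that kind of check might look something like this (a sketch; the queue path and the notion of a known-certificates list are application-specific, and IsKnownCertificate is a hypothetical helper):

using System.Messaging;
using System.Security.Cryptography.X509Certificates;

MessageQueue queue = new MessageQueue(@".\Private$\MyQueue");
// Ask MSMQ to hand us the signature and certificate properties
queue.MessageReadPropertyFilter.DigitalSignature = true;
queue.MessageReadPropertyFilter.SenderCertificate = true;

using ( Message msg = queue.Receive() ) {
  if ( msg.DigitalSignature != null && msg.DigitalSignature.Length > 0 ) {
    X509Certificate2 cert = new X509Certificate2(msg.SenderCertificate);
    // MSMQ verified the signature against this certificate; the
    // application still decides whether the certificate is trusted,
    // e.g. by checking cert.Thumbprint against a known list.
    bool trusted = IsKnownCertificate(cert.Thumbprint); // app-specific
  }
}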

My understanding here is that even though it cannot look up the certificate in the directory, the receiving MSMQ server will still verify that the signature is valid according to the certificate attached to the message. That’s only half the work, though, which is why the application should then verify that the certificate is known and trusted.

[1] John Breakwell’s MSMQ blog on MSDN is a godsend when troubleshooting and understanding MSMQ. Glad he continued posting about this stuff on his alternate blog after he left MS.

[2] The System.Messaging stuff does not provide a way to specify SHA-2 algorithms in the message properties in .NET 4.0. No idea if this will be improved in the future.

[3] I always thought the CAPI providers seemed to be named by someone strongly bent on confusing his/her enemies…