The other day at work I spent some time investigating an OutOfMemoryException being thrown by a third-party component used in a .NET 3.5 application. After a closer look and weighing up some options for a fix, we realized that the problem went away if the application was recompiled in .NET 4. This ended up causing some discussion among the team – how could simply recompiling the application that the .NET 3.5 third-party library was being used in fix a problem in the library itself?
One of our team members was convinced that if the third-party library had been built against .NET 3.5 and was then used in an application targeting .NET 4, the library would continue to run under .NET 3.5. On that basis, he was sure that recompiling the application in .NET 4 couldn’t actually have solved the problem.
Of course this triggered a Google search frenzy, which led us to this article on MSDN:
To give all add-ins the best chance of working, we always choose the latest runtime for managed COM activation. Even if you only have older add-ins installed, there is no way for us to know that when that add-in gets activated, so the latest runtime still gets loaded.
An unfortunate side effect of this activation policy is that when a user installs a new application with a new version of the runtime, completely unrelated applications that use managed COM add-ins, built against older versions, suddenly start running on a newer runtime and can fail.
For the .NET Framework 3.0 and 3.5, we solved this problem through an extremely strict policy: each release was additive and only added new assemblies to the prior version with the same runtime underneath. This prevented any compatibility issues when installing them on a machine running the .NET Framework 2.0.
This means that when you are running an app on the .NET Framework 3.5, you are really running it on the 2.0 runtime, with a few extra assemblies on top of it.
What the above implies is that the update from .NET 2.0 to .NET 3.5 was purely ‘additive’: Microsoft added some new functionality (new DLLs) but kept the existing runtime and core libraries the same for backwards compatibility. Indeed, if you take a look in the v3.5 folder under Microsoft.NET in the Windows directory, you’ll see a handful of new DLLs but no new core libraries replacing the .NET 2.0 versions.
Therefore, if the application we were dealing with had been using a .NET 2.0 third-party component and we had been rebuilding the whole application in .NET 3.5, all core types would still have been exactly the same under the covers, and our colleague would indeed have been correct that rebuilding the application couldn’t have solved the problem.
However, this no longer seems to hold from .NET 3.5 to .NET 4, as becomes clear if one reads on:
This means that when you are running an app on the .NET Framework 3.5, you are really running it on the 2.0 runtime, with a few extra assemblies on top of it. However, it also means that we couldn’t innovate in the .NET 2.0 assemblies, which include key functionalities, such as the garbage collector, just in time (JIT) and base class libraries.
With the .NET Framework 4 we have implemented an approach that allows high compatibility, including never breaking existing add-ins, and also lets us innovate in the core. We can now run both .NET 2.0 and .NET 4 add-ins in the same process, at the same time. We call this approach In-Process Side-by-Side, or In-Proc SxS.
So the .NET Framework 4 allowed Microsoft to ‘innovate in the core’ – in other words, there are new versions of the core libraries. Fair enough, Microsoft do need to be able to make improvements to existing code after all, otherwise we’d still be using .NET 1.0 at the core! But hang on, doesn’t this mean that if you re-build your application in .NET 4.0 that you’ll have to regression test the whole lot because who knows what core type implementations may have changed under the covers? Well no, not really, because of this new approach that Microsoft have adopted called In-Process Side-by-Side execution!
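Incidentally, which runtime an application binds to – and whether components activated through the legacy (pre-.NET 4) paths get rolled forward onto it – can be declared in its app.config. The snippet below is a sketch of the standard startup section; the useLegacyV2RuntimeActivationPolicy attribute and supportedRuntime elements are the documented .NET 4-era configuration, but treat the exact combination shown here as illustrative rather than a drop-in fix for any particular app:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <!-- Runtimes are tried in order; v4 is preferred here -->
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0" />
    <!-- Fall back to the 2.0 runtime if v4 is not installed -->
    <supportedRuntime version="v2.0.50727" />
  </startup>
</configuration>
```

Setting useLegacyV2RuntimeActivationPolicy to true tells the v4 CLR to bind legacy-activated (e.g. mixed-mode) components to the chosen runtime rather than loading the older runtime side by side.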
This all sounds great, until you read…
What Does In-Process Side-by-Side Mean to You? > Library Developers and Consumers:
In-Proc SxS does not solve the compatibility problems faced by library developers. Any libraries directly loaded by an application—either via a direct reference or an Assembly.Load*—will continue to load directly into the runtime and AppDomain of the application loading it. This means that if an application is recompiled to run against the .NET Framework 4 runtime and still has dependent assemblies built against .NET 2.0, those dependents will load on the .NET 4 runtime as well. Therefore, we still recommend testing your libraries against all version of the framework you wish to support.
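You can see this loading behaviour for yourself by printing both the CLR version actually hosting the process and the runtime each loaded assembly was built against. This is a minimal illustrative C# sketch (not from the MSDN article): compiled for .NET 3.5, Environment.Version reports 2.0.50727.x; compiled for .NET 4, it reports 4.0.30319.x – while a directly referenced .NET 2.0-era library keeps reporting v2.0.50727 as its ImageRuntimeVersion even though it is executing on the v4 CLR.

```csharp
using System;
using System.Reflection;

class RuntimeCheck
{
    static void Main()
    {
        // The CLR version actually hosting this process.
        Console.WriteLine("Runtime: " + Environment.Version);

        // The runtime version each loaded assembly was *built* against;
        // for a directly referenced .NET 2.0-era library this stays at
        // v2.0.50727 regardless of which CLR is hosting it.
        foreach (Assembly a in AppDomain.CurrentDomain.GetAssemblies())
            Console.WriteLine(a.GetName().Name + " -> " + a.ImageRuntimeVersion);
    }
}
```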
Yup, that’s right. If you’re either developing or consuming a library (and really, what developer isn’t doing at least one of those two?) and you decide to upgrade your application to .NET 4, Microsoft actually recommend that you test, test, test!
This piqued my interest – how different could these new core libraries really be? I installed dotPeek and opened up both the .NET 2.0 and .NET 4.0 mscorlib.dll assemblies. Drilling down to the Dictionary class, I found that there were in fact quite a few, albeit minor, differences between the two implementations!
Anyway, the moral of the story is – upgrading from .NET 2.0 to .NET 3.5 really isn’t a big deal – it’s always wise to do some regression testing, just in case, but you really shouldn’t find many issues with this migration. Upgrading from .NET 2.0 or .NET 3.5 to .NET 4.0 is a different story though. If you’re lucky, In-Proc SxS will help you with backwards compatibility, but if you’re developing or consuming libraries then you’d better have some regression test plans up your sleeve for this migration.
So at the end of the day, it seems like our third-party component bug could well be resolved by rebuilding the whole application in .NET 4 – who would’ve thought!