Tuesday, August 25, 2009

Adventures while building a Silverlight Enterprise application part #20

This post is about some of the less technical (a.k.a. functional) aspects of code generation, especially when XAML is involved, whether in Silverlight or WPF. This is a no-code post, intended to give you some food for thought when planning for code generation on different levels.

The basics
If you're involved in applications that have large amounts of, well, something, it may very well be feasible to use some form of code generation. You could use this for many things and most of them, if not all, have been done already. Some common items in software that may be candidates for code generation are:
  • Business objects
  • Screens / windows / forms in the GUI
  • Reports
  • Database scripts
And obviously there are a lot more. There are only two basic requirements for code generation: what you plan to generate needs to be in some format you can actually produce from a piece of code, and it needs to have some form of repetitiveness to it.

The most common thing to generate is your business objects. They tend to be very repetitive and, as they are written in some programming language, it is usually enough to generate plain text. If your application is based on a relational database, you might even be able to leverage the metadata in your database as a source for generating your business objects.
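As a sketch of that idea (the table name, connection string and the MapToClrType helper are all invented for illustration), a generator could read column metadata from SQL Server's INFORMATION_SCHEMA views and emit one property per column:

```csharp
// Hypothetical sketch: emit a C# property per column of a table,
// using INFORMATION_SCHEMA as the metadata source.
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    SqlCommand command = new SqlCommand(
        "SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS " +
        "WHERE TABLE_NAME = @table", connection);
    command.Parameters.AddWithValue("@table", "Person");

    StringBuilder generatedCode = new StringBuilder();
    using (SqlDataReader dataReader = command.ExecuteReader())
    {
        while (dataReader.Read())
        {
            string columnName = (string)dataReader["COLUMN_NAME"];
            // MapToClrType is an assumed helper that would translate
            // e.g. 'nvarchar' to 'string' and 'int' to 'int'.
            string clrType = MapToClrType((string)dataReader["DATA_TYPE"]);
            generatedCode.AppendFormat(
                "public {0} {1} {{ get; set; }}\r\n", clrType, columnName);
        }
    }
}
```

The point is only that the database already knows the field names and types, so a simple query can drive the text generation.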

Setting goals
The first thing you should do when thinking about code generation is set goals. What I mean by that is you should think about not only what you're going to generate but also why. Is this generated code only a starting point and are you never going to regenerate? Or are you interested in regenerating code at some point? The answers to these questions have a big impact on how your generation process should look.

If you're generating only once, you need to make sure that any metadata you use in the generation process is definitive. Experience tells us this is nearly impossible to achieve, so at least make sure the impact of a change is as minimal as possible and also make sure your generated code is readable.

If you are going to regenerate, consider the fact that not only do you need to run the process and make sure you're not losing any customizations on the generated code, you also need to make sure that any source of metadata is consumed again. More on that later. Another concern when regenerating is integration testing. When you regenerate part of your application's source, you've altered all that source and chances are something is going to fail because of it. Make sure you think about how you're going to at least test for failures, or better yet, prevent failures in the first place.

Have a business case
You may or may not have to pitch code generation in your organization, however you should always have a business case for a code generation process. What I mean by that is you should at least have some clue as to how much effort it will take to build the generation process and how much time it will save you in the end. Basically you need a Return On Investment calculation, at least for yourself, before you go off and invest a lot of time into this. I know it may seem that you won't invest that much at first, but trust me: you'll most likely end up spending at least twice the time you figured before you actually sat down and made a decent estimate.

When looking at this business case, make sure you do not only include the obvious. Everyone can figure out that it saves time not having to write a specific amount of code. What often gets left out is the quality of the generated code. It's consistent, which means you can test one instance of the code and know the rest will work. This also reduces the number of bugs, and if bugs are found in generated code, it reduces the effort needed to fix them.

Having worked for several software companies in different roles, I know from experience that having fewer bugs is an even bigger win than spending less money on writing code. A bug doesn't only cost money; at some point it also impacts reputation and customer satisfaction. Fewer bugs is good for everybody.

When things get tricky
You obviously can't generate every bit of code in your application, however most of us will be tempted at some point to try and generate something too hard. One of the most underestimated pieces of code to generate is part of the GUI.
Let's say you're building a Line Of Business application that has one hundred forms in the application that are being used for data entry. This is obviously a very common scenario. Not many developers like to go out and build that many forms. It just feels silly doing repetitive work if your job is all about automating repetitive tasks.

However, there is more than meets the eye here. Let's say you want to do this for a Silverlight application. At first glance, all you need to generate is some XAML for each form, and it needs to contain some TextBlocks for the labels and some other controls to be used for input. All seems fine so far.
Now consider the metadata you need to actually generate this. You may say you already have it, as it is the same metadata you used for generating your business objects. Is it? I doubt that. The only thing your business object tells you is what fields it contains and what types they are. It tells you nothing about their position on a form, and there is no information about any special behavior that one specific control needs, different from most others.

All that extra information needs to come from a functional designer, whether that's an application expert or even one of the developers on the team. Now let's say you have this metadata in some usable form. You're still not capable of generating a functional form. Now you need to bind to your business objects. Initially this isn't that hard, but things start to get complicated when your datamodel evolves, or when the functional designer decides to link in that one field from some other business object on the same form, because that is easier for the user.

Now let's say you even got that under control. You still need to tie in behavior. You need to attach events, which means you also need to generate the code-behind for your application, but you also want to be able to extend the code-behind at some point. The metadata for that process has to come from a developer at some point.

If you now look over the big picture for generating XAML forms, you have not one but three sources of metadata:
  • The datamodel
  • A functional designer
  • A developer
And these three sources of information need to be brought together so they can be used in a single generation process. Building something like this quickly becomes a project on its own, rather than just a means to an end.
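To make the combination concrete, the merged metadata for a single field on a generated form might look something like the class below. Every name in it is invented for illustration; it only shows how the three sources each contribute their own slice:

```csharp
// Invented illustration of merged form-generation metadata for one field.
public class FieldMetadata
{
    // From the datamodel: the underlying field and its type.
    public string FieldName { get; set; }
    public Type FieldType { get; set; }

    // From the functional designer: label, position and special behavior.
    public string Label { get; set; }
    public int Row { get; set; }
    public int Column { get; set; }
    public bool IsReadOnly { get; set; }

    // From a developer: events to wire up in the generated code-behind.
    public string ChangedEventHandlerName { get; set; }
}
```

Keeping these slices in one structure is exactly what makes the generation process a project on its own.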

Putting it in perspective
So shouldn't you do this? That's a good question and there is no simple yes or no answer. It all boils down to your ROI calculation, as I explained earlier. How much effort goes into building this way of constructing your GUI and implementing it in the organization? And how much effort is saved by no longer building and maintaining it in a traditional fashion?
Obviously if you have an incredibly large application, this might actually work, but another case in which this may work is if you're a one-man show. If you do the datamodel, the functional design and the development of the application, then this might just be worth the effort. The reason is that you control all the variables and there is no team to keep in check on how to work with this.

So what if you figure out this is not a good approach for your project, should you just go off and build a hundred forms by hand? Well, not so fast. If you can eliminate one or two of the sources of information, by not generating that part of the code (or at least not in the same component), you may just be able to make it feasible. The thing to keep in mind is that combining metadata sources makes things more complicated, so try and keep them separated.

I hope this article helps you out with your choices in whether or not to generate specific parts of your application code. If you have any questions, remarks, etc., you know the drill.

Friday, August 21, 2009

Pimpin' the blog part #3

Today we'll look at how I built my own RSS feed in ASP.NET 3.5 using Linq2Sql.

Before we dive into that, I would like to announce that my articles are now fed through this RSS feed to CodeProject.com. The latest of my articles (at least most of them) are now available there too. I would like to thank Sean Ewington from The Code Project for working with me and supporting me, so I could reach this bigger audience.

Now, back to business. As you may remember from part #1 of this series, the first goal for me was to customize my RSS feed, so I could add categories to my feed without messing up my navigation. I also laid out the phases I would go through to get to my new and improved blog, and after leaving you in part #2 I reached step three, which means I now have my content up and running in a database on the new platform and it gets updated through RSS from the Blogger.com platform.

I've now used the code I wrote for part #2 as a starting point for building the new RSS feed. As I had a great experience with the SyndicationFeed class for parsing a feed, I figured I might as well use it to publish a feed as well. The general steps needed to get the feed published are these:
  1. Setup a SyndicationFeed object with some general information like title, author, etc.
  2. Load the articles from the database and convert them into feed items
  3. Publish the feed
The first step actually did take some work, as I wanted a lot of things to be configurable through my web.config and it also takes some extra code to set most of the properties. To help me out with loading these settings I've built a SyndicationSettings class which holds the constants for the config key names and accesses the ConfigurationManager to get the actual values. I've made it a static class, so it is as easy to use as possible.
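As an illustration of what such a settings class might look like (the key names here are invented; the real ones live in my web.config), each property simply wraps a ConfigurationManager lookup:

```csharp
// Sketch of the static settings class described above.
// Key names are assumptions, not the actual configuration.
public static class SyndicationSettings
{
    private const string FeedTitleKey = "Syndication.FeedTitle";
    private const string DescriptionKey = "Syndication.Description";

    public static string FeedTitle
    {
        get { return ConfigurationManager.AppSettings[FeedTitleKey]; }
    }

    public static string Description
    {
        get { return ConfigurationManager.AppSettings[DescriptionKey]; }
    }

    // ...one property per setting (Copyright, FeedId, HomePageLink, etc.)
}
```

Because the class is static, calling code can just write SyndicationSettings.FeedTitle without any setup.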

Setting up the feed is done in the SetupFeed method:
private static SyndicationFeed SetupFeed()
{
    SyndicationFeed feed = new SyndicationFeed();
    SyndicationPerson person = new SyndicationPerson(SyndicationSettings.SyndicationPersonEmail,
        SyndicationSettings.SyndicationPersonName, SyndicationSettings.SyndicationPersonUrl);

    feed.Authors.Add(person);
    feed.Contributors.Add(person);
    feed.Categories.Add(new SyndicationCategory("CodeProject"));

    feed.Copyright = new TextSyndicationContent(SyndicationSettings.Copyright);
    feed.Description = new TextSyndicationContent(SyndicationSettings.Description);
    feed.Generator = GeneratorName;
    feed.Id = SyndicationSettings.FeedId;
    feed.Links.Add(new SyndicationLink(new Uri(SyndicationSettings.HomePageLink)));
    feed.Title = new TextSyndicationContent(SyndicationSettings.FeedTitle);
    return feed;
}

As you can see I've created a SyndicationPerson object to be reused both as the author and a contributor.

The next step is to load the articles from the database and convert them into SyndicationItem instances. First I needed to get them from the database, for which I expanded the functionality of the StorageConnection class I wrote in part #2. The basics are pretty straightforward, but as I was still fairly new to Linq2Sql I struggled a bit with getting the categories to load with the data automatically. Here is the GetArticles method from the StorageConnection:
public static List<Article> GetArticles(bool loadCategories, int maxNumberOfArticles)
{
    List<Article> articles = new List<Article>();

    DataLoadOptions dataLoadOptions = new DataLoadOptions();
    if (loadCategories)
    {
        dataLoadOptions.LoadWith<Article>(article => article.ArticleCategories);
        dataLoadOptions.LoadWith<ArticleCategory>(articleCategory => articleCategory.Category);
    }
    using (Developers42_DataClassesDataContext context = new Developers42_DataClassesDataContext())
    {
        context.LoadOptions = dataLoadOptions;
        articles = context.Articles.OrderByDescending(article => article.PublicationDate).Take(
            maxNumberOfArticles).ToList();
    }
    return articles;
}

I've used the LoadOptions property to let the Linq2Sql framework know I want to include both the ArticleCategory and the Category entities while loading the Articles. Furthermore, I used the OrderByDescending method to specify that I want the Articles ordered by PublicationDate, and I used the Take method to specify that I only want to load up to 25 articles at a time.
Converting the Article objects into SyndicationItem objects is as simple as calling the constructor including most of the properties and then adding the categories.
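That conversion could be sketched like this. The Article property names are my assumptions based on the data model described in this series, not the actual generated classes:

```csharp
// Sketch of the Article-to-SyndicationItem conversion described above.
// Property names (Content, FeedItemId, etc.) are assumptions.
private static SyndicationItem ToSyndicationItem(Article article)
{
    SyndicationItem item = new SyndicationItem(
        article.Title,
        article.Content,
        new Uri(SyndicationSettings.HomePageLink),
        article.FeedItemId,
        article.PublicationDate);

    foreach (ArticleCategory articleCategory in article.ArticleCategories)
    {
        item.Categories.Add(new SyndicationCategory(articleCategory.Category.Name));
    }
    return item;
}
```

The resulting items are then assigned to the Items property of the SyndicationFeed.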

As the final step, all that needs to be done is publish the result to the client. To do this, we first need to clear the existing response buffer, to make sure we don't send anything already in the template. Next we can simply create an XmlWriter that has the response stream as the underlying stream and call the SaveAsRss20 method on the SyndicationFeed with the XmlWriter as a parameter. This is how the code looks:

Page.Response.Clear();
XmlWriterSettings settings = new XmlWriterSettings();
settings.Encoding = Encoding.UTF8;
XmlWriter writer = XmlWriter.Create(Page.Response.OutputStream, settings);
feed.SaveAsRss20(writer);
writer.Close();


As you may have noticed, building your own RSS feed is easy when using the SyndicationFeed class. To help you even further I've included a link to the source here. It also includes the code from part #2 for parsing an RSS feed.

I hope this post was useful to you. Next time we'll look at parts of the new version of the Developers 42 blog.

Tuesday, August 18, 2009

Adventures while building a Silverlight Enterprise application part #19

This time I would like to share an experience I had with losing and then finding the inspiration to solve issues more effectively. I'll line out the issue, which involves saving to the database through Entity Framework, and how I got inspired. Of course we'll also look into the solution I found after getting inspired.

A lot of times when I tell non-developers that building good software is actually a creative activity, more than it is a technical one, they look at me like I'm telling them I'm the Queen of England. However, I'm not the only developer who gets stuck every now and then, not so much because I face such a hard-to-solve problem, but simply because I lack inspiration.
The general consensus is that it helps to put your mind on other things; however, in this case I felt I needed to recapture the spirit of writing cool code. I surfed around a bit and ended up on Channel 9. I browsed through the latest videos and found one about Windows 7 and how the kernel no longer uses dispatcher locks. You can watch it here:

[Embedded Channel 9 video: Arun Kishan on removing the Windows 7 dispatcher lock]

Now, this is obviously not on topic for anything I do, as I'm working on a Line Of Business application. Also, a lot of what was being told was completely new to me, because I never had such an in-depth look at the Windows kernel. However, the way Arun Kishan explores the issues with the dispatcher lock and possible solutions to the problem was very inspiring to me.

The issue
I started to describe my issue to myself. We have been building a framework that uses a generic communication model using WCF to pass data between a webservice and our Silverlight client. In this framework we have a single operation called UpdateItem which is responsible for saving any changes we make to the data (except for delete). So if I create a new business object, I pass it to UpdateItem and it persists it to the database. If I update a business object, again UpdateItem is called and it persists those changes to the database.

So far so good. However, I was building a popup in which a user is allowed to add items to a combobox, so they can then be used. I figured I would just create the business object without an ID, which indicates to the UpdateItem operation that this is a new object, and it would then persist it in the database. It turns out it didn't. Here is why.

Two types of adding data
In Entity Framework you can add objects to generic ObjectQuery instances which are contained in the ObjectContext. Some digging showed we didn't use this, because in our application we have two different kinds of adding data.

The first type we implemented was adding data through so-called 'parent' data. For example, you can have a Person object, which can have children. Now if I add a Child, the framework we've built knows about that and sets the Child.Parent property to the right Person object, and the Entity Framework detects this as an insert. We never call an Add on the ObjectContext and that's fine, because we added it through some other object.

However, in the situation I described earlier, the newly added object doesn't have any 'parent' data. It is just a sole record in some list somewhere, that may or may not get referenced by other objects. To be able to insert this I would need a second type of insert that would directly call an Add near or on the ObjectContext somehow. More importantly, I would need some way to detect whether the object actually did or did not have 'parent' data. To figure this out I dove deeper into the framework.

The internals of UpdateItem
If we have a look at our UpdateItem operation, this is basically what it does:
- Deserialize the incoming data
- Instantiate a new instance of the targeted business object
- Load the original data into the business object (if available)
- Call Save with the incoming data to persist it
- Serialize the business object and send the latest state back
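Purely as an illustration of that flow (every type and method name below is invented; the actual framework code is not shown in this post), the operation might be shaped like this:

```csharp
// Invented sketch of the UpdateItem flow outlined above.
public ItemData UpdateItem(ItemData incomingData)
{
    // Deserialize happens in the WCF layer; here we already have ItemData.
    BusinessObjectBase businessObject = CreateBusinessObject(incomingData.TypeName);

    // Load the original data if this is an update rather than an insert.
    if (incomingData.Id != null)
    {
        businessObject.LoadOriginal(incomingData.Id);
    }

    // Persist the incoming changes.
    businessObject.Save(incomingData);

    // Send the latest state back to the Silverlight client.
    return Serialize(businessObject);
}
```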

All the magic happens in the Save, so let's have a closer look. The original Save consisted only of three parts. It would start with pre-save preparations, which are specific to an object. Then it would make sure that all the changes needed would be in an EntityObject and then it would call Save on the ObjectContext. This works fine for the first type of save, where we had the 'parent' data, but not if there is no 'parent' data.

Choosing a solution
I figured I had several options to insert code that would fix the problem. The first option was to actually have this done in the pre-save preparations. This would allow me to fix the problem quickly now, however there are going to be a lot of these types of objects, that get saved without 'parent' data, so I decided to try and find a more generic solution that didn't need type specific implementation.

The second option was to expand the method that makes sure all the changes to the model are in the EntityObject (called BuildModelData). This would solve the problem, however determining if the object doesn't have 'parent' data is a very complex thing and BuildModelData was already complex as it was. So if I had no other choice I might go for it, but for the moment the search went on.

The final option was to expand Save, so it would somehow check if the object had 'parent' data or not and, if not, add it somehow to the ObjectContext. It would have to do so after BuildModelData and before calling Save on the ObjectContext. It turned out this was a lot easier than I expected.
The ObjectContext has a property called ObjectStateManager. This object actually keeps track of any changes made to the ObjectContext in any way. It has a method called TryGetObjectStateEntry, which allows you to see if an EntityObject is actually part of the changes made so far. It also provides information about the changes involved, but all I needed to know was if it was part of the changes or not.

Adding the object was even easier. The ObjectContext has an AddObject method, which takes a name and an EntityObject, which I both had already available. Here is the code I ended up adding:
ObjectStateEntry objectState;
bool hasObjectState = context.ObjectStateManager.TryGetObjectStateEntry(entity, out objectState);
if (!hasObjectState)
context.AddObject(name, entity);

As you can see, there is not much to it, but the best solutions are simple. However, testing this I found out that the DeleteItem operation, used for deleting objects, also calls Save, but then the EntityObject in entity is actually null (as it was deleted), so the Save would fail (although the object was still deleted from the database as well). A simple check on entity == null fixed that problem, and now I can handle both types of adding data through our framework.
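For completeness, this is roughly how the guard ends up looking with that null check folded in, using the same variable names as the snippet above (how exactly the check is placed in our Save is my paraphrase):

```csharp
// Guard against the DeleteItem case, where entity is null,
// then add the object only if the ObjectStateManager doesn't know it yet.
if (entity != null)
{
    ObjectStateEntry objectState;
    bool hasObjectState =
        context.ObjectStateManager.TryGetObjectStateEntry(entity, out objectState);
    if (!hasObjectState)
    {
        context.AddObject(name, entity);
    }
}
```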

I hope you've found this post as inspiring as it was for me to experience and write it. Please leave me comments if there is anything you need or want to share.

Thursday, August 13, 2009

Pimpin' the blog part #2

In this episode of Pimpin' the blog we are going to have a look at how I sync my data between Blogger.com and my new hosting platform. This involves a little-known feature of .NET related to RSS, and using Linq2Sql. Eventually we'll end up with a tool that reads my Blogger RSS feed and stores it in a SQL Server 2005 database (or any other compatible database for that matter).

Thoughts on synchronization
As I laid out in part #1, I'm far from ready to give up my Blogger account, as I still have many things to replace before I can do so. However, I don't feel much for keeping two stores of the same information synchronized by hand. I have better things to do with my time than that (although not that much better :-) ).
The first thing that came to mind was actually RSS as it keeps all the blog aggregate sites up to date as well, so why not use that? Besides, the final trigger to do all this was to customize my RSS feed in the first place, so why not use it as a source?
As a good developer I also went to see if I had any alternatives. I could opt for a HTTP/HTML spider, but it would be awkward, messy and complex. I could try and automate the export process for Blogger blogs, but again, awkward, messy and complex.

Loading a feed
So RSS it is then. The entire process is relatively simple:
  1. Get the feeds xml content
  2. Parse the feed into articles, etc.
  3. Store the articles and related data in the database
The first step is easy. Just take an HttpWebRequest and point it at the feed. Here is the code:

HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(FeedUrl);
if (UseProxy)
{
    request.Proxy = new WebProxy(ProxyUrl);
}

HttpWebResponse response = (HttpWebResponse)request.GetResponse();
Stream responseStream = response.GetResponseStream();

So as you can see, it first sets up the HttpWebRequest, so it can get to the RSS feed (using a proxy if needed). Then it just gets the response stream, which then contains the XML for the RSS feed.

The next step got me thinking. The first solution that popped into my mind was to use Linq2Xml. However that would involve a lot of code to get to all the different parts of information I needed. I googled around, read some blogs, until I ran into someone mentioning the SyndicationFeed object that's new in .NET Framework 3.5. I figured I would give that a try to see how it works and I could always go back to parsing the feed myself.

Here is the code to actually load the response stream into a SyndicationFeed instance:

XmlReader reader = XmlReader.Create(responseStream);

Feed = SyndicationFeed.Load(reader);

Wow, that was easy, now wasn't it? Keep in mind however that you do need to add a reference to both System.ServiceModel and System.ServiceModel.Web to make this work.
What I ended up with is a class that would handle loading the feed into a SyndicationFeed object that handled everything I needed in under fifty lines of code!

So that tackled step two of the process.
All that's left is to store it in the database. As I mentioned earlier, I chose Linq2Sql to handle this for me. Why? I have had extensive experience with Entity Framework and I do think that for large solutions it can be a good choice; however, it takes a lot of effort to make it do what you want, which is not what I needed here.
I read up on Microsoft's strategy on data access and why both Entity Framework and Linq2Sql are pushed, and found out that Linq2Sql is actually meant to support RAD on smaller projects, or at least for smaller data access layers. As my data model only consists of three tables, I guess my project qualifies as small.

I'm not going to bother you with the details of how I stored my articles through Linq2Sql and will just go ahead and post a link to the code below.
The main program to control this is more interesting:

string feedUrl = ConfigurationManager.AppSettings[FeedUrlConfigKey];
RssFeedReader reader = new RssFeedReader(feedUrl);
reader.UseProxy = UseProxyTrueValue.Equals(ConfigurationManager.AppSettings[UseProxyConfigKey],
    StringComparison.OrdinalIgnoreCase);
if (reader.UseProxy)
{
    reader.ProxyUrl = ConfigurationManager.AppSettings[ProxyUrlConfigKey];
}

foreach (SyndicationItem feedItem in reader.Feed.Items)
{
    List<string> categories = new List<string>();
    foreach (SyndicationCategory category in feedItem.Categories)
    {
        categories.Add(category.Name);
    }
    StorageConnection.AddArticle(feedItem.Title.Text, feedItem.Summary.Text, feedItem.PublishDate.Date,
        feedItem.Id, categories.ToArray());
}

First I set up my RssFeedReader instance and call ReadFeed on it. This results in a SyndicationFeed, on which I iterate through the Items collection. Then I get the categories and feed them into my StorageConnection class, which makes sure everything is properly stored in the database. The StorageConnection class also makes sure nothing is duplicated, even if the same article is added more than once.

Here is the source:

In part #3 of this series, we'll look into building a new RSS feed with some customizations, based on the data we've retrieved today.

Sunday, August 9, 2009

Pimpin' the blog part #1

In this article I will describe why I want to change my blog, what I'm trying to achieve, how I plan to do that and what's in it for you.

After writing only a few articles in Blogger and reading up on how to build a usable blog, I knew that at some point I would have to replace Blogger with something custom-built. As time passed and I tried to do more and more things with my blog, I ran into more and more things that I couldn't do the way I wanted.

Recently, as some of you may have noticed, I got a comment from Sean Ewington, who is the Chief Technical Editor for The Code Project. He invited me to publish my blog on CodeProject.com (thanks, Sean). To do this I would simply add my blog's RSS feed to their system and add a category to my articles, stating that they should be available on CodeProject as well. Unfortunately that last step is not possible for me, as that would mean I would now have a CodeProject category on my blog, which is not what I want.

This latest item on the list of things you can't do with Blogger triggered me to finally get to work on this and plan some new code.

Some things I want to be able to do after the complete transformation has happened:
  • Customize my RSS feed
  • Easier to customize styling
  • Have more control on advertising (individual posts have specific ads, etc.)
  • Use Silverlight for at least some part of my blog
  • Have a better layout with three columns
  • Have an easier to maintain and easier to use navigation
To get this all done requires some planning. It won't happen all in one go, obviously. The first thing I should state is that I've had a Windows hosting account lying around for a while now, and I will be leveraging that account for my blog 2.0.
Here are the steps I plan to follow for getting up and running with a new platform.
  1. Get a database up and running to store content
  2. Get the content on the new platform to sync with the content on Blogger
  3. Build a new RSS feed and publish it through my FeedBurner account (so if you have a subscription, you won't notice the change).
  4. Design a new layout for my blog
  5. Implement the new layout
  6. Test the two blogs side by side
After that I'm not sure. I might want to redirect all traffic coming to http://jvdveen.blogger.com/ to my new site, but I'm not sure what will happen with search engines and their spiders and I'm not sure if I'm even allowed to do this. However I'm still happy with the Blogger editor and I'm not sure if I want to build a completely new editor so I can then kill my Blogger account. This is something I'll be thinking about later on. If you have any suggestions, please let me know.

Why should you care?
Well, you don't have to care, however... as the title of this post suggests, I will be publishing about the whole process of creating my new blogging platform and then moving in. Not only might you learn something, I will also make available handy code in the process, that you might use yourself. So stick around, learn and benefit.

Tuesday, August 4, 2009

CodeEmbed4Web Beta 1 Patch

As most of you probably know, I recently released Beta 1 of CodeEmbed4Web. Unfortunately there was an oversight with the deployment, and I received some complaints from people who were unable to start up CodeEmbed4Web.

As of today, this issue is fixed. The link in the original post, which you can find through the link on the right, now points to the new setup, which you can download and reinstall.

The problem I ran into was that I used a WPF Toolkit theme, for which I included the WPF.Themes.dll assembly, but this assembly references WPFToolkit.dll, which you would only have on your computer if you installed an application that was already using the WPF Toolkit or if you are a WPF developer using the toolkit. If you are not (and chances are you aren't) and you don't want to reinstall CodeEmbed4Web, then you still need this file. You can download it here.

To patch CodeEmbed4Web Beta 1 successfully, you have to place this file in the installation folder of CodeEmbed4Web. If you've accepted the default path, that should be your Program Files folder, under \Developers 42\CodeEmbed4Web\.
Just paste the file there and it should work fine.

If you have any questions, suggestions, or other notes, please email them to codeembed4web@gmail.com or leave a message below.

Adventures while building a Silverlight Enterprise application part #18

Talking about ups and downs. Today we look into an issue with upgrading from Silverlight 2 to Silverlight 3. I feel Microsoft dropped the ball on this one, and I guess not a lot of developers have noticed it so far, as I didn't get any helpful feedback on silverlight.net.

But first I would like to share with you my first ever interview (OK, so it's only online, but hey, I'm not a famous blogger, right?). You can read it here. Thanks, BenSpark, for giving me this opportunity.

So, down to business. Here is the complete story. At first it may not seem very relevant, but hang in there.

It all started back in December 2008, when Microsoft released a separate update for the datagrid in Silverlight 2. We chose to install it, as it fixed an issue we had with the datagrid in combination with the combobox. Installation of this update involved replacing System.Windows.Controls.Data.dll in the Silverlight 2 SDK folder with a new version and then recreating any existing references to it. We went through the steps and all worked fine, so I completely forgot about it after that.

Now, about two weeks ago, after testing our application code with Silverlight 3, our product team decided to go ahead and upgrade to Silverlight 3. At that point I was working alone on the Silverlight solution, so I went ahead and upgraded my machine to Silverlight 3. I loaded the solution and it upgraded without any issues. I checked the solution in to Source Control and went on working on different parts of the application.

Last week a colleague was getting ready to work on the Silverlight solution as well, so he installed Silverlight 3 on a clean VM and loaded the solution. He tried to build, but it failed because it couldn't resolve the reference to System.Windows.Controls.Data.dll. We didn't think too much of it and recreated the reference, after which everything worked fine. He ran the application and nothing seemed wrong, so he checked in the solution.

I figured I would get the latest version from Source Control, to make sure everything was still working, but it wasn't. Sure, it built and I could run the application, but one of my datagrids was now not showing any data inside the records (it did still show the records themselves). Here is what I saw in the grid:

As you can see, the alternating background colors show the records are there, but no data is displayed. Breakpoints on the getters of the properties bound to the columns didn't get any hits, so something more complex was obviously wrong. The code didn't change, so that couldn't be the issue. I dug into this and spent almost two days finding out what was wrong.

Eventually I decided to fire up the VM I used for testing the upgrade and compile my source there. It showed exactly the same issue, but as compilation was a lot slower I noticed this one line in the output:

As you can see, it states a conflict between versions of System.Windows.Controls.Data.dll in different projects. The end of that line actually states:

Choosing "System.Windows.Controls.Data, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" arbitrarily.

What?? I don't like that at all. It doesn't know anything about these assemblies and just gambles to pick one. I feel a compiler error would be in place here, telling me to resolve the conflict manually. OK, one could argue that it did actually pick the Silverlight 3 version (the public key token is the one from Silverlight 3 and not 2). However, it may very well load the wrong dependencies, as I assume it did in this case, causing strange behavior. Even stranger, I checked the project file's XML and found out it actually explicitly references the Silverlight 2 version, including its public key token, so why didn't it reference that one?

What do you think about this? Please leave me a comment below.

I decided to recreate the reference, so it would now point to the Silverlight 3 version. I rebuilt the solution and now it works! The conflict is actually resolved, so it no longer comes up in the build output, and the datagrid shows the records it's supposed to show.

Lessons learned:
  • Read the build output when in trouble
  • If possible avoid manually updated assemblies
  • Don't trust that if it works on one machine it will work on others
So, with that frustrating problem out of the way, it's back to the daily work.