Monday, December 7, 2009

Adventures while building a Silverlight Enterprise application part #30

Today's post is about focus. It seems like a weird topic for my blog, but it was triggered by some major discussions on how resources were being assigned to our project. With this post I hope to shine some light on the subject.

Time management
This story actually starts years ago. I was leading a team at a small company when I was asked to sit in on a time management course. It was a day well spent. One of the most important lessons I took from this course was to focus on your goals. The point was to make you aware of all the things you do in your working life that do not contribute to your goals. With every task you are planning to do or are about to execute, you should ask yourself: "How is this going to contribute to my goals?" In order to do that, of course, one should have goals.

Getting and/or setting goals
So having goals is important in order to be effective in your job. There are different types of goals. Some are given to you by your boss/manager. Others you should come up with yourself.
When I applied for a job with my current employer, I was presented with a set of goals:
  • Help build a new version of the software we deliver right now
  • Help take the team to the next level in order to cope with the new technology and methods used
A lot of people would say that these goals are not SMART (Specific, Measurable, Achievable, Relevant and Time-bound); they are not concrete enough. For this purpose I tend to disagree. Remember, these are job goals. If you reach them, it might be time to look for another job (which doesn't necessarily mean with a different employer).

Conflicting goals
So I know my goals, but there are other people in the company with different goals, some of which tend to conflict with mine. This is completely normal, but people usually don't realize how important it is to be aware of it.
In my case the conflicting goals included making money now, which I couldn't help with as the product isn't ready yet, and supporting the software currently in the market, which directly conflicted with my goals, because it meant taking resources from my project and assigning them to other tasks. As these goals are not my goals, I protested, and the discussions started.
What surprised me was that the project manager on my project was the one defending the other goals. Apparently he has more goals than just this project, but still, considering the circumstances I expected a different response.

Balancing goals
At times your own goals can conflict with each other. If you look at my goals, this can certainly happen. Helping the team improve takes time, which means taking time away from building the new software, which impacts meeting deadlines. However, the goals can also support each other. If I can get a team member to work on the new software, he or she learns about the new technology, gains experience and also delivers work. Investing in team members pays off in the long run.

With two goals this is reasonably easy to work out and deal with. However, as the number of goals grows, the complexity usually grows with it. Dealing with multiple projects while coaching people and handling the conflicting goals of others takes a lot of time and effort, as I know from experience. One thing to do if you're a developer is to reduce the number of goals, and if you have to work on multiple projects, always make sure each project has a priority. If someone asks you to do something for a lower-priority project, the task itself should have a high priority, or else you shouldn't be doing it. Being brave enough to ask questions about tasks you were assigned, and to say no if needed, is the key here.
For managers and/or team leads: you shouldn't assign more than a few projects to any developer. You should also realize that it takes a significant amount of time to switch from one task to another. It's also your task to balance the goals for your team. Make sure you don't ask for several things at once without stating the order you expect.

I hope this shines some light on the subject and it helps you to be successful in your job. Please feel free to comment on this. I'm looking forward to reading your opinion.

Thursday, November 19, 2009

Adventures while building a Silverlight Enterprise application part #29

Silverlight 4 Beta is out and it has some cool features that could help our development, but we must wait. This article is about how new versions of a platform like Silverlight impact our current and ongoing development and why sometimes you need to wait and sometimes you need to push forward.

Project status
At this point in time we are working against the clock. We have a tight schedule and we can't afford any delays whatsoever. Obviously now is not the time to switch Silverlight versions, right? Well, it's not that simple. It's not so much that we couldn't make the time. It's more that the benefits should always outweigh the costs, and at this point in time, for us, they don't.

Pros and cons
Basically what we would benefit from right now:
  • Printing support
  • The new datagrid
  • Improved performance
The printing support is a must-have later on, but at this stage of development it just isn't that important for us. The new datagrid is a nice-to-have, because the one that's in there now is doing the job just fine. Improved performance is nice, but right now performance is good. It will most likely drop off later, which is why I do feel we need to switch to Silverlight 4 at some point.

The major downsides of switching right now are time and that four-letter word 'Beta'. We'd spend several days on first setting up a decent testing environment, doing a test upgrade, testing a lot of behavior, fixing bugs and then getting VS2010 Beta 2 rolled out to our development environments, including a TFS client. We would have those days if we could reduce the time spent on at least one of the features needed for the first release, but there just isn't one right now.

'Beta'
I've always loved working with cutting (or should I say bleeding) edge technology, and working with Microsoft betas has improved a lot over the past years, however...
...at times you must ask yourself if it's worth the risks. In this case, for example, our software deals with people's paychecks. Now, I don't know about you, but people tend to get emotional when their paycheck is calculated the wrong way, arrives too late, has all the taxes wrong or suffers some other mishap.
One thing managers in my company don't like to hear is 'Yeah, we know about that problem that messed up thousands of paychecks. It was a problem with the beta of technology X.' Shouting would ensue, and rightfully so.

"So I shouldn't make the switch?"
It all depends. Let me start by saying I am excited about Silverlight 4 and Visual Studio 2010, but as long as they are in beta they won't work for me. If you are working on code that is still a long way from release, or on software you can take a little risk with, or if there is that one thing in Silverlight 4 you have been waiting on for the past year, then you should make the switch.

Be happy you're on the cutting edge.

Wednesday, November 11, 2009

Adventures while building a Silverlight Enterprise application part #28

Today's post is about scaling a team, especially when innovating.

Why do you need to scale?
Well, that's an obvious question. You can't do everything yourself. Even if you're one of those "can't let it go, so got to do it myself" developers, you still might have colleagues who actually want to do some work on that cool product you're working on, or use that cool technology you've introduced. However hard this can be in the beginning (and trust me, I've been there), you should start to let go and trust other developers with your code. Eventually this will pay off.

Besides letting go, what's the issue?
A simplistic approach to managing a team that needs to produce something faster, is to just throw in a couple more people. It doesn't work that way. Especially when you're doing new things and/or working on a large code base, you can't just expect a developer to walk in and start working on it.

If a new technology is involved, the new developer needs to get familiar with what that technology is all about. Chances are some new tools need to be installed and some tutorials need to be followed to get a feel for the technology, and only then can they start to look at how it was applied in this particular project.

Also, if there is already a fairly large code base, then there are most likely a lot of concepts and principles being used in the design. A developer who has worked on this code from the start has encountered them gradually over time, but a developer just getting started will need time to come to grips with all the thought that has gone into it. For that, he or she also needs someone to go over these concepts and explain how they work.

One thing I'd like to point out to the developer who gets assigned to an existing project on a tight schedule: stop asking "why didn't you use A instead of B?". It's irrelevant. A choice was made, most likely with good reason, and even if A was better than B, there is no time to change it. So if you're assigned to a project in progress, stop asking "why" and only ask "how".

Communication
Another issue, which is actually documented very well, is team communication. As soon as you add a person to a project, there has to be communication between the existing members and that new team member in order to coordinate the work. That communication takes time as well. So if you add a person to a team, that person will spend some of their day communicating with others, and some team members will have to spend some of their days communicating with the new team member. It really adds up as you add more team members.

One way to reduce this effect is to avoid a model where everyone communicates with every other team member. Instead, there should be someone overseeing the entire project who coordinates the team in such a way that each member only needs to communicate with either the coordinator or with the one or two team members working on adjacent parts of the project. This takes out a lot of the pain already.
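The growth is easy to quantify: with n team members all talking to each other there are n * (n - 1) / 2 communication paths, while a coordinator model needs only n - 1 (plus a few links between members on adjacent parts). A quick back-of-the-envelope sketch of the difference (my illustration, not from any project metrics):

```csharp
using System;

public class CommunicationPaths
{
    // Full mesh: every team member talks to every other member.
    public static int FullMesh(int teamSize)
    {
        return teamSize * (teamSize - 1) / 2;
    }

    // Hub model: every team member only talks to the coordinator.
    public static int Coordinated(int teamSize)
    {
        return teamSize - 1;
    }
}
```

For a team of 8 that is 28 paths against 7, and adding a ninth member adds 8 new paths in the mesh but only 1 in the hub model.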

Another potentially powerful tool (if used correctly) is the stand-up meeting, a concept introduced by Extreme Programming. It means you take about 5 to 15 minutes a day (depending on team size) to meet with everyone at the same time and discuss any issues that came up in the past day. This way action can be taken right away, and the right people can be put together to take care of the issue.

General rules on scale
Some general tips on scaling a project:
  • Don't scale too early. If a project is still halfway through its design stage, or is still being innovated on a lot, then wait.
  • Don't scale too quickly. Once design is done and the innovation starts to slow down, don't just throw in five developers at the same time. Adding them one by one makes much more sense.
  • Balance the team members. Don't just throw in all the best developers you have and assume it'll work out fine. On most projects you need some people who are great at designing software, and you also need developers who are going to write bulk amounts of code.
  • Make sure early team members can work full time on the project. There is nothing less efficient than having to keep switching between projects, even if it's only for a question or for rework. Take the amount of time it takes to do something on an old project and multiply it by three. That's the time lost by switching.
  • Don't scale up close to a deadline. What counts as close depends on the project's scale. Because of the effort it takes, scaling up too close to a deadline will cause the project to run late, no matter what.
Well, that's it for today. I hope it helps you in future projects.

Tuesday, November 3, 2009

SilverlightShow.Net and me

It's been a bit of a crazy couple of weeks. I learned we'll have to move within the next two months, because the house we currently live in has been sold. I've also been quite busy getting parts of our new application finished.
Also I received an invitation from the editors at SilverlightShow.net to write articles for them. Obviously I'm honored (and it's great for my career :-) ), so I got started right away.

On Tuesday my first article was published on SilverlightShow.net.
You can find it here.

Many thanks go out to the editors at SilverlightShow.net for their great support. A special word goes out to Svetla Stoycheva who has supported me through the process of getting this post on SilverlightShow.net.

Wednesday, October 28, 2009

Adventures while building a Silverlight Enterprise application part #27

It's been a while again, but it's been a hectic time. I've been approached by the good people of silverlightshow.net, who have invited me to write some articles for them, but more on that later. I've also been struggling with something I saw coming but underestimated: fear of change. That's what today's post is about, fear of change in a software company.

The background
When I got hired at the company I've now been working for, for over a year, the main purpose was to take the current development team, which had been developing in older technology for a long time, and help them take full advantage of the latest Microsoft technology. Back then I took some time to think about the challenges I might face. One of the things I quickly realized is that fear of change would eventually become an issue. I never imagined it would be this strong and this tough to overcome.

As we started discussing how the new system should look, the other developers were thrilled and everything looked good. But as time progressed and team members became more aware of the changes on different levels, some started to revert back to old ways. At times this has been fed by non-technical colleagues, sometimes it is fuelled by the fact that a lot of custom code is outstanding with customers, and quite a few times it is caused by fear of change in some of the developers themselves.

Now, this all sounds very negative about a team I'm part of myself, but I've seen it happen a lot of times and I know a lot of others are struggling with the same thing. Do you hear sentences like "We've always done it this way and it works" and "I've seen that approach fail catastrophically once", as if they were arguments not to change anything? I hear them at least once every couple of weeks.

Why is fear of change a problem?
One might argue that this is not a problem. Just do your job and that's that. As long as the software you build fulfills the users' needs, you're a winner, right? Well, no. With the old version of the software we keep seeing issues with maintenance, deployment and running it in a web environment. This makes maintaining the software expensive and complicated, and it limits what we can do. All the more reason to do things differently, right?
So from a business perspective not changing things will eventually cause problems. Does that mean that the old software was bad? No. The point is that the environment the software has to work in combined with the requirements that are put on it, change. This means changing the software becomes inevitable.

Take Windows for example. The kernel originally built for Windows NT back in the late eighties was a great piece of software engineering, and it worked fine for many years, providing great stability to several incarnations of the Windows OS. Only in the past couple of years did the IT landscape start changing in terms of what was expected from a kernel. More and more people started using multi-core CPUs, even in their PCs, and multi-CPU PCs are emerging in the marketplace more and more. So Microsoft made the decision to rewrite part of the NT kernel to support these multi-core systems more efficiently. Tough? Sure. Scary? I'd say so. But that didn't stop them from doing it, and this change is now part of Windows 7. Again a stable OS, no end-of-the-world-as-we-know-it scenarios, just another major release of Windows. Sure, it must have cost the people working on it some blood, sweat and tears, but they can look back at a great achievement.

So "we've always done it this way and it works" is not really an argument against change. The fact that it works doesn't mean it cannot be improved upon, adapted to new possibilities and requirements. In fact, whenever someone says this to me, it makes me want to change it even more; not for the sake of it, but because it makes it all the more evident that the team needs that change in order to move forward.

The other one, "I've seen that approach fail catastrophically once", and less drastic expressions along the same lines, are another expression of fear. So it failed the last time you tried it? Great, that means you have experience and you already know some of the pitfalls. All the more reason to try that approach again and succeed.

So now what?
So what am I going to do about it? Fight. This may sound simplistic, but it starts with that. I realize that every change I'd like to make to the system is possibly going to be a struggle. Some of these might even fail.
Preparation is key here. It will take a lot of thought to determine what changes might be subject to resistance and what arguments I'm going to bring to the table to convince both technical and non-technical colleagues that this change is necessary and good.
It also helps to make others aware that this fear of change is actually there and that it is a problem for moving forward. In the end, any help is welcome.

That concludes this episode, which I hope helps people better understand this problem. Another great article on this that I came across was written by Alexander Johannesen and can be found here.

Monday, October 19, 2009

Message framework design considerations

A while back I posted some thoughts on using a message framework as a way of abstracting part of the distributed computing problem. I stated I would start writing some code, and so I have. Along the way I had to make some interesting design decisions I wanted to share with you. This article is about those decisions and some of the reasons behind them.

Targeting the right objects
One of the first things I had to make a choice about, was how to target the right objects. Basically I needed two scenarios.

The first scenario is that Object A needs some task to be done by an object of type B. Object A doesn't care which instance of type B actually does the job; it doesn't even care where that instance might be. In this case Object A should send out a message targeting type B, which means using the fully qualified class name as the receiver address.

The second scenario is that Object A sends out a message to notify other objects but it doesn't know which objects might be interested in the message. In this case other objects should register themselves with a dispatcher, stating that they are interested in some type of message. In this case Object A shouldn't include a receiver address at all.

But what about interaction with the system? This wouldn't be sufficient in a multi-user scenario, because you can't target a specific instance of an object, which would be needed to actually give feedback to the user. I've decided that being able to target a specific object is not part of the low-level messaging. To achieve a scenario where users only receive their own messages back, there should be a layer on top of the existing messaging.
I chose this approach because it would be highly impractical to keep track of individual object instances across the entire ecosystem. Remember, one of the premises is that an object can be anywhere and it shouldn't matter for sending a message. By introducing unique addresses per instance, it starts to matter, because a sender would need to know these unique addresses.
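The two addressing scenarios can be sketched with a toy dispatcher. This is purely an illustration of the idea; MiniDispatcher, SketchMessage and TypeBWorker are made-up names and the real framework's API differs:

```csharp
using System;
using System.Collections.Generic;

// Minimal message shape for this sketch.
public class SketchMessage
{
    public string Sender;    // fully qualified class name of the sender
    public string Receiver;  // fully qualified class name of the target type,
                             // or null for "notify whoever is interested"
    public string Body;
}

// Stand-in for "an object of type B" that can do some work.
public class TypeBWorker
{
    public SketchMessage LastMessage;

    public void Handle(SketchMessage message)
    {
        LastMessage = message;
    }
}

public class MiniDispatcher
{
    private readonly Dictionary<string, List<Action<SketchMessage>>> _byTypeName =
        new Dictionary<string, List<Action<SketchMessage>>>();
    private readonly List<Action<SketchMessage>> _subscribers =
        new List<Action<SketchMessage>>();

    // Scenario 1: register a handler under its fully qualified type name.
    public void Register(Type receiverType, Action<SketchMessage> handler)
    {
        if (!_byTypeName.ContainsKey(receiverType.FullName))
        {
            _byTypeName[receiverType.FullName] = new List<Action<SketchMessage>>();
        }
        _byTypeName[receiverType.FullName].Add(handler);
    }

    // Scenario 2: register interest in messages that carry no receiver address.
    public void Subscribe(Action<SketchMessage> handler)
    {
        _subscribers.Add(handler);
    }

    public void Dispatch(SketchMessage message)
    {
        if (string.IsNullOrEmpty(message.Receiver))
        {
            // No receiver address: broadcast to all interested parties.
            foreach (Action<SketchMessage> handler in _subscribers)
            {
                handler(message);
            }
        }
        else
        {
            // Addressed by type: any one instance of that type will do.
            List<Action<SketchMessage>> handlers;
            if (_byTypeName.TryGetValue(message.Receiver, out handlers))
            {
                handlers[0](message);
            }
        }
    }
}
```

The key point is that the sender never holds a reference to the receiving instance; the address is just a type name or nothing at all.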

Interprocess communication
Another important aspect of the messaging framework is the communication between different processes. Again, an object can be anywhere, but also anything .NET should work. I've investigated using WCF, because it is obviously a very flexible and configurable way of communicating, scaling out from local (on the same machine) to global (across the internet). However, it does put some bloat on the framework for people who don't want to use it.

I've also considered using Microsoft Message Queuing (MSMQ), which kind of makes sense for a messaging framework. However, this would involve building several tools around MSMQ to make sure all process types would be able to access the message queue, and it would also put a deployment strain on the framework beyond what I find acceptable.

I settled on WCF, which doesn't mean that MSMQ is completely discarded. In the future it might prove valuable to build libraries around MSMQ to incorporate it into the framework. It does mean that by default every process includes code for both a WCF host and a client. However, these are only instantiated as soon as a process is attached to another process.

IMessageReceiver
To indicate that an object can receive messages, it needs to implement the IMessageReceiver interface, which (at least for now) contains only one method, called ReceiveMessage. It takes a single parameter of type IMessage. The IMessage type has two properties, Sender and Receiver, which are both strings (containing the fully qualified class names of the objects involved in the communication).
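Rendered directly in code, the interfaces described here would look something like this. This is my reconstruction from the description above, so the real framework may differ in details; TextMessage and LoggingReceiver are just throwaway implementations to show the contract in use:

```csharp
using System;

public interface IMessage
{
    // Fully qualified class names of the objects involved.
    string Sender { get; set; }
    string Receiver { get; set; }
}

public interface IMessageReceiver
{
    void ReceiveMessage(IMessage message);
}

// A trivial message carrying some text.
public class TextMessage : IMessage
{
    public string Sender { get; set; }
    public string Receiver { get; set; }
    public string Text { get; set; }
}

// A trivial receiver that just remembers the last message it saw.
public class LoggingReceiver : IMessageReceiver
{
    public IMessage Last;

    public void ReceiveMessage(IMessage message)
    {
        Last = message;
    }
}
```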

MessageDispatcher
To make sure objects don't have to worry about getting a message to the receiver, every process should have exactly one MessageDispatcher object, whose sole responsibility is to collect messages and dispatch them to the right objects and/or to other processes as needed.

To attach to another process in the ecosystem, the process in question has to make a request to the other process. Because we already have messages as the interface between processes, that's exactly what I want to use. I've introduced a FrameworkMessage type that implements IMessage, so I can send it across to another process, in effect registering as a client of that process. To make sure the other process can also communicate back, its response is to send a registration message of its own, making itself known as a client process too.
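The registration handshake can be sketched in-memory like this. Note the assumptions: the "Register" kind and the direct Receive call stand in for what would really be a FrameworkMessage travelling over WCF, and SketchDispatcher is an illustration, not the actual MessageDispatcher:

```csharp
using System;
using System.Collections.Generic;

// A standalone sketch of the registration handshake between two processes.
public class FrameworkMessage
{
    public string Sender;  // name of the registering process
    public string Kind;    // assumed discriminator; the post doesn't name one
}

public class SketchDispatcher
{
    public string ProcessName;
    public readonly List<SketchDispatcher> Clients = new List<SketchDispatcher>();

    public SketchDispatcher(string processName)
    {
        ProcessName = processName;
    }

    // Attach to another process by sending it a registration message.
    public void AttachTo(SketchDispatcher other)
    {
        other.Receive(new FrameworkMessage { Sender = ProcessName, Kind = "Register" }, this);
    }

    public void Receive(FrameworkMessage message, SketchDispatcher from)
    {
        if (message.Kind == "Register" && !Clients.Contains(from))
        {
            Clients.Add(from);
            // Reply with our own registration so the channel works both ways.
            // The Contains check above stops the ping-pong after one round trip.
            from.Receive(new FrameworkMessage { Sender = ProcessName, Kind = "Register" }, this);
        }
    }
}
```

After one AttachTo call, both dispatchers know about each other and no further registration traffic occurs.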

Conclusion
As you can see there is a lot to think about when building a messaging framework. I'll keep working on this every now and then. Next time I'll try to get some code in.

Tuesday, October 13, 2009

Adventures while building a Silverlight Enterprise application part #26

The other day I ran into a scenario where I needed to databind to some primitive-type variables, like a bool or a string. As I didn't want to build class after class to solve this, I sought a more generic solution. This article describes that solution.

The requirements
The requirements are simple:
  • I should be able to databind to any primitive type
  • The solution should be easy to use
  • The solution should require as little code as possible
Analysis
Analyzing this, I quickly realized a generic class wrapping the actual variable would be a nice solution. I started with this:
public class BindableObject<T>

A simple class declaration taking in any type. Obviously I needed the class to support binding to its properties, so I included the INotifyPropertyChanged interface in the declaration and implemented the PropertyChanged event and a method to trigger that event.
I also needed to expose a property to contain the actual value, so I included the Value property and its backing field.

Now I would write something like:
BindableObject<bool> someObject = new BindableObject<bool>();
Binding someBinding = new Binding("Value");
someBinding.Source = someObject;
someBinding.Mode = BindingMode.TwoWay;
textBox1.SetBinding(TextBox.IsEnabledProperty, someBinding);

As you can see, this is still quite elaborate. I decided on adding a method to do the binding inside the BindableObject class. I ended up with a class that looks like this:

public class BindableObject<T> : INotifyPropertyChanged
{
    private T _value;

    public T Value
    {
        get
        {
            return _value;
        }
        set
        {
            _value = value;
            DoPropertyChanged("Value");
        }
    }

    public void BindTo(FrameworkElement element, DependencyProperty property, BindingMode mode)
    {
        Binding binding = new Binding("Value");
        binding.Source = this;
        binding.Mode = mode;
        element.SetBinding(property, binding);
    }

    #region INotifyPropertyChanged Members

    private void DoPropertyChanged(string propertyName)
    {
        if (PropertyChanged != null)
        {
            PropertyChanged.Invoke(this, new PropertyChangedEventArgs(propertyName));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    #endregion
}

It's a fairly simple class. You can now write the previous example like this:
BindableObject<bool> someObject = new BindableObject<bool>();
someObject.BindTo(textBox1, TextBox.IsEnabledProperty, BindingMode.TwoWay);
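The notification half of the class can also be exercised outside Silverlight (BindTo needs the Silverlight runtime for Binding and FrameworkElement, so it is left out here). A trimmed-down console version of the same idea, renamed BindableValue to keep it apart from the real class:

```csharp
using System;
using System.ComponentModel;

// The BindableObject<T> idea minus the Silverlight-specific BindTo method.
public class BindableValue<T> : INotifyPropertyChanged
{
    private T _value;

    public T Value
    {
        get
        {
            return _value;
        }
        set
        {
            _value = value;
            DoPropertyChanged("Value");
        }
    }

    private void DoPropertyChanged(string propertyName)
    {
        if (PropertyChanged != null)
        {
            PropertyChanged.Invoke(this, new PropertyChangedEventArgs(propertyName));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}
```

Any binding engine (or plain event handler) subscribed to PropertyChanged is notified as soon as Value is set, which is all the two-way binding scenario needs from this side.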

I'm pretty happy with this and I hope it proves useful to you too.

Thursday, October 8, 2009

Developers 42 online for one year!

Today is the day, Developers 42's first anniversary!
In this post we'll look back to see what was hot and what was not. We'll also look forward to the coming year. You now, make plans, decide where to go from here, etc.. And of course their will be some words of gratitude as I couldn't have done it all by myself.

The past year
On October 1st 2008, I started to work for a new company in a role I hadn't been in for years. I was going to work on a product again. I decided I needed a blog to keep track of what I do and to share some of my experiences with you out there. It took some preparation, but on October 9th 2008 my first post went online. It was about a trick to lay out a report in Reporting Services 2005, and in the first week it got... wait for it... zero hits, nothing. Obviously no one actually knew about my blog, and I felt I needed to keep it that way until I had some more content online.
This went on for a while, until I posted this article, called Lookup Combobox in Silverlight 2. Now hits started dribbling in, still one at a time, but they did come. This article proved to be a winner. It is actually the single most visited article of the past year: with 1169 unique page views, it is second only to the home page.

My big break came on October 26th, when an article I had written a couple of weeks before was picked up by Dave Campbell and put on his Wynapse blog. All of a sudden, page views skyrocketed to 148 unique views the next day! Things didn't slow down until March 2009, and along the way I hit a peak of 306 unique views on February 20th, which is still the all-time high.

But page views aren't the only thing. I looked at the RSS feed stats, and that's where the really cool stuff is. The feed has seen steady growth throughout the past ten months, from 5 to 10 subscribers in December all the way up to around 85 subscribers right now. How cool is that?
Check out the graph:

And I've done some other stuff as well. I published beta 1 of CodeEmbed4Web (not with great success, unfortunately). I've written 25 posts in the Adventures series, and I now have my articles published on CodeProject.com as well. Some of my articles are getting serious hits there, exceeding anything I get on my blog. Overall it's been a hectic year with ups and downs, but I'm still happy to be here.
What kept me going was a comment every now and then with a 'Thank you, I needed that' or an 'I was stuck on this and you fixed it' kind of message. These are a great encouragement to keep posting.

Plans for the coming year
As promised, I'll also give a peek into what I want to do next year. Of course I'd like to see some growth in traffic, both through the site and through the RSS feed. But that's not the only thing. I'm still working on moving the blog to a new custom platform. In fact, I've got something to show you right now:


This is the new logo. The music was done by Kevin MacLeod. You can find his website here.

I'm still planning on releasing CodeEmbed4Web RTM. As for true content, I'm looking into doing more nitty-gritty stuff on C# and .NET, maybe diving into some of the inner workings of the framework, especially with the launch of C# 4.0 coming up.

Some words of gratitude
There are some people I'd like to say thank you to. First of all I'd like to thank Dave Campbell for sending all this traffic my way. I'd also like to thank Sean Ewington from CodeProject.com for being a great help with publishing my articles there. Another thank you must go to Drew Bennet from I'm not a famous blogger. His work encouraged me to go on when things didn't go so well.
The last big thank you goes out to you! Yeah, you there on the other side of the line, thanks for being here.

Callout to all readers
Please let me know if there are things you'd like to see changing on my blog. It could be there is some subject you'd like to see discussed, it could be you'd like to see a different format, have more content or less, or you just don't like the font I use, but please give me feedback so I can improve.
Another thing I'd like to ask you all is to spread the word if you really like my blog or a particular article. Please let your peers know.

Thanks again, and I'm looking forward to your comments.

Adventures while building a Silverlight Enterprise application part #25

In the previous episode I discussed how an enum can be used to encapsulate char values in a database, in this case using a Gender enum as an example. As cool as that was, it does pose a problem with databinding in Silverlight that I'd like to share with you. Part of that explanation leads us into the realm of reflection (that would make for a cool blog title :) ) and then into a cool trick in binding, all to end with a bit of a disappointment. So if you already feel depressed, stop reading now :-P.

The story
First there was a BusinessObjectBase. This class handles a lot of the work around transporting data between the service and the client and back. It also helps keep track of property changes and sends notifications about them. It's a very generic class and it serves as a base class for all our 'business' objects.
In this case, let's say there is a PersonBase class derived from BusinessObjectBase. PersonBase is generated based on some metadata, so we don't want to touch this source. The PersonBase class has a property called Gender that is of type string.
Then there is the Person class, which was also generated but is meant to be used for custom code, so this is where we will write some code to make our Gender available as a Gender enum instead of a string.
We could simply write something like this in the Person class:
private char GetGender()
{
    // DefaultGender is a char constant defined elsewhere in the class.
    string gender = base.Gender;
    if (gender.Length == 0)
    {
        return DefaultGender;
    }
    return gender[0];
}

public new Gender Gender
{
    get
    {
        return (Gender)GetGender();
    }
    set
    {
        base.Gender = ((char)value).ToString();
    }
}

Our intention here is to hide the base implementation of the Gender property and have our own implementation that returns an actual Gender enum. Nothing special so far.
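As a standalone illustration of the char-to-enum trick itself, separated from the Person classes: the enum members and the DefaultGender value below are assumptions for the sake of the example (the real enum was defined in the previous episode).

```csharp
using System;

// Assumed enum: each member's value is the char stored in the database.
public enum Gender
{
    Unknown = 'U',
    Male = 'M',
    Female = 'F'
}

public class GenderMapping
{
    public const char DefaultGender = 'U';

    // string column from the database -> enum
    public static Gender FromDbValue(string dbValue)
    {
        if (string.IsNullOrEmpty(dbValue))
        {
            return (Gender)DefaultGender;
        }
        return (Gender)dbValue[0];
    }

    // enum -> single-char string for the database
    public static string ToDbValue(Gender gender)
    {
        return ((char)gender).ToString();
    }
}
```

The round trip works because a char converts to the enum's underlying int and back, so no lookup table is needed.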
Next step is to use this new property in a databinding scenario. Here is what the binding statement in XAML would look like.
{Binding Path=Gender, Mode=TwoWay}

Now if you try to use a binding statement like this and run the code, what happens is that you get an AmbiguousMatchException, the reason being that the binding engine can't distinguish between PersonBase.Gender and Person.Gender.
What? But we wanted to hide PersonBase.Gender, right? Absolutely, but both properties are public, so both are actually available.

Reflecting on the 'new' property
To get a better understanding of why this was happening, I decided to write some reflection code:
object person = new Person();
object personBase = new PersonBase();

Type personType = person.GetType();
PropertyInfo personGenderProperty = personType.GetProperty("Gender");
MessageBox.Show(personGenderProperty.GetValue(person, null).ToString());

Trying to get the value of the Gender property through reflection threw the AmbiguousMatchException, just like databinding did. Calling personType.GetProperties from the Immediate Window in Visual Studio returned both properties, and then the exception all of a sudden made sense.

What might seem a bit awkward is the fact that both properties are there. If I tried to write string gender = somePerson.Gender where somePerson is of type Person, it would not compile, because the property resolves to the Gender enum and there is no implicit cast to string, proving that the base property really is hidden from the compiler. Still, having the property available is needed, because the derived class needs to be able to call the property in the base class.
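As an aside, plain reflection does let you pick which of the two properties you mean. Here's a small self-contained sketch (my own PersonBase/Person pair, mimicking the classes above) showing that GetProperty with BindingFlags.DeclaredOnly resolves the ambiguity by only considering properties declared on the derived type itself:

```csharp
using System;
using System.Reflection;

public enum Gender { Female = 'f', Male = 'm' }

public class PersonBase
{
    public string Gender { get; set; }
}

public class Person : PersonBase
{
    // Hides PersonBase.Gender; both properties remain visible to reflection.
    public new Gender Gender
    {
        get { return (Gender)base.Gender[0]; }
        set { base.Gender = ((char)value).ToString(); }
    }
}

public static class Program
{
    public static void Main()
    {
        Person person = new Person();
        // The cast reaches the hidden base property, since it isn't virtual.
        ((PersonBase)person).Gender = "m";

        // typeof(Person).GetProperty("Gender") alone throws AmbiguousMatchException.
        // DeclaredOnly restricts the search to properties declared on Person itself.
        PropertyInfo declared = typeof(Person).GetProperty(
            "Gender",
            BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly);

        Console.WriteLine(declared.GetValue(person, null));
    }
}
```

This doesn't help with the binding engine, of course, as we don't control how it looks the property up, but it does confirm what GetProperties was telling us.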

Trying to bind to it anyway
Still, I needed to find a way to bind to this property. I came up with some simple solutions:
  1. Make the base property protected
  2. Rename one of the properties to remove the problem altogether
  3. Find a way to distinguish between the two properties in databinding
The first solution doesn't work for me, because I can't touch this code (it was generated, remember?).
The second solution would mean that I'd have to rename the property in the derived class, which doesn't seem very nice.
The third solution was a long shot, but I had to give it a try. I did actually find this syntax to use for a case like this, however it wasn't very well documented, so I did some experimentation to get a better understanding of it. I started writing this in XAML:
{Binding Path=(local:Person.Gender)}

This failed with an exception telling me that this was an invalid value for the attribute. I was surprised and puzzled, because I had read about other people using it and it is actually in the documentation like this. Some further investigation taught me that this syntax only works on dependency properties. I've built a small example of this and it actually works very well, however...

...having a dependency property with all the plumbing involved is best done by deriving your class from DependencyObject (which contains stuff like GetValue and SetValue). I obviously can't derive Person or PersonBase from DependencyObject, because they already derive from another class. This means I now have to rename the property in the derived class :(.

You can find the example of binding to a new property here. Hopefully it is helpful to you. It was a great learning experience overall. Just too bad it didn't lead to a better solution for me.

Tuesday, October 6, 2009

Adventures while building a Silverlight Enterprise application part #24

Recently I was working out some details to include in our application framework, when I encountered some old-school enum stuff that I'd like to share with you. It's all about implicitly using char as sort of the underlying type of an enum and why you might want that.

The story
As I was working on one of the modules we need in our application I had to figure out a way to store a person's gender. As it turns out, we don't have any data in the database specifying a list of genders. As this is all about what gender is on someone's passport, we only needed two values (obviously 'male' and 'female') and we decided to only store the first character of these values in the database. The reason we didn't want to have a Gender table in our database was clarity. We have introduced a rule saying that all primary keys should be of the SQL type uniqueidentifier (.NET type Guid), and having a foreign key for gender would mean storing a Guid for each and every gender reference in the database, which would not only be overkill, but would also make things complicated.

So on the database side we had a char(1) column. But that's not what you want throughout your .NET code. Having an enum for this scenario is the obvious choice and that is what I went for. Luckily Silverlight is very well equipped for data binding enum types. There was one possible issue here: .NET enums only support integral types as their underlying type...

...or do they? Obviously specifying something like enum Gender : char would not compile, but what about this:

public enum Gender
{
    Female = 'f',
    Male = 'm'
}


This is actually valid. But what happens then? Is char now the underlying type for this enum? I ran the following code to find out:

MessageBox.Show(string.Format("Enum underlying type: {0}", Enum.GetUnderlyingType(typeof(Gender)).Name));

As it turns out the underlying type is Int32. Hmmm, would this actually work? What value is it actually storing?
I expanded my code to the following:

MessageBox.Show(string.Format("Enum underlying type: {0}\r\n" +
    "Enum value as int: {1}\r\n" +
    "Enum value as char: {2}",
    Enum.GetUnderlyingType(typeof(Gender)).Name,
    ((int)_person.Gender).ToString(),
    ((char)_person.Gender).ToString()
));

Now it would show me the int value and the char value casts for the selected gender value. In this case _person.Gender was Gender.Male. The actual int that was stored was 109, which is the character code of 'm'. The ToString method on the char cast returned the 'm' char.

That's just great. Now I can store my Gender enum values by casting them to char at some point. But how about loading it back into the Gender enum? As it turns out, you are allowed to cast both ways, so something like this works:

_person.Gender = (Gender)genderCharTextBox.Text[0];

Duality rocks, man! So now we can store enum types without an underlying model with ease. Simple but powerful stuff, which I hope helps you out.
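To make the round trip explicit, here's a minimal sketch of the two conversions (the helper names ToDbChar and FromDbChar are mine, not part of any framework), including a fallback for unexpected characters coming from the database:

```csharp
using System;

public enum Gender
{
    Female = 'f',
    Male = 'm'
}

public static class GenderStore
{
    // What gets written to the char(1) column.
    public static char ToDbChar(Gender gender)
    {
        return (char)gender;
    }

    // What comes back from the column; falls back for unknown values.
    public static Gender FromDbChar(char value)
    {
        Gender gender = (Gender)value;
        return Enum.IsDefined(typeof(Gender), gender) ? gender : Gender.Female;
    }

    public static void Main()
    {
        Console.WriteLine(ToDbChar(Gender.Male));   // the char for the column
        Console.WriteLine(FromDbChar('f'));         // back to the enum
        Console.WriteLine((int)Gender.Male);        // the Int32 actually stored
    }
}
```

The Enum.IsDefined check is just a safety net; the casts themselves are all the language requires.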

I've uploaded a simple project, which I used to test all this, here.

Wednesday, September 30, 2009

Pondering on distributed computing in .NET

After working on a highly scalable application for almost a year now, using Silverlight, WCF, Entity Framework and Sql Server and after reading and replying to numerous questions on these topics and seeing the struggle developers are having with these new ways of doing things, I started thinking about how this could be made easier.

What I figured was that I don't really care about what part of my software runs where, which is basically a 'cloud' way of looking at things. So why not do everything in the cloud then? Well, there are lots of reasons not to, including costs and customer trust, but that's not what this post is about. It boils down to us not wanting to do everything in the cloud just yet.

That thought however, led to something which is not very new, but hasn't really been very popular either. I guess object messaging describes it for what it is. Basically what I'm saying is that objects may or may not know each other, but any object can send out messages to other objects, whether sent directly or through a message dispatcher. Some (but maybe not all) objects would be able to receive messages.
I wanted to figure out why it is not very popular today and instead of just reading hundreds of webpages trying to find out why, I thought a cool way of gaining this understanding would be to simply design and build my own messaging framework.

An object can be anywhere
The first thing to do is to make it clear to myself what the concept is going to be. My first statement is that an object can be anywhere. It can be on the same thread, in a different thread, on the same machine or on a different machine; it really shouldn't matter. All I want to do is tell the framework to send message X to object Y and it should be taken care of.
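To make that concrete, here is a rough sketch of what the contract might look like. All names here are hypothetical, and this dispatcher only knows about local objects, which is exactly the part that would have to become location-transparent:

```csharp
using System;
using System.Collections.Generic;

// A message addressed to some object, wherever it lives.
public class Message
{
    public string TargetId { get; set; }
    public object Body { get; set; }
}

public interface IMessageReceiver
{
    string Id { get; }
    void Receive(Message message);
}

// The framework facade: "send message X to object Y" and forget about location.
public class InProcessDispatcher
{
    private readonly Dictionary<string, IMessageReceiver> _receivers =
        new Dictionary<string, IMessageReceiver>();

    public void Register(IMessageReceiver receiver)
    {
        _receivers[receiver.Id] = receiver;
    }

    public void Send(string targetId, object body)
    {
        // A real implementation would route to another thread, process or
        // machine here; this one can only reach objects it was told about.
        IMessageReceiver receiver;
        if (_receivers.TryGetValue(targetId, out receiver))
        {
            receiver.Receive(new Message { TargetId = targetId, Body = body });
        }
    }
}

public class EchoReceiver : IMessageReceiver
{
    public string Id { get { return "echo"; } }

    public void Receive(Message message)
    {
        Console.WriteLine("echo got: " + message.Body);
    }
}

public static class Program
{
    public static void Main()
    {
        var dispatcher = new InProcessDispatcher();
        dispatcher.Register(new EchoReceiver());
        dispatcher.Send("echo", "hello");
    }
}
```

The interesting design work starts when Send has to pick between in-process delivery, a WCF call, or a queue, without the caller noticing the difference.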

.NET == .NET == .NET
What I mean by this is that as a Microsoft fanboy I obviously wanted to build this in .NET (C#). I also want this to work on a wide variety of process types: whether it is a command line application, a Silverlight client, a service hosted in IIS or in Azure, or an NT service, shouldn't matter. It should all connect implicitly.

Challenges
This does bring along some technical challenges. For example, a Silverlight client can connect to a service perfectly well, but connecting to a Silverlight client in that same fashion is not going to work, so receiving messages there is going to be a challenge, especially if keeping it transparent comes into play.

Dealing with concurrency is another big thing that comes into play. All of a sudden everything that's not in my object is asynchronous, which can be a challenge. This brings a whole new way of developing objects. Whenever a message is received, do I need to stop work and check if it is relevant for what I'm doing right now, or should I just ignore it until I have time to deal with it? How do I keep track of what I've sent out in relation to what comes back in? These are all questions that all of a sudden need answering.

I'll be looking into this stuff in the near future, writing some code and hopefully sharing it with you. Please let me know your thoughts on the topic.

Thursday, September 24, 2009

Adventures while building a Silverlight Enterprise application part #23

Today we look into how collations in Sql Server bugged me while trying to get code generation to work on a database different from the one it was originally built on. We also look at how I solved the problem (with some cool C# and WPF code).

The problem context
Let's take a look at the big picture first, so we are all on the same page as to why I ran into this problem. In the application we are building we have multiple databases to store our LOB data in. Depending on someone's authorization, it's possible he or she has access to one or more of these databases (or sometimes only subsets of them). Because of this, we needed some central data store to hold information on security, on the complete installation, and on where each LOB database is and what data is in it. We call this data store the Repository and stated that any data that needs to cross LOB database boundaries should be in the Repository.

So far, so good. In the early stages of the project we built a WCF service, based on Entity Framework, that allows us to do any data operations on the LOB databases in a generic way. To achieve this we built a code generator to generate all the business classes we needed. The code generation process we use for the service is based on the fact that we use an automatically generated EF model. In other words, the EF model is a direct depiction of the SQL Server data model. Because of this we can extract metadata from SQL Server to feed our code generation process. To do this, our database guys built a script that extracts the metadata from SQL Server (using the system views) and inserts it into a separate database which we use to power our code generation.

Up to this point, still no problems. Now what we wanted to do was copy all the code we've built to access the LOB databases and from that build a WCF service to access our Repository, so I needed to generate business classes from the model as it is inside our Repository database. Normally we create and update our database models through a Sql Server 2005 project in Visual Studio 2008 and do schema compares.
However, data is added to this metadata by several people and in that case we tend to use a backup to distribute this data, which was exactly what I did when preparing to add the metadata from our Repository.

The problem
After restoring the metadata database is when the problems started. I needed to alter the stored procedure written to extract the metadata from a database, changing the source database name to make it access the Repository database. As soon as I ran the alter procedure statement to update the stored procedure, I got several collation conflict errors. It turned out that the backup I had restored was created with a different collation from the one I normally use. To be more specific, the Repository database was created with Latin1_General_CI_AS (my local default), whereas the metadata database was created with SQL_Latin1_General_CP1_CI_AS.

To solve this issue I would need to go through the columns one by one and change their collation. Because the only data accessed from the Repository is in system views and you can't (and don't want to) change their collations, I had to change the collations on the metadata database. I didn't feel much like doing this by hand, and because I foresee this happening more often, I figured I might as well write a small tool to handle it for me. Here is a screenshot of what it looks like:
Basically it allows you to type in a Sql Instance name, after which it retrieves the database list from that instance and also it retrieves all available collations to fill the two comboboxes. As soon as you select a database and at least a source collation, you can then click the Find Columns button to retrieve any columns with the source collation. After you've selected a target collation and unchecked whatever columns you do not want to change, you can then click the Change Collations button and it will trigger an alter table / alter column query to try and do that for you.

I guess what gives this tool its flair is the use of some queries directly through ADO.NET. Here is a snippet of code with these queries:
private const string GetAllDatabasesSqlCommand = "select name from sys.databases";
private const string GetAllCollationsSqlCommand = "select name from ::fn_helpcollations()";
private const string GetAllColumnNamesSqlCommand = "select o.name, c.name, t.name, c.max_length, c.is_nullable"
+ " from sys.columns c"
+ " left join sys.objects o on c.object_id=o.object_id"
+ " left join sys.types t on c.system_type_id = t.system_type_id"
+ " where c.collation_name=@collation_name";

The first query retrieves all database names. This obviously only works when connected to the master database.
The second query returns all collation names. This can be done on any database (it should always return the same list).
The final query finds all columns that use a specific collation. The result set includes the table name, the column name, the type name, the max length and whether or not a column can contain NULL. This is all the information needed to generate an alter table alter column statement which changes only the collation.
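Generating the statement from those five columns is mostly string work. Here's a sketch of how it might look (the method and its names are mine, not the actual tool's code); note that sys.columns reports max_length in bytes, so nvarchar/nchar lengths need halving, and -1 means (max):

```csharp
using System;

public static class CollationTool
{
    // Builds an ALTER TABLE ... ALTER COLUMN statement that changes
    // only the collation, keeping type, length and nullability intact.
    public static string BuildAlterStatement(
        string table, string column, string typeName,
        int maxLength, bool isNullable, string targetCollation)
    {
        // max_length is in bytes; nvarchar/nchar store two bytes per
        // character, and -1 stands for (max).
        string length = maxLength == -1
            ? "max"
            : (typeName == "nvarchar" || typeName == "nchar"
                ? (maxLength / 2).ToString()
                : maxLength.ToString());

        return string.Format(
            "alter table [{0}] alter column [{1}] {2}({3}) collate {4} {5}",
            table, column, typeName, length, targetCollation,
            isNullable ? "null" : "not null");
    }

    public static void Main()
    {
        Console.WriteLine(BuildAlterStatement(
            "Person", "LastName", "nvarchar", 100, true,
            "Latin1_General_CI_AS"));
    }
}
```

A production version would also need to drop and recreate any constraints or indexes that reference the column, which is where a tool like this earns its keep.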

Something I never used before was the SqlConnectionStringBuilder class to dynamically create my connection string. Here is the code I used for that:

SqlConnectionStringBuilder connectionStringBuilder = new SqlConnectionStringBuilder();
connectionStringBuilder.DataSource = hostnameTextBox.Text;
connectionStringBuilder.InitialCatalog = MasterDatabaseName;
connectionStringBuilder.IntegratedSecurity = true;

As you can see, using this class is really straight forward. Another thing I never used before was the ChangeDatabase method on the SqlConnection class. This came in handy as I could simply keep one connection throughout the flow of the application and switch databases quickly.

I've uploaded the code here for your viewing pleasure. Note that this tool was slammed together in a hurry, so most of the code is in the code-behind for the main window and isn't very well written; however, there are some nice concepts in there as well.

I hoped you enjoyed yourself again. I know I have.

Wednesday, September 16, 2009

Adventures while building a Silverlight Enterprise application part #22

Yesterday I needed to check the available style keys in our main app.xaml file and see which are no longer needed. As there are currently 66 style keys in that file and it's growing, I didn't feel much like taking each key and searching through our source code by hand. Time to build a small tool. This article describes how I built it.

Requirements

The tool needs to be able to search a directory tree for files with a certain extension (.xaml*) for a pattern or literal string. Before it does this it also needs to be able to open a .xaml file and retrieve any style elements so it can then read their keys.

To achieve this, two classes are needed. One class will read a .xaml file and get all keys from style elements and the other class will search through the file system for files containing these keys.

Building it
I'll spare the obvious details and dive right into the highlights. To read the keys from style elements in basically any xml document, I used LinqToXml. Here is the code I used:
private void LoadStyleKeysFromDocument()
{
    XNamespace winFxNamespace = "http://schemas.microsoft.com/winfx/2006/xaml";
    XName keyAttributeName = winFxNamespace + "Key";

    var result = from node in _document.Descendants()
                 where node.Name.LocalName.Equals("Style")
                 select node;

    var distinctResult = result.Distinct();

    StyleKeys.Clear();
    foreach (XElement styleElement in distinctResult)
    {
        StyleKeys.Add(styleElement.Attributes(keyAttributeName).First().Value);
    }
}


The first two lines make an XName object that is needed to include the xml namespace when retrieving the x:Key from the element. Note that this works independently from the prefix (x) as it was assigned in the document. This means that this code will still work if someone would decide to change the prefix on this namespace.

Next, a Linq query is used to retrieve any nodes in the document that have the name Style. The query is followed by a statement to make sure I only get unique results.

Finally I fill the StyleKeys collection with any key attributes value found inside an element in the query result.

Searching for a particular pattern in the file system is done in the following method:
public void Search(string pattern, string rootFolder, string fileFilter)
{
    // Get all files matching the filter
    string[] fileNames = Directory.GetFiles(rootFolder, fileFilter, SearchOption.AllDirectories);
    // For each file
    foreach (string fileName in fileNames)
    {
        // Open file
        string fileData = File.ReadAllText(fileName);
        // Match pattern
        MatchCollection matches = Regex.Matches(fileData, pattern);
        // Register count
        PatternSearchResultEntry resultEntry = new PatternSearchResultEntry()
        {
            FileName = fileName,
            HitCount = matches.Count,
            Pattern = pattern
        };
        Results.Add(resultEntry);
    }
}


As you can see, the first line gets all the filenames that are anywhere in the directory hierarchy below the supplied root folder.
Looping through the filenames, I simply load all the text from each file and use the Regex class to count the number of hits. By doing so, this code is also very useful to find hit counts for other patterns.
All the results are added to a collection of a struct called PatternSearchResultEntry.

So that's the business end of things. Obviously we need a user interface of some sort.
I chose a WPF interface, because I like data binding.
To retrieve user input for the style file and the folder to look in, I built a class called BindableString, which contains a Name and a Value and implements the INotifyPropertyChanged interface. It allows me to create instances of these and bind them to my UI. This way I have a central point to access this information without having to worry about updates, etc.
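The post doesn't show BindableString itself, but from the description it would boil down to something like this sketch:

```csharp
using System;
using System.ComponentModel;

// A named string value the UI can bind to; raises PropertyChanged
// so bound controls pick up updates automatically.
public class BindableString : INotifyPropertyChanged
{
    private string _value;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Name { get; set; }

    public string Value
    {
        get { return _value; }
        set
        {
            if (_value != value)
            {
                _value = value;
                var handler = PropertyChanged;
                if (handler != null)
                {
                    handler(this, new PropertyChangedEventArgs("Value"));
                }
            }
        }
    }
}

public static class Program
{
    public static void Main()
    {
        var setting = new BindableString { Name = "StylesFilePath" };
        setting.PropertyChanged +=
            (sender, e) => Console.WriteLine("changed: " + e.PropertyName);
        setting.Value = "App.xaml";
    }
}
```

Binding a TextBox's Text to Value with Mode=TwoWay then keeps the instance and the UI in sync without any extra plumbing.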

To do the actual work I wrote the following Click event for a button:
private void analyseStyleUsageButton_Click(object sender, RoutedEventArgs e)
{
    XamlStyleKeyReader reader = new XamlStyleKeyReader();
    reader.ReadXamlFile(_stylesFilePath.Value);

    PatternSearch patternSearch = new PatternSearch();
    foreach (string styleKey in reader.StyleKeys)
    {
        patternSearch.Search(styleKey, _searchRootDirectory.Value, "*.xaml");
    }

    CollectionView view = (CollectionView)CollectionViewSource.GetDefaultView(
        patternSearch.Results);
    if (view.CanGroup)
    {
        view.GroupDescriptions.Add(new PropertyGroupDescription("Pattern"));
    }
    analyseStyleUsageDataGrid.ItemsSource = view.Groups;
}

It basically instantiates the XamlStyleKeyReader class and loads the style file into it. Next it instantiates the PatternSearch class and kicks off a search for each style key available in the XamlStyleKeyReader.

The code after that groups the results based on the search pattern. The reason I did it this way is that it is not very transparent to bind to the result of a Linq group. Binding to this is easy once you know how. As you can see, the items source for the datagrid that displays my results is actually the collection of groups.
This collection is declared as containing plain objects, which isn't very helpful; however, diving into the API documentation reveals that it contains instances of the CollectionViewGroup class. From that class I need the name (obviously) and a hit count, which of course it doesn't have.
To get a hit count I bound to the Items property of the group, which contains all the items that belong to that group, and then I use a value converter to get the total hit count for that group.
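The summing the converter does is simple enough. Sketched here as a plain method over the group's Items (in the tool this logic would live inside an IValueConverter's Convert method, and PatternSearchResultEntry is reconstructed from the description above):

```csharp
using System;
using System.Collections;

public struct PatternSearchResultEntry
{
    public string FileName;
    public string Pattern;
    public int HitCount;
}

public static class GroupHitCount
{
    // Sums the HitCount of every PatternSearchResultEntry in a group's
    // Items collection; the binding passes that collection in as the value.
    public static int Sum(IEnumerable items)
    {
        int total = 0;
        foreach (object item in items)
        {
            total += ((PatternSearchResultEntry)item).HitCount;
        }
        return total;
    }
}

public static class Program
{
    public static void Main()
    {
        var items = new[]
        {
            new PatternSearchResultEntry { FileName = "A.xaml", HitCount = 2 },
            new PatternSearchResultEntry { FileName = "B.xaml", HitCount = 3 }
        };
        Console.WriteLine(GroupHitCount.Sum(items));
    }
}
```

The real converter would wrap this in Convert, return the total, and leave ConvertBack unimplemented.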

I've uploaded the complete source for this tool here.

Be aware that this tool is far from finished. I would like to save the last settings and have some progress indication, which means moving the search code to its own thread. Styling of the UI can be improved, etc., etc.

I do hope you find this code useful and you've learned something along the way.

Friday, September 11, 2009

Why you want code reviews

Lately there has been some discussion in our team on whether or not code reviews are needed, or even practical, and why you should want them in your development team. This may sound like obvious knowledge, but unfortunately it appears that some developers have an aversion to code reviews, whether it's their code being reviewed or them reviewing someone else's code. In this article we'll look into some of the reasons that make developers feel this way, as well as some of the benefits of having code reviews. Along the way, we will also give some attention to how to prevent bad experiences with code reviews.

Why not?
As I mentioned in the introduction, there is still a fairly large group of developers who have an aversion to code reviews. A lot of the time, this is because they feel that they did the best job they could on the code they have built, given the circumstances, and they don't need some coworker grinding their hard labor to pieces and giving them a hard time.

If that's the case, something is obviously wrong. If this feeling is based on previous experience, then they have experienced a bad code review by some reviewer who clearly missed the point.
Others may not want to be at the end of the table where they have to tell a coworker they did a bad job. Again, if you feel like this, you missed the point.
It is NEVER the goal to find something to be able to accuse the developer of bad work.

A third reason why a developer may not want code reviews is that they think it will be boring to do a review. If that's you, you guessed it, you've missed the point as well.

The goals of a code review
So what are we trying to accomplish through a code review? There are several very important goals one could target:
  1. Code quality
  2. Code consistency
  3. Architecture consistency
  4. Awareness
  5. Knowledge sharing
Code quality
This may seem like the obvious first reason and it causes the most aversion. Sure, a developer tries to do the best job he or she can, but we are all human, so we make mistakes. The goal here is not to bang on someone's bad pieces of code, but to prevent bugs from happening. Bugs cause a lot of work for a software company. If they get caught by the testing department, they have to generate a case file and communicate it back to the developers, who in turn have to do rework and system test it. It then gets handed back to testing and they have to retest.
If a bug is discovered by a user, things get even worse. They will file a case with the support desk, who has to triage it, and then it has to be planned for a release, generating a whole bunch of work. So in the end it is in everyone's best interest to prevent bugs from happening, which is exactly what the first goal of a code review is all about.

Code consistency
To keep code maintainable it is imperative that chosen solutions are consistent with each other. However, it is hard to know everything about a large code base that keeps expanding and changing while a team of developers is working on it. By having code reviews in place, a large part of the inconsistency can be caught and fixed before the code actually gets to the testing department.

Architecture consistency
From experience I know it is hard to keep the architecture that was planned consistent throughout the system. Developers often tend to do things their way, especially when they either have difficulties understanding the architecture or how to solve a particular problem in the architecture, or because they run into time constraints.
To make sure the architecture at least stays consistent throughout the system, it is important to look at how certain functionality is built into the system. If a developer did something that affects the consistency, but brings a good argument for doing it that way, it may result in changes to the overall architecture, because chances are the architecture wasn't fitting the application. Again, this is important to the whole team, as others may end up feeling the same pains the first developer encountered if it doesn't get addressed.

Awareness
This may sound like a very generic term, so let me explain. Because we work in a rapidly changing field, there is no way to keep up with everything that's going on, so at times we may miss information that would have helped us solve a particular problem more easily or more efficiently, or prevent some problem in the future. To keep us up to speed, a reviewer may spot some of these issues and point out other possible solutions to a problem of which we might not have been aware.
Looking at this from a reviewer's point of view, you might actually see some features you were not aware of.

Knowledge sharing
This is a more obvious benefit. Not only does the developer get a chance to learn from the reviewer, but it also works the other way around. And this doesn't only apply to technical knowledge; it also goes for specific knowledge about the application that's being reviewed, which is an important business benefit.

Tips on code reviews
To make code reviews a pleasant experience, both the developer and the reviewer need to keep an open mind. That doesn't mean you can't defend something you think is good work, or that you can't say it when you see something you feel should be improved.

As a developer, look at this as an opportunity to showcase your code to one of your peers. Reviewers, ask questions. They allow the developer to explain the choices he or she made in the process. Neither party should be pushing something without a good explanation. If you can't come to an agreement on an issue, involve a third party you both trust and stick to that decision. Also, don't take it personally; opinions can differ and there are more ways than one to reach a goal.

To take full advantage of code reviews, there should be a group of reviewers, who should be experienced developers both in general and in the technology used. Also make sure the right reviewers are paired with the right developers. It's likely that a young but bright developer could review a more senior developer, and technically this may make sense. However, in the 'food chain' this might cause problems, so it's better to avoid it. On the other end of the spectrum, a young developer with less experience in the technology used might be intimidated by a review from the most senior developers on the team. This might lead to the young developer not speaking his or her mind and just following the senior developers. Although this might get the junior developer a long way, it will not teach him or her to have a mind of their own.
For educational purposes it would be best to have a mid-level developer review a junior developer and have senior developers review mid-level developers. This way junior developers will be less intimidated and mid-level developers can learn review skills from their senior peers.

Conclusion
I agree that there is a lot to think about when implementing code reviews, however I do feel it is worth the effort. A team doing good code reviews will get great benefit from them. Consider it and if you feel like it, please share your experiences with us.

Sunday, September 6, 2009

Adventures while building a Silverlight Enterprise application part #21

I came back from a short vacation today and read this post from Tim Heuer. I then realized that in all twenty parts I've written on our adventure of building a Silverlight Enterprise application, I've actually never elaborated about why and how we chose Silverlight and how we (or at least I) learned to use Silverlight technology. So now seems as good a time as any.

Let's be a lazy author and just follow the questions that Tim put up there.

Decision resources
Obviously when you go about to start a project you have to decide on what you are going to use and why. If you are looking at Silverlight, what factors into your decision?

In our case, one of the most important goals was to move from a client-server based solution with a (very) fat client to a multi-tier solution with a web-based client. The first choice was to go with DotNetNuke, as some experience was already available; however, as you may very well know, this uses regular ASP.NET Ajax-like solutions, which we found very limiting from a UI perspective. At the time, Silverlight 2 was just about to come out of the Beta stage and we felt we should at least take a good hard look at it. We needed technology that was relatively easy to implement yet flexible enough to make for a very usable UI.

So our major decision factors were:
  1. UI possibilities / user friendliness
  2. Development speed
  3. Flexibility
Why did your company choose to adopt Silverlight (or choose not to)? Was there another technology that was chosen to be better? Why/why not?

We chose to go with Silverlight because it checked the boxes on the above list. We felt it was easier to build a good, user-friendly UI with Silverlight, in comparison with ASP.NET / Ajax. We also found during our prototyping / proof-of-concept phase that building a UI with Silverlight takes less time, because of the use of XAML and C# only. No HTML or JavaScript code made life a lot easier for us.

Another elementary aspect of Silverlight proved valuable to us. It was the fact that you can embed controls inside other controls and work with templating. In combination with declarative databinding this became an important topic for us during decision making.

Again we were comparing to DotNetNuke, which had some advantages over Silverlight as well. One of the things considered an advantage of DotNetNuke is the fact that you can access the database directly from your client-side code. In hindsight I'm glad that we can't do this with Silverlight and just have all the logic in a service layer. This has proved much more flexible for a lot of scenarios we have already run into.

What is the most important thing in deciding if Silverlight is right? Feature set? Existing technologies? Rapid development? Other reasons?

At the time, Silverlight 2 was not exactly existing technology, as it was just coming out of Beta. This was a concern for us, but the fact that this was Silverlight 2, that there was already an active and growing community, and that Microsoft had plans for updates laid out already, pulled us across. The feature set was definitely important. The way databinding is implemented is important to us, as we are building a data-rich application. Also the controls available at the time, both by default and through the Silverlight Toolkit, and the fact that third-party companies literally jumped at Silverlight 2, helped ease the choice even further.

Rapid development was obviously something that needed proof, as no real-world examples were available at the time. We built prototypes of several scales and found that it was indeed fast to build something with Silverlight 2, even without prior experience with the technology.

Learning resources
On learning – how do you best learn? Do you prefer “atomic” samples? These are the ones that you can just pop in and figure out a task-based situation (i.e. how do I open a file in Silverlight). Or do you prefer more of a “lesson plan” approach to things? This would be a series based on a task (i.e. Build a Media Player in Silverlight).

For me personally the lesson plan approach was new. I did part of the Digg API sample series, but after several steps I lost interest, mainly because some of the topics were not of interest at the time and some were just too easy for me. This makes me lean towards atomic samples, although as a starting place I can see why people like the lesson plan format. Later on, atomic samples are obviously the way to go. They have the advantage of showing only the bare minimum of implementation, without the burden of having to go through previous steps to understand code that was already there.

On medium – in either types of these learning paths, what is your preference? Video? Written step-by-step guides? Labs?
When you are completely new to a topic, watching a video is great. It takes away some of the effort of having to follow along with an article, and you are simply shown how things work. This also works well for more in-depth topics, with less code and more explanation.

If I'm looking for a reference, or if I need to actually start coding on something, I prefer written material, as it is a lot easier to scan through and find that bit of code or explanation you need, instead of plowing through twenty minutes of video trying to locate the five-second shot of the code you want.

I've tried labs in the past and found that they are not my thing. I tend to get bored very quickly, because the level of instruction usually makes things too easy. Also, setting up to do a lab feels like a lot of work most of the time.

On topics – what are the top 3 topics you expect when learning a new technology? How do you on-ramp yourself when you know nothing about it? Do you expect to learn the tools first? Or jump right in to data access?

This is actually a very tough question to answer. It really depends on what a technology looks like. When looking at Silverlight, I felt I already knew enough about the most important tool: Visual Studio 2008. Of course Blend became part of the toolset I use, and over time I did watch videos and read tutorials on Blend to learn some tricks here and there. So for me, while learning Silverlight, diving right into the important topics was very helpful. I do feel it is important to include tips on how to use the tools where this matters to the topic at hand.

For example, if you are talking about databinding and data access, it is handy to demonstrate how to do this in Blend, including using sample data. If something like this becomes too elaborate, at least make sure you point out to the audience that this is a handy feature they should know about, and tell them where they can find more information about it.

Other notes
Something that bugged me for quite some time when getting started with Silverlight was the number of resources that were available for Silverlight 2 Beta 1 and Beta 2. While these were a great help in winning over management to go with Silverlight, they made life a lot tougher when trying to find useful resources for Silverlight 2 RTM, as a lot of the time the important bits would not work in the RTM.

Something else I found is that, although silverlight.net is a great resource when learning Silverlight, it is a bit of a mess. Videos are not categorized in a way that makes them easy to find, and there is no proper search function on the site. I also think it would be very helpful to have a search engine that can filter Silverlight content by version. If I'm working on Silverlight 3, I might not be interested in Silverlight 1.1 and 2 content. Or at times I might not care whether it's Silverlight 2 or 3 content, because they are likely to be compatible anyway.

As a final note, I would like to encourage anyone who hasn't already done so: please go to Tim Heuer's article and comment. Or you can comment on this article as well.