Friday, November 14, 2008
This is the last post on http://brian.genisio.org. I have recently joined the developer community GeeksWithBlogs for my blog. I haven't transferred the domain over, since a lot of sites link to this blog and I don't want to break those links.
I was able to transfer my RSS feed, but if you are getting this post via RSS, then you will need to subscribe to my feedburner feed, and unsubscribe from this feed.
My new blog is: http://HouseOfBilz.com
My RSS feed: http://feeds.feedburner.com/genisio
Tuesday, November 11, 2008
Although Microsoft will claim that it is "not possible to have a memory leak in managed code", most seasoned .NET developers will laugh at that statement. It turns out that it is very easy to leak memory -- just keep a reference to an object around longer than you intended, and the garbage collector can never reclaim it. There are at least two tools on the market designed specifically to seek out memory leaks of this kind (SciTech and ANTS).
The most common case of this happens with events in C#. Take the following example:
public class Observable
{
    public delegate void SomethingHappenedDelegate();
    public event SomethingHappenedDelegate SomethingHappened;

    // Rest of the class
}

public class Observer
{
    private readonly Observable _observable;

    public Observer(Observable observable)
    {
        _observable = observable;
        _observable.SomethingHappened += UhOh_SomethingHappened;
    }

    private void UhOh_SomethingHappened()
    {
        // Handle the event
    }
}
In this example, the Observer class hooks an event on the Observable class during construction. Because of the way events work in C#, the Observable object now holds a reference to the Observer. As a result, the Observer will be alive (at least) as long as the Observable is alive. For this reason, the following method will cause a memory leak:
public void LeakAnObserver()
{
    var observer = new Observer(_observable);
}
In most cases, the Observer instance would be garbage collected once it went out of scope. Instead, because the Observable's event handler keeps it alive, we leak memory.
There is a pretty easy way to solve this: simply unhook the event when the Observer is disposed (the full "Dispose Pattern" is omitted for brevity).
public class Observer : IDisposable
{
    // Existing code

    public void Dispose()
    {
        _observable.SomethingHappened -= UhOh_SomethingHappened;
    }
}
Great! Now, as long as we dispose the Observer, all references will be removed and the object will get garbage collected. Unfortunately, it is VERY easy to forget to call the Dispose method. I want to write some tests to make sure that these objects are garbage collected.
This is a tall order to fill. Having a reference to the object will cause it to stay alive, so how do you ask an object if it is alive without actually holding a reference to it? This is where the WeakReference class comes in. It is a magical class that keeps a reference to an object without preventing the garbage collector from collecting it. I wrote the following class to help me monitor an object and test whether it is still alive:
public class LeakMonitor<T>
{
    private readonly WeakReference _reference;

    public LeakMonitor(T itemToWatch)
    {
        _reference = new WeakReference(itemToWatch);
    }

    public bool ItemIsAlive()
    {
        // Force a full collection first, so IsAlive reflects true reachability
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        return _reference.IsAlive;
    }

    public T Item
    {
        get { return (T)_reference.Target; }
    }
}
Here are two examples of tests that illustrate the use of LeakMonitor. These are over-simplified unit test examples for this blog post, but you can see how this can be extended to integration and functional tests to verify that inner objects are not leaked. Be creative!
[TestFixture]
public class MemTests
{
    private Observable _observable;

    [SetUp]
    public virtual void SetUp()
    {
        _observable = new Observable();
    }

    [Test]
    public void Test_That_Observer_Leaks()
    {
        var monitor = new LeakMonitor<Observer>(LeakMemory());
        Assert.IsTrue(monitor.ItemIsAlive());
    }

    [Test]
    public void Test_That_Disposing_Observer_Does_Not_Leak()
    {
        var monitor = new LeakMonitor<Observer>(LeakMemory());
        monitor.Item.Dispose();
        Assert.IsFalse(monitor.ItemIsAlive());
    }

    private Observer LeakMemory()
    {
        return new Observer(_observable);
    }
}
Friday, November 7, 2008
I will be speaking at GLUG.NET Lansing on November 20th, 2008. My topic will be a talk I have given once before -- Castle Active Record (Don't Get Good at a CRUDy Job). Thanks to Jeff McWherter for signing me up for this gig. I look forward to meeting those in the Lansing area.
Wednesday, October 29, 2008
I have been toying with functional programming a bit lately. I have been using lambdas and LINQ when they have made sense in my code. I downloaded the F# compiler tools and mucked around with them a tiny bit. I have read a few blogs that talk about functional programming concepts, and I have enjoyed the elegance of the paradigm, but I never really got into it much.
But then I sat in on an "Open Space" session where Scott Guthrie was talking. Most of it strayed from the standard "Open Space" format and was more of a Q&A, but I guess this happens sometimes when big names show up. Anyways, one thing he said really stuck with me. It may not be a new idea, but it really resonated with me... I just hadn't thought about it this way before.
He said that in functional programming, you are declaring WHAT you want to do, instead of HOW you want to do it. In other words, in a functional language, you might describe a pattern to select from a collection (WHAT). In a more procedural language, you would do the same thing with a "foreach" loop (HOW).
This distinction isn't just semantic. It is extremely important for parallel programming. A loop is very hard to run in parallel since the compiler has a difficult time determining side effects. When you describe what you want done in a functional language, your compiler/framework CAN add parallelism. You let your tools figure out how to do the work in a parallel way.
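To make the WHAT/HOW distinction concrete, here is a small C# sketch of my own (the numbers array and the even-number filter are just illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class WhatVsHow
{
    static void Main()
    {
        var numbers = new[] { 1, 2, 3, 4, 5, 6 };

        // HOW: the loop spells out every step of the mechanics.
        var evensHow = new List<int>();
        foreach (var n in numbers)
        {
            if (n % 2 == 0)
                evensHow.Add(n);
        }

        // WHAT: the query just declares the result we want; the
        // framework decides how to produce it.
        var evensWhat = numbers.Where(n => n % 2 == 0).ToList();

        // Because the query only declares intent, a framework such as
        // the Parallel Extensions CTP can run it in parallel for us:
        // var evensPar = numbers.AsParallel().Where(n => n % 2 == 0).ToList();

        Console.WriteLine(string.Join(", ",
            evensWhat.Select(n => n.ToString()).ToArray()));
        // prints: 2, 4, 6
    }
}
```

Both versions produce the same list, but only the declarative one leaves the execution strategy up to the tools.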
It became clear to me that I need to take functional programming more seriously. Our hardware is no longer increasing in clock speed. Instead, a growing number of processors increases our computing power. If we can't figure out how to spread the work over multiple workers, we will never be able to take advantage of that power.
Functional languages have the potential to execute in parallel much more than procedural languages. I am making a promise to myself that I will become much more proficient in the functional paradigm. It seems to be the responsible thing to do.
Monday, October 27, 2008
I just got back from the “Future of C#” talk at PDC by Anders Hejlsberg. This was a truly inspiring talk for a geek like me. C# is evolving into a much more dynamic language. I have always been a believer of strong typing… except when I’m not… and I have been wishing for something more dynamic (such as Duck Typing). In C# 4.0, we will be seeing some significant dynamic features.
In reality, the thing that has kept me away from using languages such as IronPython and IronRuby is their interoperability with strongly typed languages. I really believe in the concept of “The right language for the job”, but I hate the idea of sticking to that one language for the entire project. With the dynamic capabilities in C#, it will be MUCH easier to talk to Python or Ruby code. If I need to implement something really loosely (like a calculation engine), I will be able to jump into something loose. Then, when I want to work with that code in my more strongly typed environment, I will have that ability. The “Right Language for the Job” paradigm has just become much finer grained.
So here are the details.
The dynamic Keyword
First and foremost is the dynamic keyword. This is kind of like using the object keyword, but you are saying that all of your binding will happen at runtime. You will lose your IntelliSense, of course, but you will now be able to call methods that the compiler knows nothing about.
The neat thing about this is that you can make your statically defined classes be dynamic by implementing the IDynamicObject interface, which allows you to have access to the late binding calls.
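Based on what was shown at PDC, a call through a dynamic variable might look something like this (the variable name is mine, and the exact behavior could still change before release):

```csharp
using System;

class DynamicDemo
{
    static void Main()
    {
        // The compiler emits no static binding here; member lookup
        // happens at runtime against whatever the variable holds.
        dynamic anything = "hello";
        Console.WriteLine(anything.Length);   // prints: 5

        // The same call site now binds against an array at runtime.
        anything = new[] { 1, 2, 3 };
        Console.WriteLine(anything.Length);   // prints: 3
    }
}
```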
Named and Optional Parameters
Next is something that C++ has had and C# has needed for a long time – optional parameters. You can set defaults in your method declaration, and the caller doesn't need to specify those parameters. In addition, you can name the parameters at the call site. This is really great for readability… especially when you are passing a bool into a method and the call site gives no hint of what it means.
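Here is a sketch of how this is expected to look (the Connect method and its defaults are made up for illustration):

```csharp
using System;

class OptionalParamsDemo
{
    // Defaults let callers omit trailing arguments entirely.
    static void Connect(string host, int port = 80, bool useSsl = false)
    {
        Console.WriteLine("{0}:{1} ssl={2}", host, port, useSsl);
    }

    static void Main()
    {
        Connect("example.com");               // prints: example.com:80 ssl=False
        Connect("example.com", useSsl: true); // the named argument documents the bool
    }
}
```

Compare `Connect("example.com", useSsl: true)` with `Connect("example.com", 80, true)` -- the named version tells you exactly what that bool does.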
Better COM Interoperability
These previous features (Optional and Named Parameters) are really useful to add to the new COM Interoperability features. Basically, pairing the dynamic and parameter features, talking to COM controls looks very natural.
Covariance and Contravariance
Finally, but certainly not least, we are getting covariance and contravariance. This is something that has bugged me since I started with C#. Currently, if a method takes an IEnumerable&lt;BaseType&gt;, you can't pass an IEnumerable&lt;DerivedType&gt;. I hate having to convert the derived set to a base set just to pass it in. In C# 4.0, this will be fixed.
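Here is the kind of code this should enable (Animal and Dog are placeholder types of my own):

```csharp
using System;
using System.Collections.Generic;

class Animal { public virtual string Speak() { return "..."; } }
class Dog : Animal { public override string Speak() { return "Woof"; } }

class VarianceDemo
{
    static void SpeakAll(IEnumerable<Animal> animals)
    {
        foreach (var a in animals)
            Console.WriteLine(a.Speak());
    }

    static void Main()
    {
        var dogs = new List<Dog> { new Dog() };

        // Under C# 3.0 this call does not compile; with the covariant
        // IEnumerable<out T> in C# 4.0, it is expected to just work.
        SpeakAll(dogs);   // prints: Woof
    }
}
```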
I am hoping to see C# 4.0 soon. Better yet, I am hoping to see C# 4.0 in the bits we get tomorrow!
PDC has commenced. Herds of people flocked to the keynote, where the topic was infrastructure. It may not be the sexiest of topics, but it is certainly the direction Microsoft is moving with their business plan. Specifically, they talked about their new cloud OS, dubbed “Microsoft Azure”. Azure is going to be a scalable infrastructure for hosting cloud applications.
One of the most interesting parts of this talk for me was the actual coding demo. They have included an embedded Azure simulator for debugging. It will let you develop your cloud app in your IDE without uploading it to their servers.
Another important aspect of Azure is the ability to easily add more resources to your application. Whether it be ASP.NET or Silverlight, you will be able to run your app from their Azure cloud server and scale the resources as you see fit. Need to add resources for the Christmas season? No problem. In January, you can just reduce the resources assigned.
Deployment is another important aspect of the Azure system. It is very easy to deploy your application to their servers via a single upload. Because I am here at PDC, I will have access to the Azure server today. I think I will go sign up now!
Thursday, October 23, 2008
It's been over a year now since I started using TDD (Test Driven Development) as my primary development practice, and I wanted to reflect on what it has done for me professionally. In reality, the past year has been great for my professional career in many ways.
I started out in August of 2007 with what I THOUGHT was TDD. Sure, I wrote my tests before my code, but the philosophy behind it wasn't enough to be effective. It wasn't until I went to Boston for a 3-day seminar on TDD taught by Rob Myers of NetObjectives that I really understood the power and relevance of TDD. His challenge was simple -- Try it completely for 30 days. If you don't find the value in it, then move along and look for something else.
So, this is what I did. I spent 30 days practicing TDD the way Rob taught us, following this algorithm:
- Write a test
- Watch it fail
- Write the MINIMUM necessary to make it pass
- Watch it go green
- Refactor if necessary
To be honest, it was a real exercise in self control. I wanted to take shortcuts. I wanted to write some behavior while I was there, and write the tests afterwards. But I promised myself that I would stick through it and write all of my new code in this way.
The benefits were immediate and profound. My methods were smaller. My classes were cohesive. My design was more extensible. My code was more readable. My classes were loosely coupled. My units were testable, and my tests ran fast. It was amazing how quickly the prophecies of TDD came true.
After a year of this practice, I can honestly state that my code has fewer bugs. Moreover, when a bug is found in my code, I am able to write a new test immediately (thanks to the heightened testability of the code) that exercises the bug. Fixes happen quickly, and I have a great deal of confidence that a fix doesn't break something else. I certainly can't say that about my legacy non-TDD code.
It is funny. I often feel like a born-again evangelical when it comes to TDD. Like a wide-eyed Christian who is eager to spread "the good news" every time somebody has a personal problem, I am quick to suggest TDD whenever I hear somebody talk about a coding problem. I am not the first (nor the last) to liken TDD to religion. It is fitting:
"For Kent Beck so loved the developers, that he gave his most precious tool (TDD), that whosoever believeth in it should not write legacy code, but have everlasting code." -- Agile 3:16
All kidding aside, TDD has really changed my professional life. In the past year, I have met many colleagues who share my beliefs and there is a real community out there. I have become so passionate about the topic that I am even giving public talks on testing and TDD. Without TDD, I would probably be stuck on the plateau where I sat -- stagnant and stale. TDD was the kick in the ass I needed to grow as a developer.
Looking back, I couldn't be happier with my experience in Rob's class. He taught me what I was doing wrong, and helped me do it right. It takes someone who really KNOWS TDD to teach it to someone who doesn't. I would recommend this experience to any developer in a heartbeat.