Wednesday, January 27, 2010

Open/Closed Principle

In general, it is more desirable to write new code in an existing codebase than it is to change existing code. This is especially true if you are unfamiliar with the codebase, or if it has little to no test coverage to make refactoring safe. Modifying an existing codebase increases the risk of breaking existing functionality and adds to testing time, because we must now regression test every part of the system affected by our change.

Wouldn’t it be nice if there were some way for us to modify existing functionality by writing new code, leaving the existing codebase untouched (ok, relatively untouched)? Enter the Open/Closed Principle (OCP). The according-to-Hoyle definition states that "software entities should be open for extension, but closed for modification". Sweet! But what does that mean?

It means we want to design our applications in such a way that we can add new functionality or behavior without having to modify existing code. This helps us reduce code fragility, where a single change ripples through other areas of our application, forcing us to modify larger portions of existing code. That ripple, again, increases our testing time and reduces the overall maintainability of the code.

So let’s look at a very simple example of how adhering to the Open/Closed Principle can help improve the value and quality of our code (I want to stay focused here on the OCP concept and not get bogged down in the other design issues with this code, e.g., its disregard for the Single Responsibility Principle). Here is the ubiquitous person class:

    public class Person
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }

        public bool IsValid
        {
            get
            {
                return (!string.IsNullOrEmpty(FirstName) &&
                        FirstName.Length <= 30);
            }
        }
    }


We have a person class with two properties and one validation rule: the person must have a first name and it cannot be more than thirty characters. This works perfectly until the business rules change. Now we must add a new validation rule for the last name as well.

In order to apply the new business rules, we can either continue growing the IsValid property or take the time to restructure the code so that this kind of change will not burn us again.
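
For contrast, here is what the first option looks like (assuming the last name must follow the same rules as the first). Every new rule lands inside the Person class itself, which is exactly the modification we want to avoid:

    public bool IsValid
    {
        get
        {
            // Each new business rule means editing, and regression
            // testing, existing code.
            return (!string.IsNullOrEmpty(FirstName) &&
                    FirstName.Length <= 30 &&
                    !string.IsNullOrEmpty(LastName) &&
                    LastName.Length <= 30);
        }
    }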

The second option is more interesting. First, we will refactor our person class and introduce a validator class:


    public class Person
    {
        private IValidator validator;

        public Person(IValidator validator)
        {
            this.validator = validator;
        }

        public string FirstName { get; set; }
        public string LastName { get; set; }

        public bool IsValid
        {
            get
            {
                return validator.IsValid(this);
            }
        }
    }


Now that we have introduced the concept of a validator, we can create validation classes to accommodate any type of business rule without having to modify our person class. Here is a validator that maintains the original functionality:

    public interface IValidator
    {
        bool IsValid(Person person);
    }

    public class PersonFirstNameValidator : IValidator
    {
        public bool IsValid(Person person)
        {
            return (!string.IsNullOrEmpty(person.FirstName) &&
                    person.FirstName.Length <= 30);
        }
    }

And here is the validator that picks up the new business rules:


    public class PersonFirstAndLastNameValidator : IValidator
    {
        public bool IsValid(Person person)
        {
            return (IsPropertyValid(person.FirstName) &&
                    IsPropertyValid(person.LastName));
        }

        private bool IsPropertyValid(string value)
        {
            return (!string.IsNullOrEmpty(value) &&
                    value.Length <= 30);
        }
    }


Now our code is more resilient to the volatility of changing business rules. It also gives us the flexibility to use different business rules based on context or domain requirements. Our class is open for extension (we can change its behavior) but closed for modification (we do so without touching its existing code).
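
To make the payoff concrete, here is a quick usage sketch. Choosing a different set of business rules is now a one-line decision at the construction site, and Person itself never changes:

    // Swapping rule sets means swapping validators, not editing Person.
    var person = new Person(new PersonFirstAndLastNameValidator())
    {
        FirstName = "Ada",
        LastName = "Lovelace"
    };

    bool valid = person.IsValid;   // runs the first-and-last-name rules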

Wednesday, January 20, 2010

The Nature of Language

After my last post, “How Fluent is too Fluent?”, I began really thinking about the nature of human language and how it relates to computer programming languages.

Computer languages at their core only really perform one task – they convert information to and from ones and zeros. But human language is much more. It allows us to be expressive and nuanced. It allows us to share ideas, state and clarify positions, record history, express and elicit emotions. It is a call to action and a way to categorize meaning. It is a speech, a sermon, a homily, a parable, a soliloquy. It is a poem, a joke, a riddle, a limerick. It is an oath, a vow, a greeting, a farewell, a command, a question. It is every thought that has ever been conceived. Can or should coding languages ever reach that apotheosis?

Ultimately, it is up to the designer to decide the level of “humanity” an API contains, but consider this - as professionals, one of our responsibilities is to ensure that our code communicates effectively with other developers. Does code that reads like prose accomplish this? Does it better convey our intent? Can it help reduce the intellectual friction that poorly written code causes? If we could write code the way we think, and not the way the compiler thinks, could we focus more on the solution and less on the medium? Or are we just a vehicle to the cold, concrete bleakness of ones and zeros?

A computer is only raw materials without a human being to interact with it. Maybe it is the reason nature allowed us to be created in the first place. “Why are we here?” - ones and zeros.

Wednesday, January 13, 2010

How Fluent is too Fluent?

Fluent interfaces have become all the rage, and with good reason (although I prefer the adjective fluid to fluent). Their intent is to make code more “readable”. My question is, for whom? An entry-level developer who just finished their first “Hello World!” app? A senior developer who started off as a switch flipper on an Altair? My grandmother? This is a question I have wrestled with while developing my own fluent interfaces. Who is my target audience, and how far do I take it?

Let’s assume that we should gear our fluent API toward any developer who has basic knowledge of the language of our choosing. The next step is to decide how verbose our interface should be.

There are several fluent implementations that I use in my practice; I will show a couple as examples to illustrate the point.

Fluent Validation is a lightweight validation framework. You create a validation rule as follows:

    RuleFor(author => author.FirstName)
        .NotEmpty()
        .WithMessage("First name is a required field.")
        .And
        .Length(1, 30)
        .WithMessage("First name cannot exceed thirty characters.");

Here is a version for a non-existent validation framework that is more verbose, but reads somewhat more like an actual conversation or user story.


    Each<Author>
        .HasA(author => author.FirstName,
              "First name is a required field.")
        .ThatIs
        .NoLongerThan(30,
              "First name cannot exceed thirty characters.");

Is there any improvement in the level of communication between the first and second example?
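
As an aside, the plumbing behind either style is largely the same trick: every method (and every bit of property sugar like ThatIs) returns the builder so the sentence can keep flowing. Here is a rough sketch of what could sit behind my made-up Each<T> syntax; every name in it is hypothetical, since this framework does not exist:

    using System;
    using System.Collections.Generic;

    public class Each<T>
    {
        private readonly List<Func<T, string>> rules =
            new List<Func<T, string>>();
        private Func<T, string> currentProperty;

        // Records the target property and its "required" rule.
        public static Each<T> HasA(Func<T, string> property,
                                   string requiredMessage)
        {
            var builder = new Each<T>();
            builder.currentProperty = property;
            builder.rules.Add(instance =>
                string.IsNullOrEmpty(property(instance)) ? requiredMessage : null);
            return builder;
        }

        // Pure syntactic sugar: returns the builder so the chain reads on.
        public Each<T> ThatIs
        {
            get { return this; }
        }

        // Adds a maximum-length rule for the current property.
        public Each<T> NoLongerThan(int max, string tooLongMessage)
        {
            var property = currentProperty;
            rules.Add(instance =>
            {
                var value = property(instance);
                return (value != null && value.Length > max) ? tooLongMessage : null;
            });
            return this;
        }

        // Runs every recorded rule; yields the messages that failed.
        public IEnumerable<string> ErrorsFor(T instance)
        {
            foreach (var rule in rules)
            {
                var error = rule(instance);
                if (error != null)
                    yield return error;
            }
        }
    }

Notice how much machinery exists purely so the call site reads like a sentence.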

Here’s another example. We are all familiar with SQL statements such as:

    SELECT ID, Quantity, Price
    FROM dbo.Orders
    WHERE ID = 2

Here is a version that could exist in an O/RM framework:

    IWantThe(ID, Quantity, Price)
        .OfThe<Order>.Whose.ID.Is(2);

Does code that reads like prose communicate more effectively? Does it better convey our intent? Or, is this just a bunch of unnecessary keystrokes?

Wednesday, January 6, 2010

An Estimate is not a Commitment

In our quest to shake off the shackles of the past and leave waterfall permanently in our rearview mirrors, we sometimes forget the still-important task of estimating. Estimating, like any other skill, requires knowledge, wisdom, experience, and a lot of luck. In order to move toward better estimates, we need to identify the areas of our processes and behaviors that are repeatable and quantifiable.

I would liken this to a professional painter and the act of painting a room. He follows a well-defined and measurable process, and it is that process which, when put into practice, allows him to estimate how long it will take to properly paint the walls of a two-hundred-square-foot room versus a two-thousand-square-foot room.

So what behaviors and processes can we define as repeatable or predictable, such that a dollar amount can be assigned to them? Let’s list some of the steps in our process and see if any fit the bill: requirements gathering, writing user stories, design and architecture planning and development, unit and integration testing, domain identification and definition, database schema design, and QA.

After careful review, it may be that none of these things can be effectively quantified, partly because the process begins with a moving target: requirements gathering. Each project is as unique as the needs of our clients. Although there are similarities, and the potential for some overlap from project to project, the uniqueness of each is one of the factors that makes estimating effectively such a challenge. We must not be afraid to make our clients fully aware of this, and we should invite and encourage them to become integral to the process.

Once a client sees a number, it is no longer an estimate - it is a commitment. So when asked for an estimate, I respond with “two million dollars”. Usually I come in under budget.