I was blown away when I was first introduced to the KISS principle. At the time, I was frustrated with how overly complicated our enterprise software was. It was a rewrite of a legacy system, and our architect had single-handedly written a custom framework on top of the ASP.NET MVC framework, with the goal of making it more succinct and extensible.
The only problem was that everybody was so confused about how it worked that we lined up at the architect's door with questions on how to implement particular pieces of logic. I carried that bitter taste of over-engineering for a long time, and have rigorously asked myself, every time I make a design decision, whether my design is simple enough... until now.
Today, a very smart and capable colleague criticized a simple piece of my code for being over-engineered. It was a surprise. The problem was fairly simple, so I skipped things like an IoC container, unit testing and the like, implementing only a very simple MVC design, with the model-layer logic located in a separate project, using a single-class micro ORM (PetaPoco) as opposed to a full-blown ORM.
In my mind, things cannot get simpler than this. Well, technically they can: skipping the overhead of unit-testability, keeping all logic within a single project, using the built-in SqlDataReader instead of a micro ORM. However, those measures only reduce complexity marginally, while making future refactoring exponentially more difficult. In my book, those are bad trade-offs.
What I failed to recognize was that, you see, I have been writing code this way for many projects by now, and all those concepts and code structures are intuitive to me. Writing and reading code structured this way is as straightforward to me as code structured in the Simplest Possible Way (TM).
This was not the case for my colleague.
He has a different background than I do and has grown different habits. The "intuitive" project structure, the micro ORM, and the unit-testable code were all extra complexity to him. While he recognizes the value of those things, they are extra complexity with no potential return, given the simplicity of the project.
In the end, after some discussion with him and some mental struggle with myself, I got rid of most of the "fluffy" stuff. I also lost the bitter taste for the "over-engineering" architect; after all, the gigantic custom framework that I found so overly complicated was probably "simple enough" for him.
Lesson learned: simplicity, like anything, is subjective -- what is simple to one person may not be to another. And when working in a team, you have to consider the whole team when making design decisions.
Sharing my thoughts on software development
Monday, January 9, 2012
Friday, July 29, 2011
Visual Studio Website Project vs Web Application Project
There have probably been billions of articles written about this topic, and I am no expert who understands the full extent of how the two project types differ from each other. However, most of those articles are fairly old, and the year 2011 deserves a new article on this topic. :)
So if you are a developer working in a Microsoft shop, chances are you use at least one of the following: ASP.NET MVC, NuGet, Unity Application Block, unit tests, gated check-ins, custom build events/scripts.
If you use any of those consistently in your projects, it's probably best to stick with the Web Application Project: some of them (e.g. ASP.NET MVC) flat out cannot work with a Website Project; others would work, but require a non-typical setup that you have to investigate and assess.
Conclusion: in the year 2011, if you are not sure, use a Web Application Project.
Thursday, July 28, 2011
CoffeeScript global variable and @ keyword
I finally managed to pick up CoffeeScript by writing a small game using CoffeeScript and Canvas. Writing code with CoffeeScript is a very fluent experience; even though I have never used Ruby and come from a C# background, the syntax feels natural and the code is enjoyable to write. All the good things aside, I also got caught by a little gotcha of the language.
In the world of CoffeeScript, global variables must be explicitly declared, using either the @ or the window keyword, and most tutorials recommend @ for its succinctness:
@luckyNumber = 4
This attaches luckyNumber to the window object, making it accessible globally. The tricky thing is that if you make the same declaration in a class:
class MyClass
  constructor: (@luckyNumber = 5) ->
all of a sudden, @luckyNumber becomes a property of MyClass and is not attached to the window object anymore! Now consider the following situation:
@luckyNumber = 4
class MyClass
  constructor: (@luckyNumber = 5) ->
  showLuckyNumber: () ->
    alert(@luckyNumber)
(new MyClass).showLuckyNumber()
Is it going to give you 4 or 5? ... Yes, you probably guessed right: it's 5. To access the global variable, you need:
showLuckyNumber: () ->
  alert(window.luckyNumber)
While tricky, this is not exactly a CoffeeScript problem, but weirdness inherited from JavaScript. Remember the golden rule: "It's just JavaScript".
Wednesday, September 15, 2010
A redemption of Waterfall model
It is almost common knowledge now that the waterfall model does not work, especially if you talk to programmers leaning toward the agile persuasion. In this article I am going to give the waterfall model its deserved credit, and look at software development from a different perspective.
Before starting, I need to first make a disclaimer -- I am not talking about "strict" waterfall or "strict" agile; I don't think those extreme cases apply to most real-life situations. To me, any development process that falls into defining clear phases (initiation, analysis, design, development, testing) is using the waterfall model, regardless of the compromises made to accommodate different circumstances.
First, let's talk a little about history. The waterfall model originated in the manufacturing and construction industries. It not only suits the limitations of projects in those industries (where flexibility is impossible once real goods are involved), but also suits their management method -- the Command and Control management method.
If a company using the command and control management method tries to use an agile development model, guess what will happen? Well, agile practices rely heavily on making constant changes based on new discoveries and shifting requirements, which need to travel back up to the top and come back down to the developers. When too many changes are pending, the decision maker becomes a bottleneck while the developers are held up until a decision can be made. This not only creates more pressure on the decision maker but would actually delay the project delivery.
On the other hand, the waterfall model tends to minimize the communication required between decision maker and developers. While the project is in the analysis/design phase, developers can be given other tasks, like maintaining existing projects. Once construction starts, developers are called in and start doing the work. There will be only occasional cases where changes are needed AND possible, giving the decision maker plenty of leisure to examine issues that pop up without delaying the project much.
Of course, the better solution would be to make the management process more decentralized to avoid the decision-making bottleneck. But unfortunately, real-life situations are normally far from ideal. Maybe there aren't enough competent developers to delegate responsibilities to (and no, you don't want to set everybody loose to wreak havoc on the code base); or the company has been burnt by having too many fingers making decisions; or the business model can tolerate low-quality software, so improving the development process is a lower priority; or maybe simply just because.
You have to really think hard about the situation before applying any best practices or the hottest methodologies. Sometimes you may have to make compromises and take the second-best option.
Monday, August 9, 2010
Premature Abstraction
Everyone knows that premature optimization is the root of all evil, and in extreme cases the programmer who does such a thing will be stoned to death.
But not many people seem to be bothered by premature abstraction, or encapsulation, or whatever people fancy calling it. Here is what I mean by premature abstraction: creating a train of complex frameworks to perform simple tasks, just to make them future-proof.
This is a natural tendency as one becomes more and more fluent in OO design, since one of the main advantages of OO is abstraction, making code reusable and extensible. But as when making a dish, if you put in too much of any sauce, it will ruin the taste. Abstraction is no exception.
Here is what will actually happen when you design something to be "future-proof": most of the anticipated requirements will be forgotten, and many new ones will show up from nowhere, on things you never thought of. When that happens, it'll be a painful experience to refactor that neatly designed and implemented framework to accommodate the new requirements. If the project happens to be behind schedule at the time, which is always, guess what: you may not get the leisure and time to refactor the framework, and hacks/workarounds will be applied, making it even harder to refactor. In fact, once enough duct tape has been applied to that originally neatly designed framework, nobody will dare to touch it anymore.
Now that I think about it, there is actually a proper name for this: it is called over-engineering. And it is quickly surpassing premature optimization to become the new root of all evil.
I will stone you next time I see you do that.