It is almost common knowledge now that the waterfall model does not work, especially if you talk to programmers of the agile persuasion. In this article I am going to give the waterfall model the credit it deserves, and look at software development from a different perspective.
Before I start, I need to make a disclaimer -- I am not talking about "strict" waterfall or "strict" agile; I don't think those extreme cases apply to most real-life situations. To me, any development process that is divided into clearly defined phases (initiation, analysis, design, development, testing) is using the waterfall model, regardless of the compromises made to accommodate different circumstances.
First, a little history. The waterfall model originated in the manufacturing and construction industries. It suits not only the constraints of projects in those industries (where flexibility is impossible once real goods are involved), but also their management style -- the command-and-control method.
If a company using command-and-control management tries to adopt an agile development model, guess what happens? Agile practices rely heavily on making constant changes based on new discoveries and shifting requirements, and each change has to travel up to the top and come back down to the developers. When too many changes are pending, the decision maker becomes a bottleneck and the developers are held up until a decision can be made. This not only puts more pressure on the decision maker but also delays the delivery of the project.
The waterfall model, on the other hand, tends to minimize the communication required between the decision maker and the developers. While the project is in the analysis/design phase, developers can be given other tasks, such as maintaining existing projects. Once construction starts, developers are called in and start doing the work. There will be only occasional cases where changes are needed AND possible, giving the decision maker plenty of leisure to examine issues as they pop up without delaying the project much.
Of course, the better solution would be to decentralize the management process and avoid the decision-making bottleneck altogether. Unfortunately, real-life situations are normally far from ideal. Maybe there aren't enough competent developers to delegate responsibilities to (and no, you don't want to set everybody loose to wreak havoc on the code base); or the company has been burnt by having too many fingers making decisions; or the business model can tolerate low-quality software, so improving the development process is a low priority; or maybe simply just because.
You have to think hard about your situation before applying any best practice or the hottest methodology. Sometimes you have to make compromises and take the second-best option.
Sharing my thoughts on software development
Wednesday, September 15, 2010
Monday, August 9, 2010
Premature Abstraction
Everyone knows that premature optimization is the root of all evil, and that in extreme cases the programmer who commits it will be stoned to death.
But not many people seem to be bothered by premature abstraction (or encapsulation, or whatever people fancy calling it). Here is what I mean by premature abstraction: building a complex framework to perform simple tasks, just to make them future-proof.
This is a natural tendency as one becomes more and more fluent in OO design, since one of the main advantages of OO is abstraction, which makes code reusable and extensible. But as with cooking, too much of any sauce will ruin the dish. Abstraction is no exception.
Here is what actually happens when you design something to be "future-proof": most of the anticipated requirements never materialize, and many new ones show up out of nowhere, on things you never thought of. When that happens, it is a painful experience to refactor that neatly designed and implemented framework to accommodate the new requirements. If the project happens to be behind schedule at the time (which is always), guess what: you may not get the leisure and time to refactor the framework, so hacks and workarounds will be applied, making it even harder to refactor later. In fact, once enough duct tape has been applied to that originally neat framework, nobody will dare to touch it any more.
Now that I think about it, there is actually a proper name for this: over-engineering. And it is quickly surpassing premature optimization to become the new root of all evil.
I will stone you next time I see you do that.
Friday, March 5, 2010
You don't need a publisher
I was talking to my imaginary friend Joe the other day, trying to figure out why there still isn't a good website for reading novels. We were discussing why book publishers should release books online in the Internet era, and it struck me: why were publishers needed in the first place?
Back in the day, book publishing was a huge undertaking that no small entity (a.k.a. an author) could manage alone, and in order to reach a wide audience without devoting their lives to marketing (which many are not good at), authors needed publishers. In fact, things haven't changed much since! Book publishing is still a tremendous effort: you have to get editors to review the book, advertisers to market it, designers to do the book design, and printing companies to print it, then go through a chain of distribution to get the books to readers.
And the result? Authors don't make that much. According to this post from a New York Times best-selling author, she made a staggering $24,517.36 on a book that sold 61,663 copies -- about 40 cents for every copy sold at a retail price of $7.99. That's very little, but considering the amount of work the publisher has to do, it is only fair. Or is it?
In a business partnership, each partner contributes value to the business and takes a share of the profit based on that contribution. The contribution is not always easy to measure, but the split has to be mostly fair or the partnership won't work. Let's take a look at what the publisher brings to the table that warrants a 95% cut [1].
Editor Review
Editors are quite helpful: they check for spelling, grammar and logical errors, and they also give good advice on improving the book, or edit it directly.
Marketing
While authors can do a lot of the marketing themselves (and are increasingly required to), paid marketing campaigns and advertisements still have their value. After all, who doesn't want to sell a few more books?
Printing and Distribution
These are normally handled by separate companies. Printing and distribution are a lot of work, and shelf space is costly.
In the traditional publishing model, then, publishers do bring a lot to the table to warrant their share of the profit. But things have changed! Advances in computer technology and the Internet have made a lot of things obsolete, including publishers, and here is why:
Editor Review
Spell checkers will weed out most typos and simple errors. A few helpful readers or a fellow author can do the proofreading and catch logical errors. In fact, readers will be delighted to get a sneak peek at your newest book; it's a privilege! And even if errors manage to sneak into your book, you can always go back and fix them, a luxury you don't have with paperbacks.
And what beats reader feedback on how to improve your book? After all, the definition of a good book is one that people like enough to buy.
Marketing
Authors already do a lot of marketing these days, and with the help of the Internet, self-promotion has never been easier! If you really think you need professional marketing, no one can stop you from hiring a marketing company, and you get to decide whether it's worth the cost.
Printing and Distribution
Needless to say, distributing books online costs a fraction of what the traditional printing and distribution model does. And there is always a print-on-demand option for readers who prefer paperbacks.
Notes:
[1] In publishers' defense, they do not keep 95% of the sales for themselves; the money goes to all the parties involved in publishing the book. But to the author, it doesn't make a difference where the money went.
Sunday, February 28, 2010
Google AI Challenge
I recently competed in the Google AI Challenge. It was a lot of fun and I learned quite a few things. Here is a quick explanation of the strategy my bot employs.
First, it checks whether the two bots are separated.
If the two bots are still connected, it uses minimax with alpha-beta pruning to determine the best move. The evaluation function is as follows:
1. If I win: +1000; if the opponent wins: -1000.
2. If the bots are disconnected and I have more open spots: +800; if the opponent has more open spots: -800; if equal: 0. Open spots are counted by flood fill.
3. If the bots are still connected:
find the articulation points and divide the map into zones
find the largest zones
use breadth-first search to divide the zones into spots I can reach first and spots the enemy can reach first
return the final score as (spots I can reach first) - (spots the enemy can reach first)
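The step-3 territory heuristic boils down to two breadth-first searches, one from each bot's head. The bot itself is written in C#; below is a minimal Python sketch of just that idea. The function names (`bfs_distances`, `territory_score`) and the grid encoding ('#' for walls/trails) are my own assumptions, not the bot's actual code.

```python
from collections import deque

def bfs_distances(grid, start):
    """Breadth-first distances from start to every reachable open cell."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] != '#' and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def territory_score(grid, me, enemy):
    """Spots I can reach first minus spots the enemy can reach first."""
    mine, theirs = bfs_distances(grid, me), bfs_distances(grid, enemy)
    score = 0
    for cell in set(mine) | set(theirs):
        d_me = mine.get(cell, float('inf'))
        d_enemy = theirs.get(cell, float('inf'))
        if d_me < d_enemy:
            score += 1          # I get there first
        elif d_enemy < d_me:
            score -= 1          # the enemy gets there first
    return score
```

Cells both bots reach in the same number of moves count for neither side, which keeps the score symmetric on a mirrored position.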
The whole alpha-beta search is wrapped in an iterative deepening loop.
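To illustrate how the iterative deepening loop wraps alpha-beta: the sketch below re-searches at increasing depths until a time budget runs out, keeping the result of the deepest completed search. It runs on a toy game tree (nested lists with leaf scores standing in for real game states), and the `evaluate` stand-in heuristic and all names are mine, not the bot's actual C# code.

```python
import time

def alphabeta(node, depth, alpha, beta, maximizing):
    """Minimax with alpha-beta pruning over a toy tree where internal
    nodes are lists of children and leaves are heuristic scores."""
    if not isinstance(node, list) or depth == 0:
        return evaluate(node)
    if maximizing:
        best = float('-inf')
        for child in node:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if beta <= alpha:   # prune: the opponent won't allow this branch
                break
        return best
    else:
        best = float('inf')
        for child in node:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

def evaluate(node):
    """Stand-in heuristic: a leaf scores itself; a node cut off by the
    depth limit scores as its first reachable leaf (a crude estimate)."""
    while isinstance(node, list):
        node = node[0]
    return node

def iterative_deepening(root, time_budget=0.1):
    """Search deeper and deeper until the time budget runs out."""
    deadline = time.monotonic() + time_budget
    depth, best = 1, evaluate(root)
    while time.monotonic() < deadline and depth <= 16:
        best = alphabeta(root, depth, float('-inf'), float('inf'), True)
        depth += 1
    return best
```

The appeal of iterative deepening in a bot with a per-move time limit is that there is always a completed result to fall back on when time expires, and the shallow searches cost little compared to the deepest one.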
When the two bots are separated, the bot simply uses breadth-first flood fill to determine how many open spots are left after each move, then chooses the move that leaves the most spots open. If two moves are equally good, it hugs the wall.
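The separated-bot survival strategy can be sketched like this: try each legal move, flood fill from the landing square (with the current head marked as part of the trail), and keep the move with the largest count. This is a Python sketch of the idea only, not the bot's actual C# code, and it omits the wall-hugging tie-break for brevity.

```python
from collections import deque

def flood_fill_count(grid, start):
    """Count open cells reachable from start; '#' is a wall or trail."""
    rows, cols = len(grid), len(grid[0])
    seen = {start}
    queue = deque([start])
    count = 0
    while queue:
        r, c = queue.popleft()
        count += 1
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] != '#' and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return count

def best_survival_move(grid, pos):
    """Return the move (dr, dc) that leaves the most open spots."""
    grid = [list(row) for row in grid]
    grid[pos[0]][pos[1]] = '#'          # our head becomes part of our trail
    best_move, best_open = None, -1
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = pos[0] + dr, pos[1] + dc
        if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                and grid[nr][nc] != '#':
            open_spots = flood_fill_count(grid, (nr, nc))
            if open_spots > best_open:
                best_move, best_open = (dr, dc), open_spots
    return best_move
```

Counting reachable cells after the move, rather than just checking that the move is legal, is what stops the bot from walling itself into a small pocket.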
The bot is written in C# and can be found here: http://github.com/analyst74/Tron-Bot
Update: here is a great write-up about the challenge, with many related links: http://www.benzedrine.cx/tron/
Thursday, January 21, 2010
on Google vs China
Now the immediate heat of the discussion on Google's new approach to China has settled a little (until new news comes out from either party).
A lot of interesting analysis and speculation has emerged from the blogosphere and the news media, from bashing China's human rights problems (if you didn't know yet) to questioning Google's real motives. You can find it all here (courtesy of Google!).
One thing everyone agrees on is that this is a significant event. But how significant? Will it change the world?
Fast forward forty years.
It's the year 2050, and we have just achieved the 80% CO2 reduction goal set out forty years ago, thanks to innovations in the energy sector. On the news:
(Yahoo News) A syndicate of Internet companies led by prominent names like Google, Microsoft and eBay has threatened to blacklist Australia if a new tax is implemented. The tax would be applied to companies that do not voluntarily comply with the country's censoring rules, and would cover the costs of censoring at the ISP level. Rumor has it that proponents of the tax are rapidly losing support from voters, who fear the threats may become real.
(Apple Daily) In a bold move to tackle a slumping economy and increasing domestic unrest, the Japanese government has announced a plan to lift regulations on the Internet, declaring that "the Internet has sovereignty of its own and cannot be governed by traditional means". The plan includes establishing a governing entity called the UN (United Netizens) and handing over all regulatory duties to it; the UN council body will consist of representatives from prominent Internet companies and user groups. Japan urges other countries to follow this "great leap forward in net neutrality". Economic analysts say such a move could attract over 5 trillion in foreign investment to Japan, especially from countries with complex or repressive Internet laws like the US and China.
How does this future sound?
Maybe, just maybe, one day the Internet could be free.