Sunday, December 6, 2009
There are two brilliant managers called Jack and Gordon.
They both work for the same company -- one of the nation's biggest corporations, with huge influence in the market. Having a job in the company is considered a privilege, let alone a managerial position. One has to be the cream of a very brilliant crop to get there.
Jack is a very smart man, and he has gained enough experience in his field to make him a distinguished expert in the company. This is not to say he lacks social skills, though; in fact, people admit that he possesses both technical expertise and some political talent. He is a star, leading a team of experts that solves some of the company's most difficult problems. His talent is both widely appreciated by senior management and admired by fellow employees, even those who are not on his team.
Gordon is also a very smart man, although not quite as sharp as Jack. Most people consider him a people manager, compared to Jack, who is both a remarkable manager AND a technical leader. Gordon's team is also one of the most successful in the company; the product it created has been received outstandingly well and brings the company a steady stream of income measured in the millions. Gordon's success, as people say, is widely attributed to a great idea (which he did not come up with himself) and the superb team he got stuck with. His team is indeed superb: many of its members have proved themselves to be A players, and one young guy is considered a wizard even among that team of A players, a true star employee. Anyone lucky enough to be stuck with a good idea and a superb team like Gordon's would have succeeded, probably more wildly than Gordon has.
Fast forward fifteen years.
Jack has gotten old and has stepped away from the company's core team, having lost his sharpness a while ago. The company still treats him well, though, and he is enjoying his life with a much less stressful job. He misses the glory days, but nobody can fight time; his time has passed.
Gordon has lost the best members of his original team: three of them, including the star employee, were promoted "past" him, while others eventually left and got promotions at other companies. But the team somehow managed to survive; he always seemed lucky enough to recruit a few remarkable employees who kept the team, and Gordon himself, afloat -- until about five years ago, when he was promoted to senior management. "It is all due to his political talent," some people say.
But the company's chief architect disagrees: "He (Gordon) is the best manager I've ever met, and the time I spent working for him was the best of my career. He not only understood both the business and technical sides and shielded us from senior-management pressure, but also cared for us personally. He fought tooth and nail with HR to get us bigger raises and single-handedly convinced senior management to promote me to the architect team. Without him, I wouldn't be where I am now. I am so glad that I am reporting to him again."
The chief architect was not the only one Gordon promoted -- almost a dozen of the company's current managers used to work on Gordon's team and have since been recognized and promoted. They are equally thankful for the chance to have worked for Gordon. Jack, on the other hand, is remembered by nobody except a few old-timers, and he was never "lucky" enough to recruit people as talented as Gordon's. The "B players" who worked for him have either stayed where they were or have since left the company.
"Luck" is really important, as people say.
Monday, November 9, 2009
Does Inheritance Break Encapsulation?
"Because inheritance exposes a subclass to details of its parent's implementation, it's often said that 'inheritance breaks encapsulation.'" (Gang of Four 1995:19)
No, I did not read the GoF book; I'm quoting them because I believe quoting important people makes my point more correct.
So we know that inheritance breaks encapsulation because it exposes a subclass to details of its parent's implementation. But what is the harm of exposing a parent's implementation to its subclasses? Why does it matter?
Imagine this situation: you are coding a very core class for a system, and you decide to make a field protected so it's convenient for subclasses to access. After all, it is a core member field that all subclasses need to use, and declaring it protected seemed like a reasonable thing to do. Everything worked fine and smoothly in the beginning. Then five years passed; assuming the system is actually making money for your company, it has grown substantially through a steady stream of feature requests, and many subclasses were created that inherit from the core class you wrote. Now imagine you have to change that protected field, and *bam*, you're stuck! There are many subclasses spread across many assemblies, and nobody has a complete list of them all! So you either end up not changing it, or you make the change, fix as many subclasses as you can find, and pray you didn't miss any.
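To make that concrete, here is a minimal C# sketch of the coupling (all class, field, and key names here are hypothetical, not from any real codebase):

using System.Collections;

// The core class, exposing its internal state to every subclass.
public class CoreEntity
{
    // Convenient at first: every subclass can read and write it directly.
    protected Hashtable metadata = new Hashtable();
}

// Five years and many assemblies later...
public class ContractEntity : CoreEntity
{
    public string GetOwner()
    {
        // This subclass now depends on 'metadata' existing, being a Hashtable,
        // and holding an "owner" entry. Rename the field, change its type, or
        // move the data elsewhere in CoreEntity, and this code breaks.
        return (string)metadata["owner"];
    }
}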
Does this sound familiar? Yes, this is the exact scenario that plays out when someone declares a field public for convenience when it should have been properly encapsulated. I hope it's not you, oops! (I'll be honest, I've done this before!)
I believe this is the context in which those influential people declared that inheritance breaks encapsulation. But a class does not have to expose its implementation to its subclasses. Take Hashtable, for example: it is carefully designed to hide all implementation details not just from the rest of the world, but also from its own subclasses. So if there is a change to the default hashing method, to how data is stored internally, or to any other implementation detail, the subclasses created by developers across the world will not break! I believe a more proper statement would be:
Overusing protected members breaks encapsulation.
And the remedy to that? Don't overuse the protected keyword. :)
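In code, the remedy can look like this minimal sketch (same hypothetical names as above): keep the field private and hand subclasses a narrow, stable accessor instead.

using System.Collections;

public class CoreEntity
{
    // The representation stays private: only CoreEntity knows (or cares)
    // that the data currently lives in a Hashtable.
    private Hashtable metadata = new Hashtable();

    // Subclasses get a narrow, stable contract instead of the raw field.
    protected object GetMetadata(string key)
    {
        return metadata[key];
    }
}

// If the internal store later becomes a Dictionary, a cache, or a database
// call, subclasses written against GetMetadata keep working unchanged.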
Monday, October 12, 2009
The problem of too many layers of indirection (abstraction)
So everyone has probably heard this by now:
"All problems in computer science can be solved by another level of indirection."
and the corollary:
"...except for the problem of too many layers of indirection."
While it sounds smart and witty, I have never quite figured out what the corollary is referring to. After all, our society is built on layers and layers of indirection (abstraction), and that's how we advanced into modern society. It is a proven concept.
Well, let me step back a little and talk about an interesting issue I ran into recently. It was decided that our enterprise contract-management software was not responsive enough, and a few engineers were tasked with taking a look at the problem.
Thanks to the advancement of software development, we now have awesome tools like dotTrace, among others, to spare us the daunting task of inserting time-stamping instructions into every single method in the application. Looking at the profiling results, an interesting method quickly grabbed my attention: a heavy lifter that makes up 40% of the overall page load time. Upon closer investigation, I realized it was our new navigation tree, which loads a gigantic metadata file containing all the information there is to know (data dependencies, context relations, data validation rules, permission rules, and so on). To be more specific:
- it was reloading the (static) data on every page load;
- only a small amount of the data (the part related to the current page) was actually required;
- it was executed twice on every page load.
Well, the truth is, the original developer implemented the navigation module nicely, with caching and all, so the heavy-lifting method would never be called twice. Then, a few months later, someone had to fix a caching bug: the cached navigation tree became corrupted for some unknown reason. He looked around and found a little method that was nicely packaged and seemed harmless, and that would solve his problem by rebuilding the corrupted data. It was a perfectly logical choice on his part; little did he know that, about three abstraction layers down the road, it reads a gigantic XML file and creates a few hundred objects on the fly.
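A minimal sketch of the shape of that trap (again, all names here are hypothetical, not our actual code): each layer looks cheap from its name alone, and only the bottom one touches the disk.

using System.Collections;
using System.Xml;

public static class NavigationModule
{
    // Layer 1: looks like a harmless in-memory fixup.
    public static Hashtable RebuildNavigationTree()
    {
        return BuildTreeFromMetadata();
    }

    // Layer 2: still looks cheap.
    private static Hashtable BuildTreeFromMetadata()
    {
        var tree = new Hashtable();
        foreach (XmlNode node in LoadMetadataFile().SelectNodes("//page"))
        {
            tree[node.Attributes["id"].Value] = node; // a few hundred of these
        }
        return tree;
    }

    // Layer 3: the surprise. Reads the gigantic metadata file from disk
    // and parses it, on every single call.
    private static XmlDocument LoadMetadataFile()
    {
        var doc = new XmlDocument();
        doc.Load("navigation-metadata.xml");
        return doc;
    }
}

Nothing in RebuildNavigationTree's signature warns the caller that layer 3 exists.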
Maybe the original developer should have documented this code better; maybe the other developer should have been more cautious when using other people's code. But the real issue is that abstraction hides so much detail that it gives you a false sense of confidence. It makes you believe you know everything; after all, the method name and comment are sufficient to describe what it does, right? (Hint: no.) If they were, they would have to explain what all of its function calls do, and what the functions called by those functions do, and the functions called by those functions called by those functions it calls...
Every abstraction layer adds a little overhead not only for the CPU, but also for the poor human who has to read the code. Be careful: those little overheads may come back and bite you one day.