Friday, December 29, 2006

2006 - My Favorite Picks from the Blogosphere

Oh it's that time of the continuum when we look back at the passing of one more year and contemplate what it meant to each one of us in the broader spectrum of life. I am not trying to be cynical or delve into the psycho-analysis of what went wrong in the last 365 days of the calendar .. Here is a peek at my best-of-2006 from the blogosphere ..

In no particular order ::

Stevey's Good Agile, Bad Agile

For all agilists, the party is over. Steve Yegge is on song tearing apart the agile purists with the good-agile of Google. Great read.

Sriram's Lisp is Sin

A fascinating walk down the road of the language of Gods. This post has been discussed in various forums by a multitude of Lisp practitioners.

Joel's Development Abstraction Layer

A programmer's sole activity is to produce beautiful code, but it needs layers of abstraction from the organization to create a software product out of it in the market. He concludes with an analogy of Dolly Parton's performance being supported by the leakproof abstractions provided by the incredible infrastructure of managers, musicians, recording technicians, record companies, roadies, hairdressers, and publicists behind her.

Jao's Category Theory with Bananas

A very good introduction to category theory in programming languages.

Ola Bini on The Dark Ages of programming languages

Ola Bini's search continues for the next programming language that does not suck - very well written by one of the prolific programmers of today.

Another Gem from Joel : Can your Programming Language Do This ?

An eye opener for all schools that have started teaching Java as the first programming language. Once again Joel highlights the importance of functional programming by evangelizing the map/reduce techniques.

Personally ...

2006 was the year when

  • I started blogging - two months down the line I will be celebrating the first anniversary of my Ruminations blog.

  • I attended the first JavaOne of my life - it was exciting to be among the thousands of Java programmers swarming inside the Moscone Center.

  • I made my first venture into the Ruby land and lived up to a promise that I will learn at least one new programming language this year. Incidentally I have also started playing around with Scala .. my initial impressions are very good. 2007 will definitely see me closer to the languages of the Gods, to rediscover the joy of programming.

  • I finally came out of the EJB fiasco and dug deep into the Spring experience, along with the usual accompaniments of Hibernate and the rest of the lightweight stack elements. EJB3 is better than EJB2, but Spring is way ahead.

  • I became a huge fan of AOP.

That's all, folks. I would be eager to hear from all of you about your interesting reads of 2006. Finally, I would like to sign off 2006 with the following revelation from Paul Graham :

The pointy-haired boss is a manager who doesn't program. So the surest way to avoid becoming him is to stay a programmer. What tempts programmers to become managers are companies with old-fashioned corporate structure, where the only way to advance in salary and prestige is to go into management. So if you want to avoid becoming a PHB, avoid such companies, and work for (or start) startups.

That's the spirit - the spirit of a programmer : be a programmer, remain a programmer and have respect for hacker-oriented programming languages (aka Lisp). My goal for 2007 - to get more enlightened in the eval land, which John McCarthy laid out way back in the 1960s.

Happy Holidays !!

Tuesday, December 26, 2006

Domain Driven Design : Control Domain Object Lifetimes using Spring Custom Scoped Beans

In his celebrated book on Domain Driven Design, Eric Evans mentions that one of the biggest challenges of maintaining the integrity of a domain model is managing the lifecycle of a domain object. He states
But other objects have longer lives, not all of which are spent in active memory. They have complex interdependencies with other objects. They go through changes of state to which invariants apply. Managing these objects presents challenges that can easily derail an attempt at MODEL-DRIVEN DESIGN.

Two points immediately stand out from the above, when we think of modeling a system based on principles of DDD :

  1. Domain object lifecycles need to be managed

  2. Lifecycle management needs to be decoupled from mainstream business logic

Both of the above issues can be addressed using an IoC container like Spring. And the new custom scopes of Spring 2.0 give us a real shot in the arm towards declarative lifecycle management of domain objects.

Custom Scopes in Spring 2.0

Till 2.0, Spring provided only two levels of granularity at which you could declare your beans -

  1. Singleton, which scopes a single bean definition to a single object instance per Spring IoC container and

  2. Prototype, which scopes a single bean definition to any number of object instances. The prototype scope results in creation of a new bean instance every time a request for that specific bean is made.

Hence any other intermediate granularity of bean lifecycles had to be managed explicitly by the application itself. Spring 2.0 comes with three additional levels of bean scopes out of the box, as well as the framework to create your custom scope that suits your application needs. The ones that come out of the box are request, session and global session, which are described in detail in the Spring Reference documentation.
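The difference between the two pre-2.0 scopes is easy to see in a plain-Java sketch (my own stand-in for the container, with illustrative names - not actual Spring code): a singleton definition caches one instance, a prototype definition runs the factory on every request.

```java
import java.util.HashMap;
import java.util.Map;

// A minimal model of a bean factory supporting only the two classic scopes.
class MiniBeanFactory {
    interface ObjectFactory { Object getObject(); }

    private final Map<String, ObjectFactory> definitions = new HashMap<String, ObjectFactory>();
    private final Map<String, Boolean> singleton = new HashMap<String, Boolean>();
    private final Map<String, Object> singletonCache = new HashMap<String, Object>();

    void register(String name, boolean isSingleton, ObjectFactory factory) {
        definitions.put(name, factory);
        singleton.put(name, isSingleton);
    }

    Object getBean(String name) {
        if (singleton.get(name)) {
            // singleton : one instance per definition, cached on first access
            Object cached = singletonCache.get(name);
            if (cached == null) {
                cached = definitions.get(name).getObject();
                singletonCache.put(name, cached);
            }
            return cached;
        }
        // prototype : a fresh instance on every request
        return definitions.get(name).getObject();
    }
}
```

Every other granularity - "one instance per basket", "one instance per conversation" - falls between these two extremes, which is exactly the gap that custom scopes fill.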

In this blog post, I will elaborate on how the domain model can be enriched by defining beans at application-defined custom scopes. I will end the post with an example of how declaring the scope in configuration results in declarative lifecycle management of a domain object, irrespective of the layer to which the object belongs.

Scope your Domain Objects at Business Level Granularity

Prior to 2.0, non-singleton beans could only be defined with a lifecycle that creates a new instance on every access - aka the prototype scope. For beans not created by Spring, there used to be techniques like Field Level Injection and the Service Location Strategy which achieve the same effect as prototype beans.

Let us consider a real life example from the financial domain, specifically a back office system for Capital Market Trading and Settlement. We have the example of a BasketTrade, which is essentially a collection of Trades that match some criteria and need to be processed together. No surprises here, we model a BasketTrade as :

public class BasketTrade {
  private List<Trade> trades = new ArrayList<Trade>();
  // ..
  // ..
}

The point to note is that the abstraction BasketTrade is just a virtual container for the collection of trades and has been created for the convenience of atomic processing of the underlying Trade objects. It is typically the root of an Aggregate, as Eric defines in his DDD book, which ceases to exist as soon as the processing is complete, e.g. all component trades are committed to the Repository.

In other words, we can define a custom scope (say, basket-scope), which defines the lifecycle of a BasketTrade bean. Typically, the application provides a BasketingService, which can be modeled as a Singleton, and can contain a BasketTrade as a scoped bean within it, injected by the IoC container.

public class BasketingService {
  private BasketTrade basket;

  public void addToBasket(Trade trade) {
    // add the trade to the current basket
  }

  public void setBasket(BasketTrade basket) {
    this.basket = basket;
  }

  public BasketTrade getBasket() {
    return basket;
  }
}
We have two collaborating beans with different lifecycles, which can be wired up with the custom scope definitions as part of the configuration. Spring 2.0 offers ScopedProxyFactoryBean for this purpose - a convenient proxy factory bean for scoped objects. Here is the XML which wires up the domain objects with custom lifecycles :

<bean class="org.springframework.beans.factory.config.CustomScopeConfigurer">
  <property name="scopes">
    <map>
      <entry key="basket"><bean class=""/></entry>
    </map>
  </property>
</bean>

<bean id="basketTradeTarget"
  class="" scope="basket" lazy-init="true"/>

<bean id="basketTradeProxy"
  class="org.springframework.aop.scope.ScopedProxyFactoryBean">
  <property name="targetBeanName">
    <value>basketTradeTarget</value>
  </property>
</bean>

<bean id="basketingService" class="">
  <property name="basket" ref="basketTradeProxy"/>
</bean>

Defining the BasketScope

The following is a very naive implementation of the custom basket scope. This is just for demonstrating the power of custom scopes in controlling the lifecycles of domain objects.

public class BasketScope implements Scope {
  private static final Map scope = new ConcurrentHashMap();

  public String getConversationId() {
    return null;
  }

  public Object get(String name, ObjectFactory objectFactory) {
    Object obj = scope.get(name);
    if (obj == null) {
      obj = objectFactory.getObject();
      scope.put(name, obj);
    }
    return obj;
  }

  public Object remove(String name) {
    return scope.remove(name);
  }

  public void registerDestructionCallback(String string, Runnable runnable) {
    // register any custom callback
  }
}

Removing Objects with Custom Scope

This is an area which is not very well covered in the Spring Reference documentation. No problem - the helpful Spring community was prompt in clearing up all my confusion (see this thread).

For the out-of-the-box implementations of request and session scopes, the lifetime of the scoped bean ends automatically with the end of the request or session - and one can implement HttpSessionBindingListener to plug in custom destruction callback. Have a look at the implementation of DestructionCallbackBindingListener in class org.springframework.web.context.request.ServletRequestAttributes :

private static class DestructionCallbackBindingListener
    implements HttpSessionBindingListener {

  private final Runnable destructionCallback;

  public DestructionCallbackBindingListener(Runnable destructionCallback) {
    this.destructionCallback = destructionCallback;
  }

  public void valueBound(HttpSessionBindingEvent event) {
  }

  public void valueUnbound(HttpSessionBindingEvent event) {
    this.destructionCallback.run();
  }
}

For a custom scope, the simplest way will be to invoke the object removal manually in the workflow. For the above example with BasketTrade, the destruction code looks like :

ScopedObject so = (ScopedObject) basketingService.getBasket();
so.removeFromScope();

Note that proxies returned by ScopedProxyFactoryBean implement the ScopedObject interface, which allows removing the corresponding object from the scope, seamlessly creating a new instance in the scope on next access.
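The mechanics of that seamless re-creation can be sketched without any Spring machinery (all names here are my own illustrative stand-ins, not the actual ScopedProxyFactoryBean implementation): the client always talks to the proxy, the proxy resolves the target through the scope map on every call, and removeFromScope() simply evicts the current instance so the next access creates a new one.

```java
import java.util.Map;

// A hand-rolled model of a scoped proxy : resolve-on-every-call plus eviction.
class ScopedProxy {
    interface ObjectFactory { Object getObject(); }

    private final String name;
    private final Map<String, Object> scope;
    private final ObjectFactory factory;

    ScopedProxy(String name, Map<String, Object> scope, ObjectFactory factory) {
        this.name = name;
        this.scope = scope;
        this.factory = factory;
    }

    // every call delegated through the proxy resolves the target here
    Object getTarget() {
        Object obj = scope.get(name);
        if (obj == null) {
            obj = factory.getObject();   // lazily create a fresh instance in the scope
            scope.put(name, obj);
        }
        return obj;
    }

    // mirrors ScopedObject.removeFromScope() : ends the current instance's life
    void removeFromScope() {
        scope.remove(name);
    }
}
```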

And now on to a neat trick. We can encapsulate the invocation of the destruction callback into a Seam-style annotation marking the end of the conversation scope. In the above code for BasketingService, suppose we would like to end the scope of the basket after a commit to the database of all constituent trades - we have a method commit(), which after database commit will mark the end of the lifetime of the current basket. We mark this declaratively using the @End annotation.

public class BasketingService {
  // as above

  @End
  public BasketTrade commit() {
    // database commit logic
    return basket;
  }
}

Finally the implementation of @End using the usual Spring AOP magic ..

public class DestroyScope {

  public void doDestroy(Object retVal) {
    ScopedObject so = (ScopedObject) retVal;
    so.removeFromScope();
  }
}

and the corresponding entry in configuration xml :

<bean id="destroyAspect" class=""></bean>


One of the great benefits of custom scopes in Spring 2.0 is that they allow declarative lifecycle management of domain objects without the service location API intruding into your business logic code. In the above example, the two wired beans BasketingService and BasketTrade have different lifecycles - yet the collaborating code and the associated business logic are completely oblivious of this difference. The declarative @End annotation, along with the Spring AOP magic, works behind the scenes to automatically fetch a new instance of BasketTrade on the next access.

Tuesday, December 12, 2006

Domain Driven Design : Service Injection Strategies in Spring and their Pitfalls - Part 2 - Service Injection into Aspects

In Part 1 of this series, I had discussed one way of injecting services into domain objects not instantiated by the Spring container - the @Configurable annotation, which provides a nice declarative semantics for encouraging rich domain models. I had also pointed out some of the pitfalls which can bite you in course of the implementation using the @Configurable technique.

One of the highlights to remember regarding the @Configurable approach is that the annotation works on a per-class basis and cannot be meaningfully enforced on class hierarchies all at once. Let us consider the case where we need to inject a service to a number of domain classes, not related by inheritance. I came across this situation recently in modeling the financial domain for developing a solution for a capital market back office system.

class Trade {
  // models a trade of a security for an account
}

class Settlement {
  // models a settlement of a trade
}

class Position {
  // models the security and cash position of an account
}

These three are only examples of the many domain classes which needed a validation service for the account on which they operate. It is fairly simple to inject the validation service using @Configurable - note that, being domain classes, these are not instantiated by the Spring container. Hence @Configurable works like a charm !

class Trade {
  private IAccountValidationService accountValidationService;
  // setter ..
  // ..
}

class Settlement {
  private IAccountValidationService accountValidationService;
  // setter ..
  // ..
}

class Position {
  private IAccountValidationService accountValidationService;
  // setter ..
  // ..
}

and we have the corresponding applicationContext.xml :

<bean id="trade"
  class="" scope="prototype">
  <property name="accountValidationService">
    <ref bean="defaultAccountValidationService"/>
  </property>
</bean>

<bean id="settlement"
  class="" scope="prototype">
  <property name="accountValidationService">
    <ref bean="defaultAccountValidationService"/>
  </property>
</bean>

<bean id="position"
  class="" scope="prototype">
  <property name="accountValidationService">
    <ref bean="defaultAccountValidationService"/>
  </property>
</bean>

<bean name="defaultAccountValidationService"
  class=""/>

One of the disturbing points of the above configuration is the boilerplate repetition of the service injection for accountValidationService. If tomorrow we need to change the validation strategy, we need to change the entries for all of them separately - a violation of DRY. And had it not been for this service, we would not have needed entries in the configuration file for these domain classes at all !

When Dealing with Repetitions, Think Aspects

Clearly the above domain classes cannot be related through any common parentage - so we cannot capture them directly by their head. Why not have an extra level of indirection that enables us to do so ? Adrian Colyer explains this strategy succinctly in this article and I will try to summarise my experience of using it in a domain modeling exercise.

Let us have the domain classes themselves advertise the services that they want to subscribe to ..

class Trade implements IAccountValidationClient {
  // ..
}

class Settlement implements IAccountValidationClient {
  // ..
}

class Position implements IAccountValidationClient {
  // ..
}

Aha .. now at least we have the head to catch - we need to determine how we can inject the service in all of them using the head. Think Pointcuts ..

pointcut clientCreation(IAccountValidationClient aClient) :
  initialization(IAccountValidationClient+.new(..)) &&
  !initialization(IAccountValidationClient.new(..)) &&
  this(aClient);
This will capture all instantiations of classes that subscribe to account validation service by implementing IAccountValidationClient. Once we have the instantiations intercepted through a pointcut, can we inject a service into each of them through an aspect ? Note that the service injection cannot be done through inter-type declarations or type introductions, since all of the classes will actually be using the service in-situ, while inter-type declaration introduces the new service off-site in the aspect definition. e.g.

class Trade implements IAccountValidationClient {
  // ..

  public void validate(..) {
    // validate account using validation service
    // the validation service has to be declared in-situ
  }
}
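Before bringing in the aspect, the "catch them by the head" idea itself can be illustrated in plain Java (my own sketch with illustrative names - the aspect below automates exactly this, removing even the constructor callback): a registry holds the configured service and pushes it into anything that implements the marker interface at construction time.

```java
// Illustrative stand-ins for the domain interfaces.
interface IAccountValidationService { boolean validate(String account); }

interface IAccountValidationClient {
    void setAccountValidationService(IAccountValidationService svc);
    IAccountValidationService getAccountValidationService();
}

// The registry plays the role of the injector aspect : it knows the service
// implementation once, and hands it to every client that surfaces.
class ServiceInjector {
    static IAccountValidationService service;   // wired once, like the aspect's property

    static void injectInto(Object bean) {       // the aspect does this at 'initialization'
        if (bean instanceof IAccountValidationClient) {
            IAccountValidationClient client = (IAccountValidationClient) bean;
            if (client.getAccountValidationService() == null) {
                client.setAccountValidationService(service);
            }
        }
    }
}

class Trade implements IAccountValidationClient {
    private IAccountValidationService accountValidationService;

    Trade() { ServiceInjector.injectInto(this); }   // the aspect makes even this call disappear

    public void setAccountValidationService(IAccountValidationService svc) {
        this.accountValidationService = svc;
    }
    public IAccountValidationService getAccountValidationService() {
        return accountValidationService;
    }
}
```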

Inject the Service into an Aspect

Create an aspect with the above pointcut, inject the service into it and use the injected service in an aspect advice to reinject it into the client using the matching joinpoint. The only question is how to inject the service into an aspect .. and this is where Spring rocks. Spring allows you to specify a factory method which the container will use for instantiation of the bean. And AspectJ exposes the method aspectOf for every aspect, which precisely fits the situation like a glove. Marry the two and what you have is pure magic :

<bean name="accountValidationServiceInjector"
  class="AccountValidationServiceInjector"
  factory-method="aspectOf">
  <property name="service"><ref bean="accountValidationService"/></property>
</bean>

The aspect gets the service injected after instantiation through the aspectOf factory method.
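What makes this work is that, for a singleton aspect, the AspectJ compiler generates a static aspectOf() accessor on the aspect's class. A hand-written sketch of the generated shape (illustrative, not actual ajc output) shows why Spring's factory-method hook fits like a glove - the container cannot new the aspect, but it can ask for the existing instance and then inject its properties :

```java
// Roughly what a singleton aspect compiles down to, as far as Spring is concerned.
class AccountValidationServiceInjectorSketch {
    private static final AccountValidationServiceInjectorSketch INSTANCE =
        new AccountValidationServiceInjectorSketch();

    private AccountValidationServiceInjectorSketch() { }   // no public constructor

    // the factory method Spring calls via factory-method="aspectOf"
    public static AccountValidationServiceInjectorSketch aspectOf() {
        return INSTANCE;
    }

    // ordinary setter injection then proceeds on the returned instance
    private Object service;
    public void setService(Object service) { this.service = service; }
    public Object getService() { return service; }
}
```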

The Missing Block in the Puzzle - the Advice that Gets Weaved

Here is the complete aspect :

public aspect AccountValidationServiceInjector {
  private IAccountValidationService service;

  public void setService(IAccountValidationService service) {
    this.service = service;
  }

  pointcut clientCreation(IAccountValidationClient aClient) :
    initialization(IAccountValidationClient+.new(..)) &&
    !initialization(IAccountValidationClient.new(..)) &&
    this(aClient);

  after(IAccountValidationClient aClient) returning :
    clientCreation(aClient) {
    if (aClient.getAccountValidationService() == null) {
      aClient.setAccountValidationService(service);
    }
  }
}

and the relevant portions of a client :

public class Settlement implements IAccountValidationClient {

  private IAccountValidationService accountValidationService;

  public IAccountValidationService getAccountValidationService() {
    return accountValidationService;
  }

  public void setAccountValidationService(
    IAccountValidationService aValidationService) {
    this.accountValidationService = aValidationService;
  }

  // ...
}

It does not need the @Configurable annotation and hence the configuration boilerplates disappear. And if we need to change the validation strategy, we need to change only one entry for the aspect accountValidationServiceInjector in the applicationContext.xml.

Strategy Specialization using @Configurable

The above technique injects an implementation of a service across all domain objects that publish their willingness for the service. It may so happen, that some of them may need to go for a specialized service implementation.

e.g. in our domain, the account validation service for Position needs to check if the Position Management Service is subscribed for the account in the contract with the Custodian. This calls for a special validation service to be injected for Position class. This can be done using @Configurable on top of the above generalized injection.

@Configurable
class Position {
  private IAccountValidationService accountValidationService;
  // setter ..
  // ..
}

Here are the corresponding entries in applicationContext.xml for the specialized service :

<bean name="specialAccountValidationService"
  class=""/>

<bean id="position"
  class="" scope="prototype">
  <property name="accountValidationService">
    <ref bean="specialAccountValidationService"/>
  </property>
</bean>

With the above specialization, all classes implementing IAccountValidationClient will be injected with DefaultAccountValidationService, except Position, which will get an instance of SpecialAccountValidationService. Note that this may require an explicit setting of aspect precedence as I mentioned in Part 1.


I found the above technique quite useful in injecting common services to domain classes at large. The main advantages were realized with reduced size of the configuration xml and ease of adaptability to changes in implementation.

The main pitfall with this approach is with respect to injection upon deserialization (the same as @Configurable), which can be addressed using the same technique that Ramnivas has adopted in fixing @Configurable.

Another pitfall of this approach is that all clients need to implement the interface explicitly and have the setters programmed within the domain class. But this is a one time effort and has no impact on future changes to service implementation. It may be an interesting exercise to use Java Generics to reduce the boilerplates that need to be coded in the domain abstractions. The important point to consider here is that every domain class may implement multiple service clients - I tried to model this with Java Generics, but only banged my head with type erasure getting in the way ..

class Trade implements IAppClient<IAccountValidationService>, IAppClient<IAuditService> {
  // ..
}

No good in the current implementation of Java generics.
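The reason is that both parameterizations erase to the same raw IAppClient, and javac rejects a class implementing two parameterizations of one generic interface outright (the setService methods would also clash after erasure). The workaround I ended up with is the obvious one already used above - one named client interface per service, paying the boilerplate explicitly. A small sketch with illustrative names :

```java
// Empty marker stand-ins for the two services.
interface IAccountValidationService { }
interface IAuditService { }

// The generic version does not compile :
// interface IAppClient<S> { void setService(S service); }
// class Trade implements IAppClient<IAccountValidationService>,
//                        IAppClient<IAuditService> { ... }   // same erasure - rejected

// The workaround : one explicit, non-generic client interface per service.
interface IAccountValidationClient {
    void setAccountValidationService(IAccountValidationService s);
}
interface IAuditClient {
    void setAuditService(IAuditService s);
}

class TradeClient implements IAccountValidationClient, IAuditClient {
    private IAccountValidationService validation;
    private IAuditService audit;

    public void setAccountValidationService(IAccountValidationService s) { validation = s; }
    public void setAuditService(IAuditService s) { audit = s; }

    IAccountValidationService validation() { return validation; }
    IAuditService audit() { return audit; }
}
```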

In the next part of this series, I will have a look at the non-singleton services and how the new Spring 2.0 scoped beans can provide an elegant solution for some such typical use cases. For now, I need some more caffeine kick .. Spring 2.0.1, the NetBeans platform, the new release of Scala .. too many toys to play around with in my list for now ..

Thursday, December 07, 2006

Domain Driven Design : Service Injection Strategies in Spring and their Pitfalls - Part 1

One of the exciting features that Spring 2.0 offers is the ability to inject dependencies into arbitrary domain objects, even when the domain object has not been created by Spring. This is achieved by using the annotation @Configurable on the domain object. I have blogged on this before and had described how this declarative dependency injection helps us in architecting rich domain models.

@Configurable is based on AspectJ's powerful AOP support, though the user can be blissfully oblivious to the nitty gritties of the implementation. In this post, I would like to discuss some of the pitfalls of @Configurable, as it stands today, with Spring 2.0.1. This will open up a discussion towards other strategies of service injection in domain objects using the combination of Spring DI and aspects. I plan to model this to be a nice little series discussing the various service injection strategies, their applications and their pitfalls. Hence the optimistic Part 1 in the title line. Stay tuned ..

@Configurable - Do I need to care about the implementation ?

The Spring reference documentation says
The @Configurable annotation marks a class as eligible for Spring-driven configuration.

This statement gives us a nice declarative semantics for dependency injection into objects not instantiated by Spring. Cool .. but unfortunately, as they say, the devil is in the details. As a user of this contract, I still need to care about the fact that the semantics of this annotation is implemented using an aspect. Aspects provided by the framework, which weave into application code, always tend to be invasive, and there can be side-effects if I have to plug in my own aspect into the very same domain object. Recently, in one of the domain model implementations, I had to introduce explicit precedence on aspects to get my desired functionality :

public aspect Ordering {
  declare precedence: *..*AnnotationBeanConfigurerAspect*, *;
}

So the implementation of @Configurable may have side-effects on user code. Watch out ..

@Configurable - Not Inheritance Friendly

Spring reference documentation does not mention this explicitly, but the fact is that @Configurable does not handle inheritance correctly. This is related to the implementation mechanism of initialization joinpoints. In case of a pointcut matching an instance with a superclass and a superinterface (e.g. class Implementation extends Parent implements Interface), three initialization joinpoints will be identified - one for the instance (Implementation), one for the superclass (Parent) and one for the superinterface (Interface). Hence the advice will also be executed thrice, once for every matching joinpoint.

@Configurable implementation is based on the AnnotationBeanConfigurerAspect, which is implemented in AspectJ. The following is the corresponding pointcut definition :

public aspect AnnotationBeanConfigurerAspect extends AbstractBeanConfigurerAspect {
  // ...
  protected pointcut beanCreation(Object beanInstance) :
    initialization((@Configurable *).new(..)) && this(beanInstance);
}

while the advice comes from the base aspect :

public abstract aspect AbstractBeanConfigurerAspect extends BeanConfigurerSupport {

  after(Object beanInstance) returning : beanCreation(beanInstance) {
    configureBean(beanInstance);
  }
  // ...
}

Note that the pointcut is based on the initialization joinpoint of the bean instance. If the annotation is used on a class which has one or more parents in the inheritance hierarchy, then the method configureBean() will be executed once for every matching joinpoint. Now, configureBean() is idempotent - hence the end result will be the same. But still there is a performance overhead for multiple executions of the method.


@Configurable
public class Trade {
  // ...
}

@Configurable
public class CashTrade extends Trade {
  // ...
}

For the above example, issuing a new CashTrade() will invoke configureBean() twice on the instance created.
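The double execution is easy to model in plain Java (my own sketch with illustrative names, not the AspectJ weaving itself): emulate the advice firing once per initialization joinpoint in the hierarchy by having each class's constructor call the configurer, and count the invocations.

```java
// Stand-in for the framework's configurer : idempotent in Spring, but the
// counter shows it is still re-executed once per matching joinpoint.
class BeanConfigurer {
    static int invocations = 0;
    static void configureBean(Object bean) {
        invocations++;
    }
}

class Trade {
    Trade() { BeanConfigurer.configureBean(this); }      // Trade's initialization joinpoint
}

class CashTrade extends Trade {
    CashTrade() { BeanConfigurer.configureBean(this); }  // CashTrade's initialization joinpoint
}
```

Instantiating a CashTrade runs both constructors, so the configurer fires twice on the single instance created - the same performance overhead the advice incurs.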

@Configurable and Deserializability

Deserializing a @Configurable object loses all its injected dependencies. In a clustered environment where serializing and deserializing is a common phenomenon, this is a real problem. Currently the only way to overcome this issue is for the user to write a subaspect of AbstractBeanConfigurerAspect, which takes care of restoring the dependencies on deserialization. While this solves the problem for the user, ideally this should have been taken care of by the framework. The good part is that, Ramnivas has already posted a patch for this problem in the Spring JIRA, which solves the problem almost completely. As he mentions in the comment, the solution will be exactly foolproof with a minor change in the next release of AspectJ.

Apart from the above 3 pitfalls, there are some other problems with @Configurable related to bean location failures for AnnotationBeanConfigurerAspect in hierarchical contexts. Compared to the above three, this is a more specialized occurrence since hierarchical contexts are comparatively rare.

Next time when you use @Configurable, keep an eye on these gotchas. While none of these problems are insurmountable and we will have the fixes in future releases, these are the pitfalls that we need to consider today when deciding on the service injection strategy for domain objects. In the next part of this series, we will look at yet another strategy for service injection, which uses Spring IoC to inject dependencies into aspects instead of objects. There are situations where we may prefer the latter approach over the more user-friendly @Configurable, but that is the subject of another post, another day ..

Tuesday, December 05, 2006

Shop During Office Hours - It's Google Business Model in Action

Here is one straight from the Official Google Blog ..

Googler Tom Oliveri of the Google Checkout team blogs about Google encouraging its employees to go on an online shopping spree on Cyber Monday. The employees, delighted by the discounts available through Google Checkout, definitely had a great feel-good factor for their employer. Which other employer on earth would encourage its employees to use office working hours for an online shopping day out ?

It's Google, and this is yet another great deployment of their innovative business model ..

Monday, November 27, 2006

Threadless Concurrency on the JVM - aka Scala Actors

Billy has written an interesting blog on the impact of multicore processors on Java. He concludes that the Java EE platform will have to be redressed to some extent in order to address the new threading patterns that applications will use and the consequences of reduced clock speed to accommodate the extra cores on the die. He has made some very thoughtful observations regarding the evolution of the future Java EE platforms and the JVM. Definitely worth a couple of reads ..

Concurrency Concurrency

One of the commonly mentioned fallouts of the new processor architectures is the new face of the applications written on the JVM. In order to take performance advantage from the multiple cores, applications need to be more concurrent, programmers need to find more parallelism within the application domain. Herb Sutter sums it up nicely in this landmark article :
But if you want your application to benefit from the continued exponential throughput advances in new processors, it will need to be a well-written concurrent (usually multithreaded) application. And that’s easier said than done, because not all problems are inherently parallelizable and because concurrent programming is hard.

Look Maa ! No Threads !

Writing multi-threaded code is hard, and, as the experts say, the best way to deal with multi-threading is to avoid it. The two dominant paradigms of concurrency available in modern day languages are :

  • Shared State with Monitors, where concurrency is achieved through multiple threads of execution synchronized using locks, barriers, latches etc.

  • Message Passing, which is a shared-nothing model using asynchronous messaging across lightweight processes or threads.

The second form of concurrent programming offers a higher level of abstraction where the user does not have to interact directly with the lower level primitives of thread models. Erlang supports this model of programming and has been used extensively in the telecommunications domain to achieve a great degree of parallelism. Java supports the first model, much to the horror of many experts in the domain, and unless you are Brian Goetz or Doug Lea, designing concurrent applications in Java is hard.
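Even in Java, the message passing style can be approximated with java.util.concurrent alone - a sketch (my own hand-rolled illustration, not Scala's actor library): the worker owns its state privately and the outside world communicates solely by posting messages to its mailbox.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

// A shared-nothing counter "process" : no locks in user code, only messages.
class CounterProcess implements Runnable {
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<String>();
    private final BlockingQueue<Integer> replies = new SynchronousQueue<Integer>();

    void send(String msg) { mailbox.offer(msg); }

    int askValue() throws InterruptedException {
        mailbox.offer("value");          // request the value ..
        return replies.take();           // .. and block for the reply
    }

    public void run() {
        int value = 0;                   // state private to this process
        try {
            while (true) {
                String msg = mailbox.take();
                if (msg.equals("incr")) value++;
                else if (msg.equals("value")) replies.put(value);
                else if (msg.equals("stop")) return;
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Note that this still burns one JVM thread per process - exactly the footprint problem that Erlang's lightweight processes, and Scala's event-based actors below, avoid.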

Actors on the JVM

Actor based concurrency in Erlang is highly scalable and offers a coarser-grained programming model to developers. Have a look at this presentation by Joe Armstrong which illustrates how the share-nothing model, lightweight processes and asynchronous messaging support make Erlang a truly Concurrency Oriented Programming Language. The presentation also gives us some interesting figures - an Erlang based Web server supported more than 80,000 sessions while Apache crashed at around 4,000.

The new kid on the block, Scala, brings Erlang style actor based concurrency to the JVM. Developers can now design scalable concurrent applications on the JVM using Scala's actor model, which will automatically take advantage of multicore processors, without programming to the complicated thread model of Java. In applications which demand a large number of concurrent processes over a limited amount of memory, the threads of the JVM prove to have a significant footprint because of stack maintenance overhead and locking contention. Scala actors provide an ideal model for programming in the non-cooperative virtual machine environment. Coupled with the pattern matching capabilities of the Scala language, we can have the full power of Erlang style concurrency on the JVM. The following example is from this recent paper by Philipp Haller and Martin Odersky:

class Counter extends Actor {
  override def run(): unit = loop(0)

  def loop(value: int): unit = {
    Console.println("Value: " + value)
    receive {
      case Incr() => loop(value + 1)
      case Value(a) => a ! value; loop(value)
      case Lock(a) => a ! value
        receive { case UnLock(v) => loop(v) }
      case _ => loop(value)
    }
  }
}

and its typical usage also from the same paper :

val counter = new Counter // create a counter actor
counter.start() // start the actor
counter ! Incr() // increment the value by sending the Incr() message
counter ! Value(this) // ask for the value

// and get it printed by waiting on receive
receive { case cvalue => Console.println(cvalue) }

Scala Actors

In Scala, actors come in two flavors -

  1. Thread based actors, which offer a higher-level abstraction over threads, replacing error-prone shared memory accesses and locks with asynchronous message passing, and

  2. Event based actors, which are threadless and hence offer the enormous scalability that we get with Erlang's actors.

As the paper indicates, event based actors offer phenomenal scalability when benchmarked against thread based actors and thread based concurrency implementations in Java. The paper also demonstrates the power of library-based design of concurrency abstractions - Scala itself contains no language support for concurrency beyond the standard thread model offered by the host environment.

I have been playing around with Scala for quite some time and have been thoroughly enjoying the innovations that the language offers. The actor based concurrency model is a definite addition to this list, more so since it promises to be a great feature that programmers would love to have in their toolbox while implementing on the JVM. The JVM is where the future is, and event based actors in Scala will definitely be one of the things to watch out for ..

Monday, November 20, 2006

Use Development Aspects to Enforce Concurrency Idioms in Java Applications

Java 5 has given us a killer concurrency library in java.util.concurrent. Mustang will add more ammunition to an already performant landscape - now it is up to developers to use it effectively for the best yield. If you are not Doug Lea and have been getting your hands dirty with the Executors, Latches and Barriers of java.util.concurrent, I am sure you have realized that killer libraries also need quality programmers to deliver the goods. I have been thinking of ways to get some of the concurrency goodies into existing Java applications that have recently migrated to the Java 5 platform and have still been struggling with performance problems in multithreaded programs. Of course, for production environments, we cannot redesign things from scratch, however ingenious a solution we promise to deliver. However, in one of these recently migrated applications, it was one of my charters to do a code review and suggest a path of least resistance that could introduce some of the improvements of java.util.concurrent without any major redesign.

I decided to approach the problem by trying to find some of the obvious concurrency related problems that Brian Goetz has been harping upon in his Java Theory and Practice columns on IBM developerWorks. The codebase was huge and had evolved over the last 3 years under the auspices of a myriad of programmers - hence I decided to equip myself with the most important crosscutting weapon, a range of development aspects that could point me to some of the possible problem areas. There were some false positives, but overall the strategy worked .. the following post highlights some of them, with representative code snippets for illustration.

Safe Publication of Objects #1

Do not allow the this reference to escape during construction. One typical scenario where programmers make this mistake is starting a thread from within a constructor. As Brian Goetz has pointed out, it is a definite anti-pattern. I wrote a small aspect for detecting this :

public aspect TrackEscapeWithThreadStartInConstructor {
  pointcut inConstructor() : initialization(*.new(..));
  pointcut threadStart() : call(void Thread.start());

  before() : cflow(inConstructor()) && threadStart() {
    throw new RuntimeException(
      "possible escape of this through thread start in constructor");
  }
}

and Eclipse was quick to point me to the offending classes :
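For illustration, here is the kind of offending class such an aspect flags (a hypothetical example, not from the reviewed codebase) - the constructor starts a thread whose Runnable captures this, so the thread may observe the object before construction completes :

```java
class EscapingListener {
    private final String name;

    EscapingListener(String name) {
        // `this` escapes here: the anonymous Runnable holds a reference
        // to the enclosing instance and may run before `name` is set
        new Thread(new Runnable() {
            public void run() {
                // may observe `name` as null if it wins the race
                System.out.println("listening as: " + getName());
            }
        }).start();
        this.name = name;  // assigned *after* the thread has started
    }

    String getName() {
        return name;
    }
}
```

The standard fix is a private constructor plus a static factory method that finishes construction first and only then calls start().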

Safe Publication of Objects #2

Another anti-pattern that lets the this reference escape from the constructor is when the programmer calls a non-static instance method from within the constructor of a class. Here is a small aspect that catches this anti-pattern :

public aspect TrackEscapeWithMethodCallInConstructor {
  pointcut inConstructor(Object o)
   : target(o)
    && withincode(*.new(..));

  pointcut callInstanceMethod()
   : call(!private !static * *.*(..));

  before(Object o) : inConstructor(o)
    && callInstanceMethod()
    && if (o.equals(thisJoinPoint.getTarget())) {

    throw new RuntimeException(
      "possible escape of this through instance method call in constructor");
  }
}

and Eclipse responds :
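To see why this matters, consider a (hypothetical) superclass whose constructor calls an overridable method - the override runs in the subclass before the subclass's own field initializers have executed :

```java
import java.util.ArrayList;
import java.util.List;

class Base {
    Base() {
        init();  // virtual call from a constructor: dispatches to the subclass
    }
    void init() { }
}

class Derived extends Base {
    static boolean sawUninitializedField;

    // field initializers run only *after* Base's constructor returns
    private List<String> items = new ArrayList<String>();

    void init() {
        // `items` is still null at this point
        sawUninitializedField = (items == null);
    }
}
```

Constructing a Derived records that init() observed items as null - exactly the class of bug the aspect surfaces.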

Using the New Concurrent Collections

Java 5 and Java 6 offer concurrent collection classes as definite improvements over the synchronized collections, and they can be used as drop-in replacements in most cases. The older synchronized collection classes serialize all access to the collection, resulting in poor concurrency. The new ones (ConcurrentHashMap, CopyOnWriteArrayList etc.) offer better concurrency through locking at a finer level of granularity. If traversal is the dominant operation on the collection, the new classes can offer dramatic scalability improvements with little risk - for more information refer to Brian Goetz's excellent book Java Concurrency in Practice.

I wrote the following aspect which pointed me to all possible uses of the synchronized collections in the codebase :

public aspect TrackSynchronizedCollection {
  pointcut usingSyncCollection() :
   call(public * java.util.Collections.synchronized*(..));

  declare warning
    : usingSyncCollection()
    : "consider replacing with Concurrent collections of Java 5";
}

We did a careful review and replaced many of those occurrences with the newer collections - and we were able to achieve significant performance gains in some situations.
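As a sketch of what such a replacement looks like (the class and method names here are invented for illustration), a map built with Collections.synchronizedMap becomes a ConcurrentHashMap, which additionally offers atomic check-then-act operations :

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class HitCounter {
    // before: a single lock serializes every access
    private final Map<String, Integer> legacy =
        Collections.synchronizedMap(new HashMap<String, Integer>());

    // after: finer-grained locking plus atomic putIfAbsent/replace
    private final ConcurrentMap<String, Integer> hits =
        new ConcurrentHashMap<String, Integer>();

    void record(String page) {
        hits.putIfAbsent(page, 0);
        // replace(k, old, new) is an atomic compare-and-set, so retry on failure
        for (;;) {
            Integer old = hits.get(page);
            if (hits.replace(page, old, old + 1)) {
                return;
            }
        }
    }

    int count(String page) {
        Integer c = hits.get(page);
        return c == null ? 0 : c;
    }
}
```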

Handling the InterruptedException

In many places within the application that deal with blocking APIs and multithreading, I found empty catch blocks as handlers of InterruptedException. This is not recommended, as it deprives code higher up the call stack of the opportunity to act on the interruption - once again, refer to Brian Goetz for details.
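The two acceptable policies - propagate the exception, or restore the interrupt status so code further up the stack can see it - can be sketched as follows (the class and queue here are invented for illustration) :

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

class TaskReader {
    private final BlockingQueue<String> queue =
        new LinkedBlockingQueue<String>();

    // policy 1: declare it, and let the caller decide
    String take() throws InterruptedException {
        return queue.take();
    }

    // policy 2: swallow it, but restore the interrupt status first,
    // so code higher up the call stack can still observe the interruption
    String pollQuietly() {
        try {
            return queue.poll(10, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();  // do NOT leave the flag cleared
            return null;
        }
    }
}
```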

However, a small aspect allows us to get to these offending points :

public aspect TrackEmptyInterruptedExceptionHandler {
  pointcut inInterruptedExceptionHandler()
   : handler(InterruptedException+);

  declare warning
    : inInterruptedExceptionHandler()
    : "InterruptedException handler policy may not be defined";

  before() : inInterruptedExceptionHandler() {
    // optionally flag the handler location at runtime as well
  }
}

Development aspects are a great tool for refactoring and code review. I realized this first hand in the above exercise, where I succeeded in identifying some of the anti-patterns of writing multi-threaded programs in Java and enforcing some of the common idioms and best practices across the codebase. The above aspects only scratch the surface - in fact, a library of development aspects would be a great asset to developers at large.

Monday, November 13, 2006

RIA, Echo2 and Programming Model

We, at Anshinsoft, have been working on our offering in the Reporting Solutions space - a WYSIWYG Report Designer and a full blown Enterprise Report Server. The Designer is a desktop Swing based application for designing reports, which can then be deployed, managed and scheduled in the Enterprise Server.

As an additional plug-in, we would also like to have the Designer on the Web using the in-fashion RIA and Ajax architectural styles, which will give users the usual flexibility of a thin client application along with the extra richness these styles add. I have been exploring some of the architectural options towards this end, keeping in mind some of the constraints that we, as an organization, have :

  • The current Designer has been implemented in Swing - we have a large programmer base who have delivered Java Swing based UIs and are familiar with the Swing programming model.

  • We do not have many Javascript programmers and as an organization are not equipped well enough to take up the seemingly daunting task of crafting a Javascript based architecture (aka client side Ajax)

  • I am comfortable with the first part of the previous point - that we have a large Java programmer base. But based on the tonnes of Swing code that have been churned out in the current implementation, I am very skeptical about the maintainability and the aesthetics of the codebase. In one of my previous posts, I had expressed my dislike of the Swing based programming model, which encourages lots of boilerplate stuff. Hence, I would like to move away from the naked model of Swing programming.

Enter Echo2

Based on the above constraints, I did some research and ultimately arrived at Echo2 as the suggested platform for implementing the Web based Designer with a rich UI. The main pluses that made me go with Echo2 are :

  • Completely Java based programming model, which nicely fits into the scheme of our organization expertise fabric.

  • Swing-like APIs, which again score with respect to the familiarity metrics of the team.

  • Echo2 nicely integrates with Spring, which we use as the business layer.

  • I also considered GWT, but ultimately settled on Echo2, because GWT is still in beta and does not offer the same richness in its set of pre-built components.


The main concern that I have with Echo2 is, once again, related to the programming model - I am not a big admirer of the naked Swing model of programming. And here is the main gotcha .. I have been thinking of the following possibilities that can give me a more improved programming model on Echo2 :

  • Think MDA - use Eclipse EMF and openArchitectureWare to design models of the user interfaces and generate Echo2 code from them. Then I maintain the models, which looks much more pragmatic than maintaining a huge boilerplate-style codebase.

  • Has someone written something similar to Groovy's SwingBuilder for Echo2, which I could use as a markup ?

  • Use some sort of Language Oriented Programming, maybe a small DSL using the JetBrains Meta Programming System (MPS).

  • Write a homegrown abstraction layer on top of Echo2 that incorporates FluentInterfaces like goodies and offers a better programming model.

I would really like to go for a more declarative programming model - in Echo2, the navigation and flow logic are completely embedded within the rendering part. Can I externalize them without writing lots of framework code ?

Why not WebOnSwing or similar stuff ?

I do not want to deploy the existing Swing application - it has evolved over the years and it is time we move on to a higher level of abstraction and capitalize on richer features and responsiveness of Ajax frameworks.

I would like to seek suggestions from experts on the above views. Any pointers, any suggestions that will help us make a positive move will be most welcome!

Monday, November 06, 2006

Domain Abstractions : Abstract Classes or Aspect-Powered Interfaces ?

In my last post I discussed why we should use abstract classes instead of pure Java interfaces to model a behavior-rich domain abstraction. My argument was that rich domain abstractions have to honor various constraints which cannot be expressed explicitly by pure interfaces in Java. Hence leaving all constraints to implementers may lead to replication of the same logic across multiple implementations - a clear violation of the DRY principle.

However, lots of people expressed their opinions through comments on my blog and an interesting discussion on the Javalobby forum, where I found many of them to be big proponents of *pure* Java interfaces. All of them view Java interfaces as contracts of the abstraction and would happily undertake the pain of extreme refactoring in order to accommodate changes in the *published* interfaces. This post takes a fresh look from the world of interfaces and achieves (almost) the same effect as the one described using abstract classes.

Interfaces Alone Don't Make the Cut!

Since pure Java interfaces cannot express any behavior, it is not possible to express any constraint using interfaces alone. Enter aspects .. we can express the same behavioral constraints using aspects along with interfaces.

Continuing with the same example from the previous post :-

The interface ..

interface IValueDateCalculator {
  Date calculateValueDate(final Date tradeDate)
      throws InvalidValueDateException;
}

and a default implementation ..

public class ValueDateCalculator implements IValueDateCalculator {
  public Date calculateValueDate(Date tradeDate)
      throws InvalidValueDateException {
    // implementation
  }
}

Use aspects to power up the contract with mandatory behavior and constraints ..

public aspect ValueDateContract {

  pointcut withinValueDateCalculator(Date tradeDate) :
      target(IValueDateCalculator) &&
      args(tradeDate) &&
      call(Date calculateValueDate(Date));

  Date around(Date tradeDate) : withinValueDateCalculator(tradeDate) {

    if (tradeDate == null) {
      throw new IllegalArgumentException("Trade date cannot be null");
    }

    Date valueDate = proceed(tradeDate);

    try {
      if (valueDate == null) {
        throw new InvalidValueDateException(
          "Value date cannot be null");
      }

      if (valueDate.compareTo(tradeDate) == 0
          || valueDate.before(tradeDate)) {
        throw new InvalidValueDateException(
          "Value date must be after trade date");
      }
    } catch(Exception e) {
      // handle
    }

    return valueDate;
  }
}

Is this Approach Developer-friendly ?

Aspects, being looked upon as an artifact with the *obliviousness* property, are best kept completely decoupled from the interfaces. Yet, the complete configuration of a contract for a module can be known only with the full knowledge of all aspects that weave together with the interface. Hence we have the experts working on how to engineer aspect-aware-interfaces as contracts for the modules of a system, yet maintaining the obliviousness property.

When extending an abstract class, the constraints are always localized within the superclass for the implementer; in this case, the aspects which encapsulate the constraints may not be "textually local". Hence it may be difficult for the implementer to be aware of these constraints without strong tooling support .. However, Eclipse 3.2 along with AJDT shows all the constraints and advice associated with the aspect :

at aspect declaration site ..

and at aspect call site ..

Also, aspects do not provide as much fine-grained control over object behavior as in-situ Java classes. You can put in before(), after() and around() advice, but I still think abstract classes allow more fine-grained control over assertions, invariants and behavior parameterization.

Dependencies, Side-effects and Undocumented Behaviors

In response to my last post, Cedric had expressed the concern that, with the approach of modeling domain abstractions using abstract classes with mandatory behavioral constraints :

Before you know it, your clients will be relying on subtle side-effects and undocumented behaviors of your code, and it will make future evolution much harder than if you had used interfaces from the start.

I personally feel that for mandatory behaviors, assertions and invariants, I would like to have all concrete implementations *depend* on the abstract class - bear in mind that *only* those constraints go to the abstract class which are globally true and must be honored for all implementations. And regarding unwanted side-effects, of course, it is not desirable and often depends on the designer.

From this point of view, the above implementation using interfaces and aspects also suffers from the same consequences - implementers depend on concrete aspects and are always susceptible to unwanted side-effects.

Would love to hear what others feel about this ..

Tuesday, October 31, 2006

Domain Classes or Interfaces ?

A few days back I had initiated a thread on the Domain Driven Design mailing list regarding the usage of pure Java interfaces as the contract for domain objects. The discussion turned out to be quite interesting, with Sergio Bossa pitching in as the main supporter of using pure interfaces in the domain model. Personally I am not a big camp follower of the pure-interfaces-as-intention-revealing paradigm - however, I enjoyed the discussion with Sergio and the other participants of the group. Sergio has posted the story with his thoughts on the subject. The current entry is a view from the opposite camp, and not really a Java pure-interface love affair.

The entire premise of Domain Driven Design is based upon evolving a domain model as the cornerstone of the design activity. A domain model consists of domain level abstractions, which build upon intention revealing interfaces, built out of the Ubiquitous Language. And when we talk about abstractions, we talk about data and the associated behavior. The entire purpose behind DDD is to manage the complexity in the modeling of these abstractions, so that we have a supple design that can be carefully extended by implementers and easily used by other clients. In the process of extension, the designer needs to ensure that the basic assumptions or behavioral constraints are never violated and the abstractions' published interfaces always honor the basic contractual framework (pre-conditions, post-conditions and invariants). Eric Evans never meant Java interfaces when he talked about intention-revealing-interfaces - what he meant was more in terms of contract or behavior, to be modeled with the most appropriate artifact available in the language of implementation.

Are Java interfaces sufficiently intention-revealing ?

The only scope that the designer has to reveal the intention is through the naming of the interface and its participating methods. Unfortunately Java interfaces are not rich enough to model any constraints or aspects that can be associated with the published APIs (see here for some similar stuff in C#). Without resorting to non-native techniques, it is never possible to express the basic constraints that must be honored by every implementation of the interface. Let us take an example from the capital market domain :

interface IValueDateCalculator {
  Date calculateValueDate(final Date tradeDate)
      throws InvalidValueDateException;
}

The above interface is in compliance with all criteria for an intention-revealing-interface. But does it provide all the necessary constraints that an implementor needs to be aware of ? How do I specify that the calculated value-date should be a business date after the trade date and must be at least three business days ahead of the input trade-date ? Pure Java interfaces do not allow me to specify any such criteria. Annotations cannot help either, since annotations on an interface are not inherited by its implementations.

Make this an abstract class with all constraints and a suitable hook for the implementation :

abstract class ValueDateCalculator {
  public final Date calculateValueDate(final Date tradeDate)
      throws InvalidValueDateException {
    Date valueDate = doCalculateValueDate(tradeDate);
    if (DateUtils.before(valueDate, tradeDate)) {
      throw new InvalidValueDateException("...");
    }
    if (DateUtils.dateDifference(valueDate, tradeDate) < 3) {
      throw new InvalidValueDateException("...");
    }
    // check other post conditions
    return valueDate;
  }

  // hook to be implemented by subclasses
  protected abstract Date doCalculateValueDate(final Date tradeDate)
      throws InvalidValueDateException;
}

The above model checks all constraints that need to be satisfied once the implementation calculates the value-date through overriding the template method. On the contrary, with pure interfaces (the first model above), in order to honor all constraints, the following alternatives are available :

  • Have an abstract class implementing the interface, which will have the constraints enforced. This results in an unnecessary indirection without any value addition to the model. The implementers are supposed to extend the abstract class (which anyway makes the interface redundant), but, hey, you cannot force them. Some adventurous soul may prefer to implement the interface directly, and send all your constraints for a toss!

  • Allow multiple implementations to proliferate, each with its own version of the constraint implementations - a clear violation of DRY.

  • Leave everything to the implementers, document all constraints in Javadoc and hope for the best.

Evolving Your Domain Model

Abstract classes provide for easy evolution of the domain model. The process of domain modeling is iterative and evolutionary. Hence, once you publish your APIs, you need to honor their immutability, since all published APIs will potentially be used by various clients. Various schools of thought adopt different techniques towards achieving this immutability. The Eclipse development team uses extension of interfaces (the Extension Object design pattern) and evolves its design by naming extended interfaces suffixed with a number - the I*2 pattern of interface evolution. Have a look at this excellent interview with Erich Gamma for details on this scheme of evolution. While effective in some situations where you need to implement multiple inheritance, I am not a big fan of this technique for evolving domain abstractions - firstly, this technique does not scale, and secondly, it requires an instanceof check in client code, which is a code-smell, as the gurus say.
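A tiny sketch of the I*2 style (hypothetical shape interfaces, not actual Eclipse code) shows both the pattern and the instanceof check it forces on clients :

```java
// the original published interface stays frozen forever
interface IShape {
    double area();
}

// capability added in a later release, I*2 style
interface IShape2 extends IShape {
    double perimeter();
}

class Circle implements IShape2 {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
    public double perimeter() { return 2 * Math.PI * r; }
}

class ShapeClient {
    // clients must probe for the extension - the code-smell in question
    static double perimeterOf(IShape s) {
        if (s instanceof IShape2) {
            return ((IShape2) s).perimeter();
        }
        return Double.NaN;  // capability not supported
    }
}
```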

Once again, to support smooth evolution of your domain APIs, you need to back your interfaces with an abstract class implementation and have the implementers program to the abstract class, not the interface. Then what good is the interface for ?

Are Interfaces Useless in DDD ?

Certainly not. I will use pure interfaces to support the following cases :

  • Multiple inheritance, particularly mixin implementations

  • SPIs, since they will always have multiple implementations and fairly disjoint ones too. The service layer is one which is a definite candidate for interfaces. This layer needs easy mocking for testability, and interfaces fit this context like a charm.

Some of the proponents of using interfaces claim testability as a criterion for interface based design, because of the ease of mockability. Firstly, I am not sure if domain objects can be tested effectively using mocking. Mocking is most suitable for the services and SPIs and I am a strong supporter of using interfaces towards that end. Even with classes, EasyMock supports mocking using CGLIB and proxies.

Finally ...

I think abstract classes provide a much more complete vehicle for implementation of behavior rich domain abstractions. I prefer to use interfaces for the SPIs and other service layers which tend to have multiple implementations and need easy mocking and for situations where I need multiple inheritance and mixin implementations. I would love to hear what the experts have to say on this ..

Monday, October 23, 2006

Why OOP Alone in Java is Not Enough

Object-oriented languages have taught us to think in terms of objects (or nouns) and Java is yet another example of the incarnation of the noun land. When was the last time you saw an elegant piece of Swing code ? Steve Yegge is merciless when he rants about it .. and rightly so ..
Building UIs in Swing is this huge, festering gob of object instantiations and method calls. It's OOP at its absolute worst.

There are ways of making OOP smart - we have fluent interfaces, OO design patterns, AOP and higher levels of abstraction similar to those of DSLs. But the real word is *productivity*, and the language needs to make its user elegantly productive. Unfortunately in Java, we often find people generating reams of boilerplate (aka getters and setters) that look like pureplay copy-paste stuff. Java abstractions thrive on the evil loop of the 3 C's - create-construct-call - along with liberal litterings of getters and setters. You create a class, declare 5 read-write attributes and you have a pageful of code before you throw in a single piece of actual functionality. Object orientation rightly discourages public attributes and restricts the visibility of implementation details, but that should not prevent the language from providing elegant constructs to handle the boilerplate. Ruby does this, and does it with elan.

Java is not Orthogonal

Paul Graham in On Lisp defines orthogonality of a language as follows :
An orthogonal language is one in which you can express a lot by combining a small number of operators in a lot of different ways.

He goes on to explain how the complement function in Lisp got rid of half of the *if_not* functions from pairs like [remove-if, remove-if-not], [subst-if, subst-if-not] etc. Similarly, in Ruby we can have the following orthogonal usage of the "*" operator across data types :

"Seconds/day: #{24*60*60}" will give Seconds/day: 86400
"#{'Ho! '*3}Merry Christmas!" will give Ho! Ho! Ho! Merry Christmas!

C++ supports operator overloading, which is also a minimalistic way to extend your operator usage.

In order to bring some amount of orthogonality to Java, we have lots of frameworks and libraries. This is yet another problem of dealing with an impoverished language - you have a proliferation of libraries and frameworks which add unnecessary layers to your codebase and tend to collapse under their own weight.

Consider the following code in Java to find a matching sub-collection based on a predicate :

class Song {
  private String name;
  public String getName() { return name; }
  // ...
}

// ...
Collection<Song> songs = new ArrayList<Song>();
// ...
// populate songs
// ...
String title = ...;
Collection<Song> sub = new ArrayList<Song>();
for(Song song : songs) {
  if (song.getName().equals(title)) {
    sub.add(song);
  }
}

The Jakarta Commons Collections framework adds orthogonality by defining abstractions like Predicate, Closure, Transformer etc., along with lots of helper methods like find(), forAllDo(), select() that operate on them, which help users do away with boilerplate iterators and for-loops. For the above example, the equivalent will be :

Collection sub = CollectionUtils.select(songs, new Predicate() {
  public boolean evaluate(Object o) {
    return ((Song) o).getName().equals(title);
  }
});

Yuck !! We have got rid of the for-loop, but at the expense of ugly syntax, loads of statics and a loss of the type-safety we take pride in with Java. Of course, in Ruby we can do this with much more elegance and less code :

@songs.find {|song| title == song.name }

and this same syntax and structure will work for all sorts of collections and arrays which can be iterated. This is orthogonality.

Another classic example of non-orthogonality in Java is the treatment of arrays as compared to other collections. You can initialize an array as :

String[] animals = new String[] {"elephant", "tiger", "cat", "dog"};

while for Collections you have to fall back to the ugliness of explicit method calls :

Collection<String> animals = new ArrayList<String>();
animals.add("elephant");
animals.add("tiger");
animals.add("cat");
animals.add("dog");

Besides, arrays have always been second class citizens in the Java OO land - they support covariant subtyping (which is unsafe, hence all runtime checks have to be done), cannot be subclassed and are not extensible, unlike other collection classes. A classic example of non-orthogonality.
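The unsafe covariance is easy to demonstrate - the array version compiles cleanly and fails at runtime, while the equivalent mistake with a generic collection is rejected at compile time :

```java
class CovarianceDemo {
    static boolean storeFails() {
        Object[] objs = new String[2];       // legal: arrays are covariant
        objs[0] = "ok";
        try {
            objs[1] = Integer.valueOf(42);   // throws ArrayStoreException
            return false;
        } catch (ArrayStoreException e) {
            return true;
        }
    }

    // the generic collection version of the same mistake does not compile:
    // List<String> list = new ArrayList<String>();
    // list.add(Integer.valueOf(42));        // compile-time error
}
```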

Initialization syntax ugliness and the lack of literals syntax support have been among the major failings of Java - Steve Yegge has documented it right down to the last bit.

Java and Extensibility

Being an OO language, Java supports extension of classes through inheritance. But once you define a class, there is no scope for extensibility at runtime - you cannot define additional methods or properties. AOP has been in style of late, and has proved quite effective as an extension tool for Java abstractions. But, once again, it is NOT part of the language and hence does not enrich the Java language semantics. There is no meta-programming support which could make Java friendlier for DSL adoption. Look at this excellent example from a recent blog post :

Creating some test data for building a tree, the Java way :

Tree a = new Tree("a");

Tree b = new Tree("b");
Tree c = new Tree("c");

Tree d = new Tree("d");
Tree e = new Tree("e");

Tree f = new Tree("f");
Tree g = new Tree("g");
Tree h = new Tree("h");

// wire up the tree (assuming a simple addChild method)
a.addChild(b); a.addChild(c);
b.addChild(d); b.addChild(e);
c.addChild(f); c.addChild(g); c.addChild(h);

and the Ruby way :

tree = a {
      b { d e }
      c { f g h }
    }

It is really this simple - of course you have the meta-programming engine backing you for creating this DSL. What this implies is that, with Ruby you can extend the language to define your own DSL and make it usable for your specific problem at hand.

Java Needs More Syntactic Sugars

Any Turing complete programming language allows programmers to implement similar functionality. Java is a Turing complete language, but still does not do enough to boost programmer productivity. Brevity is an important feature of a language, and modern day languages like Ruby and Scala offer a lot in that respect. Syntactic sugar is just as important in helping programmers write concise implementations. Over the last year or so, we have seen lots of syntactic sugar being added to C# in the form of Anonymous Methods, Lambdas, Expression Trees and Extension Methods. I think Java is lagging behind a lot in this respect. The enhanced for-loop is a step in the right direction. But Sun would do the Java community a world of good by offering other syntactic sugar like automatic accessors, closures and lambdas.

Proliferation of Libraries

In order to combat Java's shortcomings at complexity management, over the last five years or so we have seen a proliferation of libraries and frameworks that claim to improve programmer productivity. I gave an example above which proves that there is no substitute for language elegance. These so called productivity enhancing tools are layers added on top of the language core, and have mostly been delivered as generic solutions to generic problems. There you are .. a definite case of Frameworkitis. Boy, I need to solve this particular problem - why should I incur the overhead of all the generic implementations ? Think DSL - my language should allow me to carve out a domain specific solution using a domain specific language. This is where Paul Graham positions Lisp as a programmable programming language. I am not saying all Java libraries are crap; believe me, some of them really rock - java.util.concurrent is one of the most significant value additions to Java ever, and AOP is the closest approximation to meta-programming in Java. Still, I feel many of them would not have been there had Java been more extensible.

Is it Really Static Typing ?

I have been thinking really hard about this issue of the lack of programmer productivity with Java - is static typing the main issue ? Or is it the lack of meta-programming features and the ability that languages like Ruby and Lisp offer to treat code and data interchangeably ? I think it is a combination of both - besides, Java does not support first class functions, does not have closures as yet, and lacks some of the other productivity tools like parallel assignment, multiple return values, user-defined operators and continuations that make a programmer happy. Look at Scala today - it definitely has all of them, and supports static typing as well.

In one of the enterprise Java projects that we are executing, the Maven repository has reams of third party jars (mostly open source) that claim to do a better job of complexity management. I know Ruby is not enterprise ready, and Lisp never claimed to deliver performance in a typical enterprise business application; Java does the best under the current circumstances. And the strongest point of Java is the JVM, possibly the best under the Sun. Initiatives like Rhino integration, JRuby and Jython are definitely steps in the right direction - we would all love to see the JVM evolve into a friendly nest for dynamic languages. The other day, I was listening to Gilad Bracha's session on "Dynamically Typed Languages on the Java Platform" delivered at Lang .NET 2006. He discussed the invokedynamic bytecode and hotswapping to be implemented in the near future on the JVM. Possibly this is the future of the Java computing platform - it is the JVM that holds more promise for the future than the core Java programming language.

Monday, October 09, 2006

AOP : Writing Expressive Pointcuts

Aha .. yet another rant on AOP and pointcuts, this time expressing some of the concerns with the most important aspect of aspects - the pointcut descriptors. In order for aspects to become first class citizens of the domain modeling community, pointcut descriptors will have to be much more expressive than they are today in AspectJ. Taking an example from one of the threads in the "aspectj-users" forum, AOP expert Dean Wampler himself mixed up call( @MyAnnotation *.new(..) ) and call( (@MyAnnotation *).new(..) ) while answering a query from another user. While the former pointcut matches all constructors annotated with @MyAnnotation, the latter matches constructors in classes where the class itself carries the annotation.
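The distinction those two pointcuts draw can be made concrete with plain reflection - a sketch using a hypothetical @MyAnnotation (runtime retention is needed for reflection to see it):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.CONSTRUCTOR, ElementType.TYPE})
@interface MyAnnotation {}

// Matched by call( @MyAnnotation *.new(..) ): the constructor carries the annotation.
class AnnotatedCtor {
    @MyAnnotation
    AnnotatedCtor() {}
}

// Matched by call( (@MyAnnotation *).new(..) ): the class itself carries it.
@MyAnnotation
class AnnotatedType {
    AnnotatedType() {}
}

class PointcutDemo {
    public static void main(String[] args) throws Exception {
        boolean ctorAnnotated = AnnotatedCtor.class
                .getDeclaredConstructor()
                .isAnnotationPresent(MyAnnotation.class);
        boolean typeAnnotated = AnnotatedType.class
                .isAnnotationPresent(MyAnnotation.class);
        System.out.println(ctorAnnotated + " " + typeAnnotated); // true true
    }
}
```

Two entirely different join point sets, distinguished only by a pair of parentheses in the pointcut text - which is precisely the expressiveness problem.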

This is, at best, confusing - the syntax is not expressive, and the liberal sprinkling of position dependent wild card characters poses a real challenge to beginners in AspectJ. Dean has some suggestions in his blog for making pointcut languages more expressive - as he has pointed out, the solution is to move towards a flexible DSL for writing pointcuts in AspectJ.

What we write today as :

execution(public !static * *(..))

can be expressed more effectively as :
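As a purely hypothetical illustration (this is my sketch, not Dean's actual proposal), a fluent Java-style builder could let the same pointcut read almost like prose while assembling the raw AspectJ string underneath:

```java
// A hypothetical fluent builder that assembles an AspectJ pointcut string.
class Pointcut {
    private final StringBuilder sb = new StringBuilder();

    static Pointcut executionOf() { return new Pointcut(); }

    Pointcut publicMethods() { sb.append("public ");  return this; }
    Pointcut nonStatic()     { sb.append("!static "); return this; }
    Pointcut anySignature()  { sb.append("* *(..)");  return this; }

    String toPointcut() { return "execution(" + sb + ")"; }

    public static void main(String[] args) {
        String pc = Pointcut.executionOf()
                            .publicMethods()
                            .nonStatic()
                            .anySignature()
                            .toPointcut();
        System.out.println(pc); // execution(public !static * *(..))
    }
}
```

The intent - public, non-static, any signature - is now spelled out in words instead of positional wild cards.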


The experts need to work out the complete DSL to make life easier for the beginners.

Pointcuts can be Intrusive

If not properly designed, pointcuts can directly bite into the implementation of abstractions. Consider the following example from the classic An Overview of AspectJ paper by Kiczales et al. :

interface FigureElement {
  void incrXY(int x, int y);
}

class Point implements FigureElement {
  int x, y;
  // ...
}

Now consider the following two pointcuts :

get(int Point.x) || get(int Point.y)


get(* FigureElement+.*)

Both the above pointcuts match the same set of join points. But the first one directly intrudes into the implementation of the abstraction by accessing the implementation's fields, while the latter is based only on the interface. While both of them have the same effect in the current implementation, the first one certainly violates the principle of "programming to the interface" and hence is neither modular nor scalable. While pointcuts have the raw power to cut into any level of abstraction and inject advice transparently, care should be taken to make these pointcuts honor the age-old abstraction principles of the object oriented paradigm. As Bertrand Meyer has noted about OO contracts, pointcuts should also be pushed up the inheritance hierarchy in order to ensure maximal reusability.
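The "programming to the interface" point can be mirrored in plain Java - a hypothetical decorator that depends only on FigureElement keeps working no matter how Point's fields are refactored, just as an interface-based pointcut would, whereas anything reading Point.x and Point.y directly breaks with the first refactoring:

```java
interface FigureElement {
    void incrXY(int x, int y);
}

class Point implements FigureElement {
    private int x, y;  // implementation detail, free to change
    public void incrXY(int dx, int dy) { x += dx; y += dy; }
}

// Analogous to an interface-based pointcut like get(* FigureElement+.*):
// this "advice" sees only the published contract, never the fields.
class MoveCounter implements FigureElement {
    private final FigureElement target;
    int moves = 0;

    MoveCounter(FigureElement target) { this.target = target; }

    public void incrXY(int dx, int dy) {
        moves++;                // the crosscutting concern
        target.incrXY(dx, dy);  // delegate to the real element
    }
}
```

A usage such as wrapping `new Point()` in a `MoveCounter` counts moves without ever touching x and y - the contract, not the implementation, is the join point.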

Jonas Boner, while talking about invasive pointcuts, has expressed this succinctly in his blog :
A pointcut can be seen as an implicit contract between the target code and the artifact that is using the pointcut (could be an aspect or an interceptor).
One problem with using patterns like this is that we are basing the implicit contract on implementation details, details are likely to change during the lifetime of the application. This is becoming an even bigger problem with the popularity of agile software development methodologies (like XP, Scrum, TDD etc.), with a high focus on refactoring and responsiveness to customer ever-changing requirements.

Metadata for Expressiveness

Ramnivas Laddad has talked about metadata as a multidimensional signature and has described annotations as a vehicle to prevent signature tangling and to express any data associated with your code's crosscutting concerns. While annotations make code much more readable, they are a compromise on one of the most professed principles of AOP - obliviousness. Annotations (and other mechanisms) can also be used to constrain advice execution on classes and interfaces. There have also been suggestions to have classes and interfaces explicitly restrict aspects or publish pointcuts. All of these, while publishing much more powerful interfaces for abstractions, will inherently limit the obliviousness property of AOP. See here for more details.

Use metadata to enhance the artifact being annotated, but the enhancement should be horizontal and NOT orthogonal. E.g. a domain model should always be annotated with domain level metadata, and, as Jonas has rightly pointed out, it is equally important to use the Ubiquitous Language for annotating domain artifacts. Taking a cue from the example Ramnivas has cited :

public void credit(float amount);

If the method credit() belongs to the domain model, it should never be annotated with service level annotations like @Transactional and @Authorized. Those annotations go into the service layer abstractions - the domain layer should contain only domain level metadata. Accordingly, the pointcut processing for the domain layer should not contain service layer functionality.
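This layering rule can be sketched in a few lines - @Transactional is from the text above, while @Audited and the class names below are hypothetical illustrations of domain level metadata, phrased in the Ubiquitous Language:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical domain level metadata - stays in the domain's own vocabulary.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Audited {}

// Hypothetical service level metadata - must never leak into the domain model.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Transactional {}

class Account {                       // domain layer
    private float balance;

    @Audited                          // domain metadata only
    public void credit(float amount) { balance += amount; }

    public float getBalance() { return balance; }
}

class AccountService {                // service layer
    private final Account account = new Account();

    @Transactional                    // service metadata lives here
    public void credit(float amount) { account.credit(amount); }

    public float balance() { return account.getBalance(); }
}
```

A transaction-demarcating pointcut now needs to match only service layer methods carrying @Transactional, leaving the domain model oblivious of transactions.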

Tuesday, October 03, 2006

Agile Blast

Steve Yegge @ Google has blasted Agile, and InfoQ has carried a significant post on it. Rants like these sell like hot cakes and, no wonder, the last time I checked Steve's blog I found 161 comments posted against it. Martin Fowler has posted a quiet but convincing dictum in his bliki regarding the agile practices at Thoughtworks. Of course, Martin's post contains no reference to Steve's Google Agile practices - but the timing is significant.

Any practice done the wrong way is bad, and Agile is no exception. The Agile Manifesto never talks about imposition, never dictates any forceful action from upper management - it talks about individuals, interactions and collaboration. It is never an enforcement of RigorousAgile.

We have been practicing many of the principles of the agile methodology at Anshinsoft in our offshore software development model in India, and to a large extent we have been quite satisfied with the results. We do *not* do pair programming, but we follow principles like customer collaboration, a short iterative model of development, merciless refactoring, early builds and short release cycles. Developing in collaboration with a client team 10,000 miles and 12 time zones away, these have worked out great for us.

Steve has mentioned many of the Google practices. We need to understand that Google hires its staff after a very thorough and careful screening process, has a completely different business model, and does not have to think about a red-faced, fuming client hammering away at the red dots of the project dashboard late at night. So whatever Google Agile is, it cannot be applied to houses that deliver run-of-the-mill project solutions at a nickel-a-line-of-code price point.

Here are some of the other Yegge rants ..

- there are managers, sort of, but most of them code at least half-time, making them more like tech leads.

In a typical project, the project manager has to do external client management and keep all stakeholders updated on the project dashboard. Managers coding half the time simply does not work in a large enterprise scale development project. Well, once again it may be a Google specialization, but for people working further down the intellectual curve it is all bricks and mortar - managers need to work collaboratively in a very strong client facing role.

- developers can switch teams and/or projects any time they want, no questions asked; just say the word and the movers will show up the next day to put you in your new office with your new team.

A real joke when you are delivering a time critical project to your client. Again, it is Google Agile - definitely not applicable to the business model in which the lesser mortals thrive.

- there aren't Gantt charts or date-task-owner spreadsheets or any other visible project-management artifacts in evidence, not that I've ever seen.

When you don't have deadlines and a client manager sniffing at your project dashboard, you can indulge in unlimited creativity - sorry folks, no place for this one in my delivery model either.

The agile methodology does not force you to use (or not use) Gantt charts or Excel sheets for project management. It is all about adding flexibility, making your process easier to manage, and helping teams drift away from reams of useless documentation. Agility practiced badly is Bad Agile, but one model does not fit all, and Google Agile is not for the general mass to follow.