Sunday, December 28, 2008

Partial Applications with binds in Scala and Haskell

One of the fun parts of learning a new language is the involuntary urge to pit its features against your favorite language of the same paradigm. I have been a Scala enthusiast for quite some time now, and have been trying to learn Haskell .. it's no wonder that I keep passing through many "oh! that's flatMap in Scala" rants, realizations and enlightenments. This post is about one such session .. a special one, since by the end of it I felt that sometimes another language can beat Haskell in its own forte of concise elegance.

I was going through Chapter 14 of the Real World Haskell book, which discusses and explains monads. Here is an example straight from the book itself ..

import qualified Data.Map as M

type PersonName = String
type PhoneNumber = String
type BillingAddress = String
data MobileCarrier = Honest_Bobs_Phone_Network
                   | Morrisas_Marvelous_Mobiles
                   | Petes_Plutocratic_Phones
                     deriving (Eq, Ord)

findCarrierBillingAddress :: PersonName
                          -> M.Map PersonName PhoneNumber
                          -> M.Map PhoneNumber MobileCarrier
                          -> M.Map MobileCarrier BillingAddress
                          -> Maybe BillingAddress


and one of its implementations, the most concise one (name changed) ..

findCarrierBillingAddress person phoneMap carrierMap addressMap =
    lookup phoneMap person >>= lookup carrierMap >>= lookup addressMap
  where lookup = flip M.lookup


The problem is to find the billing address of the mobile carrier used by a person, hopping through a chain of cascaded lookups in mapped information.

The implementation uses the monadic chaining function (>>=), called "bind" in Haskell, which feeds the result of the computation on its left into the computation on its right. More specifically, the result of the computation on the left gets applied as an argument to the already partially applied function on the right.

Cool stuff .. bread and butter to an experienced Haskeller. But I do not fall into that category, and succumbed immediately to the urge of comparing it with how it would be implemented in Scala ..

Here is the meat of the monadic code in Scala ..

def find_carrier_billing_address(name: String,
    phone_map: Map[String, String],
    carrier_map: Map[String, MobileCarrier],
    address_map: Map[MobileCarrier, String]) = {

    phone_map.get(name).flatMap(carrier_map.get(_)).flatMap(address_map.get(_))
}


Pretty similar in conciseness .. flatMap is Scala's bind, and we get the same chaining effect through partial application. Both of the above implementations use the Maybe monad (Option[T] in Scala).

However, there are a few subtle points in the Haskell and Scala variants that bring out some of the differences between the language implementations - differences which became much clearer to me once I had implemented the above Haskell example in Scala.

Partial Application in Haskell

Haskell is a purely functional language, where partially applied functions are a normal programming idiom. Have a look at the type signature of map in Haskell ..

ghci> :type map
map :: (a -> b) -> [a] -> [b]


The above signature is curried by default. For the more common view of the type of map, as in other not-so-pure languages, we need to do an explicit uncurry in Haskell ..

ghci> :type uncurry map
uncurry map :: (a -> b, [a]) -> [b]


map is a function that takes as arguments a function and a collection and generates another collection by applying the function on every element of the input collection. Now, in Haskell, the function type operator (->) is right associative - hence the above signature looks like (a -> b) -> ([a] -> [b]). We can invoke map with only one argument, the mapping function, and the result will be another function that takes as input a list and returns another list.

ghci> :type map (+2)
map (+2) :: (Num a) => [a] -> [a]


In the above snippet, map (+2) is a valid invocation of the map function. It generates another function, which takes a list of numbers and generates another list whose every element is incremented by 2.
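
For comparison, here is a hand-curried Scala analogue of the same idea (the name cmap is mine, just for illustration) ..

// a curried map, mimicking Haskell's (a -> b) -> [a] -> [b]
def cmap[A, B](f: A => B): List[A] => List[B] = xs => xs map f

val plus2 = cmap((_: Int) + 2)   // partial application, like map (+2)
plus2(List(1, 2, 3))             // List(3, 4, 5)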

Note that every application of the function binds to the parameter list from the left - hence in Haskell, function application is left-associative. In the above example, we did a partial application of map, and the parameter we supplied, (+2), binds to the leftmost argument of the type signature. Without resorting to the tricks of anonymous lambdas, idiomatic Haskell does not allow you to bind to middle parameters in a function application.

In the function findCarrierBillingAddress above, lookup carrierMap is a partial application of lookup, producing another function, to which the result of the previous lookup in the chain (the phone number) gets bound. And this continues for every element of the chain of binds.

But notice the where part, where we need to redefine lookup and flip the positions of the arguments of Data.Map.lookup. This needs to be done since partial application in Haskell, by default (without resorting to lambda tricks), associates to the left, and all API designs that plan to be partial-application friendly need to honor this constraint. The argument that flows in from the previous element of the monadic bind can only fill the last remaining parameter - and M.lookup takes the key first. Hence the added verbosity in the implementation ..
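
To make the where clause concrete, this is roughly what flip amounts to, rendered in Scala (the helper names are mine, not from any library) ..

// flip swaps the arguments of a two-argument function
def flip[A, B, C](f: (A, B) => C): (B, A) => C = (b, a) => f(a, b)

// M.lookup takes the key first and the map next; flipping puts the map
// first, so it can be fixed early while the key - the value flowing
// through the bind - fills the last remaining slot
val mLookup = (k: String, m: Map[String, String]) => m.get(k)
val lookup  = flip(mLookup)
val inPhoneMap = lookup(Map("alice" -> "555-1234"), _: String)
// inPhoneMap("alice")   // Some(555-1234)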

And now for Scala

Unlike Haskell, Scala requires that a method be applied to all its arguments. The arguments can be specified explicitly, or they can be left open for partial application using the placeholder _.

scala> val l = List(1, 2, 3, 4)
l: List[Int] = List(1, 2, 3, 4)

scala> l map (_ + 2)
res7: List[Int] = List(3, 4, 5, 6)


Since Scala marks the arguments that are yet to be supplied with placeholders, it becomes easy to bind any parameter position through partial application. Let us change the above program fragment so that, instead of a map lookup, the final fetch of the address goes through a function that takes some additional parameters ..

def get_address(carrier: MobileCarrier,
  out_of_state: Boolean, address_map: Map[MobileCarrier, String]) = {
  if (out_of_state) address_map.get(carrier)
  else None
}


Scala handles it nicely, since it can explicitly bind parameters through placeholders in partial application ..

def find_carrier_billing_address(name: String,
  phone_map: Map[String, String],
  carrier_map: Map[String, MobileCarrier],
  address_map: Map[MobileCarrier, String]) = {

  phone_map.get(name)
           .flatMap(carrier_map.get(_))
           .flatMap(get_address(_, true, address_map))
}


We get the same conciseness as the first implementation.

How do we implement the same in Haskell ? Here is the getAddress function like the one in Scala ..

getAddress :: MobileCarrier
           -> Bool
           -> M.Map MobileCarrier BillingAddress
           -> Maybe BillingAddress

getAddress carrier outOfStation addressMap =
    case outOfStation of
      False -> Nothing
      _     -> M.lookup carrier addressMap


We cannot have the same concise chaining as above without changing the positions of the parameters: Haskell's partial application associates to the left, so the carrier that comes in from the previous element of the chain in the monadic bind can only fill the last remaining parameter - yet in getAddress it sits in the leftmost position ..

This creates problems when dealing with third party APIs, where switching argument positions is not in your control. The alternative is to apply the ubiquitous indirection of anonymous lambdas ..

findCarrierBillingAddress person phoneMap carrierMap addressMap =
    lookup phoneMap person >>= lookup carrierMap >>= \c -> getAddress c True addressMap
  where lookup = flip M.lookup


Workable, but not as concise as the Scala one or the earlier Haskell variant. The Scala idiom of using placeholders for parameter binding is quite powerful as well as intuitive. Scala, being a hybrid OO-FP language, has to take care of issues like subtyping (which Haskell does not have) - I guess it is for this reason that Scala cannot support pure function currying as the default, the way Haskell does. But the placeholders work out very well in offering the flexibility of partial application and any-position parameter binding.

Sunday, December 21, 2008

What is special about combinators in Joy

I am continuing my rendezvous with Joy, and this post is yet another rant about my perception of why combinators are real first class citizens in the paradigm of concatenative languages. Many of today's languages offer combinator libraries, but, with the possible exception of Haskell, none match the elegance that Joy offers.

Combinators help programming at a higher level of abstraction. In any functional language, using maps and folds definitely makes your program more expressive than explicit recursion or iteration. Consider this snippet that uses map as a transformer over a list to convert every element of the list to upper case ..

upperCase xs = map toUpper xs


and here is the same using explicit recursion ..

upperCase (x:xs) = toUpper x : upperCase xs
upperCase []     = []


Apart from the former being more expressive, using the map combinator also helps you separate the strategy from the implementation of your program. map is a higher level abstraction, and its implementation can be varied from recursive to iterative without any impact on the client code base. For example, since there is no tail call optimization on the JVM, Scala implements List.map as an iterative method, as opposed to the recursive implementation of Erlang's map.
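
As a sketch of that separation (not the actual library code of either language), both of the following honor the same contract, and client code cannot tell them apart ..

// recursive map, in the style of Erlang's implementation
def mapRec[A, B](f: A => B)(xs: List[A]): List[B] = xs match {
  case Nil    => Nil
  case h :: t => f(h) :: mapRec(f)(t)
}

// iterative map, safe on a JVM that has no tail call optimization
def mapIter[A, B](f: A => B)(xs: List[A]): List[B] = {
  val buf = new scala.collection.mutable.ListBuffer[B]
  for (x <- xs) buf += f(x)
  buf.toList
}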

So, always use combinators to program at a higher level of abstraction.

In Joy, programming with combinators is the most natural way of doing things. In case you are wondering why it is not so with the programming language of your choice, the reason lies in the architecture of the language. The combinators which I implemented in Scala in my last post did the job, but do we see such usage in real life ? Even in languages like Ruby, which do not have the noise of type annotations, combinators like divide_and_conquer are not in common usage. The only possible exception today is Haskell, which allows point free programming, is referentially transparent, and allows algebraic program transformations - much like applying formal methods through the medium of programming. And combinators are the most natural idioms in concatenative languages like Joy.

Why is that ?

Combinators are most effective when you compose them from atomic programs and evolve towards the bigger whole. And the closer you can get towards algebraic transformation with your combinators, the more elegant the process of composition looks. Today's mainstream languages are based on the application of functions to parameters, leading to complex evaluation rules. Formal parameters add to the noise of composition and make the process less elegant. In statically typed languages, you also have the type annotations adding to the complexity of composition. In Joy, on the other hand, every combinator has the same form (Stack -> Stack) and gets all its parameters, including quoted programs, from the stack itself. Hence you can write elegant compositions like 1 [*] fold to define the product function over the elements of an aggregate. Today Haskell allows such a point free style of programming (product in Haskell is foldl (*) 1), but none of the other mainstream languages come even close. It will be interesting to compare Joy with Haskell with respect to rewriting capabilities and application to formal methods.
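
In Scala, the same product keeps the fold but brings the placeholders back - a quick rendering of mine, for contrast ..

// cf. Joy's  1 [*] fold  and Haskell's  foldl (*) 1
def product(xs: List[Int]) = xs.foldLeft(1)(_ * _)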

Perhaps the most important feature in Joy that makes non-invasive composition of combinators work like a charm is the datatype of quoted programs. A quoted program is just another list that happens to be on top of the stack when a combinator expects it. And the list can very well be generated as the result of other combinator operations. It is this generalization that unifies all combinator operations. In other languages, combinators work on different abstractions, while in Joy it is always the same thing - combinators get lists (which are incidentally quoted programs) on top of the stack, execute them and return a stack with the result. Hence map is a higher order function in other languages, while in Joy, map is still a unary first order function from stacks to stacks.
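
A toy Scala model (mine, nothing like a real Joy interpreter) may help visualize this unification - the stack is a plain list, a quotation is just a wrapped stack function sitting on the stack as data, and map is an ordinary unary function from stacks to stacks ..

type Stack = List[Any]
case class Quote(p: Stack => Stack)   // a quoted program: passive data

// pops a quotation and a list, runs the quotation on each element
// (each element gets its own little stack), pushes the mapped list
def joyMap(s: Stack): Stack = s match {
  case Quote(p) :: (xs: List[_]) :: rest => xs.map(x => p(List(x)).head) :: rest
  case _ => sys.error("map expects a quotation over a list on the stack")
}

// joyMap(List(Quote { case (n: Int) :: r => (n * n * n) :: r }, List(4, 6, 8)))
// yields List(List(64, 216, 512))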

At the same time, quoted programs in Joy allow normal order evaluation semantics to be imposed during program execution, in an otherwise applicative model.

Here is an example ..

0  [dup * +]  fold


This is an example of using the fold combinator. The quoted program [dup * +] sits on the stack in passive form, and becomes active only when the fold combinator is applied. The result is the sum of squares of the elements of the input aggregate.
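
For contrast, the same fold in direct Scala, where the stack shuffling of dup * + has to become an explicit lambda (my rendering) ..

// cf. Joy's  0 [dup * +] fold
def sumOfSquares(xs: List[Int]) = xs.foldLeft(0)((acc, x) => acc + x * x)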

Recursion Combinators

As mentioned at the beginning of the post, if explicit recursion makes programs hard to understand, how do we hide 'em ? Inside recursive combinators. But can we remove explicit recursion from individual combinators ? Sure .. we can, by introducing the venerable y-combinator. So, only the y-combinator will contain explicit recursion, and all other erstwhile recursive combinators will be implemented in terms of Y.

Let us visit the factorial example and find out for ourselves how the y-combinator can be used to remove explicit recursion, and what trade-offs it implies ..

Here is the explicitly recursive version of factorial in Joy ..

recursive-fac  ==  [0 =] [succ] [dup pred recursive-fac *] ifte


Just for the uninitiated, ifte is the if-then-else combinator in Joy that takes 3 program parameters - quite intuitively, the if-part, the then-part and the else-part. Note how effectively quotation is used here to impose normal order of evaluation, aka call-by-name - without it, ifte could not have been implemented at all.

and here is the version that uses the y-combinator ..

[ [pop 0 =] [pop succ] [[dup pred] dip i *] ifte ] y


where y is implemented as simply as ..

[dup cons] swap concat dup cons


and .. and .. which version do you think is more readable ? Of course the one that uses explicit recursion. Huh!

This is the problem with the y-combinator. It is too generic, too all-purpose, and hence not descriptive enough. And when combinators are not self-descriptive, readability goes for a toss. This is yet another area where Joy shines ..

Joy has special purpose recursion combinators that are independent of data types and are descriptive of the structure of the recursion they encapsulate. Here is an example with the factorial computation ..

fac  == [null] [succ] [dup pred] [*] linrec


linrec is a recursion combinator that takes four parameters in addition to the data that it needs.

  • The first one is the if-part, where we specify a condition.

  • If the condition evaluates to true, then the second part (the then-part) gets executed and the combinator terminates.

  • Otherwise we look at the next 2 parameters - called the rec1-part and the rec2-part.

  • The rec1-part is executed, and then the 4 program parameters are bundled together once again into a quotation that is immediately executed - an anonymous recursion.

  • This is followed by the execution of the rec2-part.


The entire sequence models what we call a linear structural recursion - the sketch below tries to capture its shape.
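
Here is my attempt at a functional rendering of linrec in Scala - the stack is replaced by explicit values, so the dup in the rec1-part becomes returning a pair ..

// the four parameters mirror linrec's four quoted programs
def linrec[A, B](isBase: A => Boolean, base: A => B,
                 rec1: A => (A, A), rec2: (A, B) => B)(a: A): B =
  if (isBase(a)) base(a)
  else {
    val (keep, smaller) = rec1(a)
    rec2(keep, linrec(isBase, base, rec1, rec2)(smaller))
  }

// factorial, echoing  fac == [null] [succ] [dup pred] [*] linrec
def fac(n: BigInt): BigInt =
  linrec[BigInt, BigInt](_ == 0, _ + 1, n => (n, n - 1), _ * _)(n)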

For other variants of recursive problems, Joy offers a host of similar other combinators ..

  • binrec is for binary recursion, which makes problems like fibonacci computation or quicksort trivial one-liners

  • tailrec is for tail recursion, quite similar to linrec but without the rec2-part

  • primrec is for primitive recursion which offers suitable defaults for the if-part and the rec1-part


Then there are a host of other recursive combinators for more advanced recursions like mutual recursion, nested recursion, generic recursion with n-ary branching etc.
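
In the same spirit as the linrec sketch above, a functional rendering of binrec (again my own sketch) shows why fibonacci becomes a one liner ..

// split the problem into two subproblems, recurse on both, recombine
def binrec[A, B](isBase: A => Boolean, base: A => B,
                 split: A => (A, A), combine: (B, B) => B)(a: A): B =
  if (isBase(a)) base(a)
  else {
    val (l, r) = split(a)
    combine(binrec(isBase, base, split, combine)(l),
            binrec(isBase, base, split, combine)(r))
  }

// fibonacci, echoing Joy's  [small] [] [pred dup pred] [+] binrec
def fib(n: Int): BigInt =
  binrec[Int, BigInt](_ < 2, BigInt(_), n => (n - 1, n - 2), _ + _)(n)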

Tuesday, December 16, 2008

Combinators for fun and profit

Wikipedia says ..

"A combinator is a higher-order function which, for defining a result from its arguments, solely uses function application and earlier defined combinators."

Primitive functions that do not contain any free variables - they take some arguments and produce a result based on those arguments only. No side-effects, nothing. In concatenative languages like Joy and Factor, combinators are formed from the primitive elements by quotation and concatenation, using nothing but function composition, transforming your system into a rich algebra of programs. As I mentioned in my last post, there are no variables in Joy. Data is manipulated through a stack. Along with data, Joy pushes programs themselves onto the stack and manipulates them just like data, through combinators.

I am an enterprise programmer - what do combinators buy me ?

Nothing, except maybe a good evening over a weekend. Seriously! This is a continuation of my last post, dedicated fully to the pure joy of writing programs. And this post is about some possibly wasted effort that I have been plunging into, quite off and on. Dear prudent reader, you may not find anything useful in it .. it is just a truthful and honest account of self-indulgence in something I found enjoyable.

OK .. so you have been warned ..

Consider the simple function that finds the arithmetic mean of a list of numbers .. using a language that's quite not-so-elegant for combinator programming .. Scala ..

def arith_mean(list: List[Int]) = {
  list.foldLeft(0)(_ + _) / list.size
}


The method is purely functional and has no side-effects, but it hardly reads as a composition of primitive functions. The parameters get in the way, the type annotations are noise. Joy makes combinator programming more concise and concatenative, and reveals the intention purely as a composition of combinators. Read the following aloud, and that is exactly what the program does, as every function works on the available stack .. Here is the program for the arithmetic mean in Joy ..

dup  0  [+]  fold  swap  size  /


and here is the commentary of how the operators and combinators above compose ..

  1. Duplicates the list on top of the stack

  2. Folds over the list on top of the stack to add its elements - the top of the stack now holds the sum of all elements, and the element beneath it holds a copy of the original list

  3. Swaps the top 2 elements of the stack - the top now contains the list

  4. Replaces the top list with its size

  5. Applies division to the top 2 elements of the stack to get the result



A more idiomatic version of the above uses the cleave combinator, which compacts the program and can even parallelize the computation of the sum of the elements and of the size of the input list ..

[0 [+] fold]   [size]   cleave   /


Meanwhile, Reginald Braithwaite is having a party with combinators in his specialty un-blog homoiconic. He is exploring ways to write more enjoyable programs in Ruby using Kestrel, Thrush and the other combinator birds that Raymond Smullyan immortalized in To Mock a Mockingbird, his tribute to the bird watching passion of Haskell Curry. Twenty years since the book came out, it is still a joy to read.
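
For the curious, two of those birds take just a line each in Scala - my sketches of the standard definitions, not Reginald's Ruby ..

// Kestrel (K): return the value after an effectful detour - Rubyists know
// it as returning / tap
def kestrel[A](a: A)(side: A => Unit): A = { side(a); a }

// Thrush (T): reverse application - feed a value into a function
def thrush[A, B](a: A)(f: A => B): B = f(a)

kestrel(List(1, 2, 3)) { l => println("got " + l) }   // prints, then yields the list
thrush(42)(_ + 1)                                     // 43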

In his series on refactoring code with combinators, Reginald presents how to refactor a sum_of_squares method using a divide_and_conquer combinator. The method computes the sum of squares of a tree of integers represented as a nested list. The idea is to model the divide_and_conquer paradigm as a combinator and have the sum_of_squares method as an implementation of that combinator.

Here is my sum_of_squares method in Scala for integers .. functional and recursive ..

def sum_of_squares(list: Iterable[Any]): Int = {
  list.foldLeft(0) {
    (s, x) =>
      s +
        (x match {
          case l: Iterable[_] => sum_of_squares(l)
          case e: Int => e * e
        })
  }
}


And now a divide_and_conquer combinator in Scala that defines the strategy ..

def divide_and_conquer(value: Iterable[Any],
                       conquer: Any=>Any,
                       divide: Iterable[Any]=>Iterable[Any],
                       recombine: Iterable[Any]=>Any): Any = {
  recombine(
    divide(value).map { (x: Any) =>
      x match {
        case l: Iterable[_] =>
          divide_and_conquer(l, conquer, divide, recombine)
        case e =>
          conquer(e)
      }
    }
  )
}


where ..

  • divide is the step that divides the main problem into subproblems

  • conquer is the trivial case for the smallest part

  • recombine is the step that pieces solutions to all subproblems together


And here is the implementation of sum_of_squares using the combinator ..

def sum_of_squares(l: List[Any]): Any = {
  divide_and_conquer(l,
    (x) => x.asInstanceOf[Int] * x.asInstanceOf[Int],
    (x) => x,
    (x) => x.asInstanceOf[List[Int]].foldLeft(0){_ + _}
  )
}


Quite a bit of noise compared to an implementation using a concatenative language, with all parameters and type annotations sticking around. But it's not too ugly compared to what other mainstream languages can offer .. and anyway it's fun .. I am sure someone more conversant with Scala will be able to make it a bit more succinct and idiomatic.

Here is another implementation of the divide_and_conquer strategy using the combinator above .. flattening a list ..

def flatten(list: List[Any]): Any = {
  divide_and_conquer(list,
    (x) => x,
    (x: Iterable[Any]) => x,
    (x: Iterable[Any]) =>
      x.foldLeft[List[Any]](Nil){
        (s, x) =>
          s :::
            (x match {
              case l: List[_] => l
              case e => List(e)
            })
      }
  )
}


Quite ugly .. huh ? Looks like Scala is not an ideal language for combinator based programming, though we do have some very good combinator implementations in Scala - the parser combinator library, the pickler combinator library etc. But if you are serious about combinators and combinator based programs, jump into the concatenative languages. Joy gives us the following implementation of flatten ..

flatten  ==  [null]  []  [uncons]  [concat] linrec


Here goes the commentary ..

  1. If the parameter list is empty, there is nothing to flatten - leave the empty list

  2. Otherwise, uncons the list to get its first element and the rest

  3. Flatten the rest through anonymous recursion

  4. Concatenate the saved first element with the result


Quite intuitive, and implemented using function composition only. Just one trick - what is the linrec doing out there ?

linrec is one of the most widely used recursion combinators in Joy, and it can be used to avoid recursive definitions in your program. It encapsulates the recursion part within its implementation, much like what we did with the recursion in our divide-and-conquer combinator. Joy also offers a host of other recursion combinators like genrec, binrec, tailrec etc. Using them you can write non-recursive definitions of recursive tasks through simple composition and quotation. But that is the subject of another rant .. some other day ..

Monday, December 08, 2008

enJOY

Programming for the sole purpose of having fun with it - how does that sound ? We have been talking about learning a different programming language every year, taking the pill of polyglotism, learning languages that inhabit common runtimes like the JVM or the CLR. But all this for the purpose of making ourselves better programmers and raising our programmer value index in today's market. Really, how many of us can afford to do so ? Most of us have a day job where we work on mainstream programming languages, strive hard to meet the expectations of the enterprise client, and ensure that we remain billable to the client in the foreseeable future.

This post is not about polyglotism for today's market. If you enjoy programming, yet miss the fun part of it, this post is meant to tickle your programming instincts and remind you once again of the loads of fun that you can get out of hacking on a seemingly weird programming language. You may be doing Java / C# in your day job, feeling guilty about not yet joining the bandwagon of functional programming, and trying to dabble with Haskell, Erlang, Scala or F# over the weekends and during the evenings. Or you may already have joined the bandwagon privately, forked off a branch of your favorite project repository, basking in the sunshine of Lambda The Ultimate, eagerly awaiting the day when your manager will approach you with a Haskell or Lisp assignment in hand.

If you thought you were having enough fun with lambdas, look out for more fun with functional programming without lambdas. Lambda calculus is about functions and the application of functions to arguments. I have just been introduced to a functional programming language named Joy, which is NOT based on application of functions to arguments.

Holy crap ? No parameters .. what do functions do ?

They execute instructions on values that they find on the stack. And this post is about enjoying the beauty and savoring the fun that learning Joy has brought forth to me. And remember, you can have that too .. remove your day job hat for some time (maybe over a few weekends) and get immersed in the beauty of recursion and theory of computability.

Let us take an example ..

Suppose you want to write a function that computes the square of a number .. here is a Joy version for that ..

dup *


>> (dup)licate the element on top of the stack and apply the multiplication operator (*) to the top 2 elements of the stack.

You can have a name for the function if you want to reuse it many times ..

square == dup *


and invoke it as

5 square


or without names

5 dup *


What about cubes ?

dup dup * *


Intuitive huh!

What does the following do ?

[dup dup * *] map


Ok .. possibly I have jumped too far ..

In Joy, programs evolve through 2 operations - quotation and concatenation.

Quotation can be thought of as a passive data structure, indicated by surrounding the quoted program with square brackets. In the last example, [dup dup * *] is a quoted program. But it is also a list - in Joy, lists too are written within square brackets, e.g. [1 2 4 6] is a list. So, generically speaking, lists are quoted programs in Joy. And a list can contain elements that are not literals, and can be made to operate on values. For that we need to activate the passive quotation.

Concatenation of two programs in Joy is effectively the composition of the functions denoted by the programs. Consider the example program ..

2 3 + dup *


which is effectively the composition denoted abstractly by *(dup(+(2, 3)))
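
We can make the "concatenation is composition" point literal with a toy Scala model (mine, not Joy): every word is a function from stacks to stacks, and juxtaposition becomes andThen ..

type Stack = List[Int]
val push: Int => Stack => Stack = n => s => n :: s
val add: Stack => Stack = { case a :: b :: r => (a + b) :: r }
val dup: Stack => Stack = { case a :: r => a :: a :: r }
val mul: Stack => Stack = { case a :: b :: r => (a * b) :: r }

// 2 3 + dup *
val prog = push(2) andThen push(3) andThen add andThen dup andThen mul
prog(Nil)   // List(25)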

In the above example that we left off without solving, the quoted program is activated by the combinator that follows it .. map. A combinator is another useful device that causes execution of the quoted program on top of the stack. There are a variety of combinators in Joy, the most trivial one being the i-combinator. The i-combinator just dequotes the quoted program, i.e.

[dup dup * *] i


is equivalent to

dup dup * *


What about the map combinator ? Let us try to run the last example where we left off .. with some actual data ..

[4 6 8 9] [dup dup * *] map


Here is what happens ..

  • Joy first pushes the list of integers on the stack

  • Then the quoted program is pushed .. remember quoted programs are passive - hence nothing gets executed

  • The map combinator then removes the list of integers and the quoted program and constructs a new list by applying the program to each element of the list. The usual stuff that map is so well-known to do ..


The result is a list that contains the cubes of the original elements.

I have only scratched the surface in order to tickle you towards a functional language that is not based on the lambda calculus. I am sure you will find Joy equally interesting, and loads of fun to learn.

Tailpiece

Did you notice that we can have any element within a list ? Sounds familiar ? Yes, you got it right ! Program as data, as Joy would have it. A quoted program is data unless operated upon by a combinator. And, as this landmark paper suggests ..

"Since programs are data, it is possible to define a Y-combinator for recursion and several variants. It follows that there are self-reproducing and self-describing programs in Joy. Practical programs can be written without recursive definitions by using several general purpose recursion combinators which are more intuitive and more efficient than the classical ones."

Go read this paper .. it is an absolute joy (pun intended) if you want to learn more about recursion, combinators and computability. This paper was my starting inspiration in dabbling with Joy. You can also have it .. enJoy!

Thursday, December 04, 2008

Data 2.0 - more musings

Martin Fowler writes ..

"If you switch your integration protocol from SQL to HTTP, it now means you can change databases from being IntegrationDatabases to ApplicationDatabases. This change is profound. In the first step it supports a much simpler approach to object-relational mapping - such as the approach taken by Ruby on Rails. But furthermore it breaks the vice-like grip of the relational data model. If you integrate through HTTP it no longer matters how an application stores its own data, which in turn means an application can choose a data model that makes sense for its own needs."

and echoes similar sentiments that I expressed here.

Today's application stack has started taking different views on using the database, particularly relational database, as a persistent store. The driving force is, of course, to reduce the semantic distance between the application model and the data model.

For the case Martin mentions above, the data model for the application need not be relational at all. We are seeing more and more cases where an application need not forcibly bolt the data that it operates on into the clutches of the relational paradigm. In other words, the data remains much closer to the application domain model. Instead of splitting a domain abstraction into multiple relational tables and using SQL glue to join them for queries and reports, we can directly operate on a semantically richer persistent abstraction. Storage options have evolved, RAM is the new disk, and creating RAM clusters is now easier than creating disk clusters. And technologies like Map/Reduce have enabled easy parallelization of data processing on commodity hardware.

I have been hacking around with CouchDB for some time now. This is one platform that promises an HTTP based interface to the entire application stack. It's a server and a database, and the best part is that the database driver is HTTP. JSON based storage of application documents, REST APIs, map/reduce based queries in Javascript - no schema, no SQL, no database constraints .. your application data stays semantically closer to the domain model. Loosely coupled document storage, multi-version concurrency control, easy replication - try replacing the relational database with this stack if it fits your application model.

Another train of thought that positions the database in a new light within an application stack is the upcoming crop of grid computing platforms on the JVM. Java as a platform is growing fast, and grid vendors like Terracotta and Gigaspaces are catching on to this trend. They offer coherently clustered in-memory grids that enable a POJO based programming model without any synchronous interaction with the persistent data store. The application layer can program to the POJO based Network Attached Memory of Terracotta, using standard in-memory data structures. Terracotta offers an interface which, if you implement it, will be called to flush your objects to the database asynchronously in a write-behind thread. Yet another way to reduce the impedance mismatch between your domain model and the data model.

Monday, December 01, 2008

Guido's thoughts on Scala

Many people who have reacted to Guido's gripes with Scala have bent the post towards a static-typing-is-bad-and-dynamic-typing-is-good syndrome. I think the main point of concern that Guido expresses relates to the complexity of Scala's type system and the many exceptions to the rules for writing idiomatic Scala.

While it is true that there are quite a few rough edges in Scala's syntax today, none of them look insurmountable, and they will surely be addressed by the language designers in the versions to come. We need to remember that Scala is still very much a growing language and needs time and community feedback to evolve into one of the mainstream general purpose languages. As Martin has mentioned elsewhere, Scala's grammar is no bigger than that of Java, OCaml or C#, and hence the language should not impose a more complex mental model for using the same set of features that the other statically typed languages espouse.

However, Scala's type system is expressive enough to offer succinct solutions to many problems that would look much more obtuse in Java. A better modeling of the Expression Problem is a classic example, which, as this paper demonstrates, looks much more verbose, inelegant and non-extensible using Visitors in Java. I have been programming in Scala for some time now, and I have the feeling that Scala's extremely rich type system can be exploited to implement domain models that encapsulate most of the business rules through declarative type constraints. I blogged on this very recently, describing my experience of how Scala's type system - specifically features like abstract types and self type annotations - works for you to implement expressive domain models, without writing a single line of runtime type checking code. Less code, more succinct abstractions, and most importantly, fewer tests to write and maintain. After all, everything has a cost; the question is whether you absorb the complexity within the language or within your application.
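
To give a flavor of what I mean by declarative type constraints, here is a minimal sketch (a made-up domain, not from my earlier post): the self type annotation turns a dependency into a compile time guarantee ..

trait Pricing { def price(qty: Int): Double }

// the self type says: OrderProcessing can only ever be mixed into a
// Pricing - the compiler enforces the constraint, no runtime check needed
trait OrderProcessing { this: Pricing =>
  def orderValue(qty: Int) = price(qty) * qty
}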

Another excellent point, raised by David Pollak in one of the comments to Guido's post, is that we need to judge the complexity of language features separately depending on whether you are a library producer or a library consumer. As a library producer, you need to be aware of many aspects of Scala's type system that are not as essential for a library consumer. Some time back, I demonstrated an internal DSL design technique in Scala where, given a sufficiently typed library for financial trading systems, you can write DSLs in Scala of the form ..

new Order to buy(100 sharesOf "IBM")
  maxUnitPrice 300
  using premiumPricing


Another great example is the Scala parser combinator library, which hides all nuances of Scala's type system and allows you to design external DSLs. Have a look at this example of another financial trading system DSL built using Scala parser combinators ..

On the whole, I think Scala has a compelling appeal to the programming community on the JVM. The statically checked duck typing that Scala offers makes your code look and feel like Ruby, and yet gives you the safety net of compile time checking. Scala is object oriented, though most idiomatic Scala is functional. In a real world application development environment, where statefulness is the way of life, Scala offers a very good compromise with its hybrid model of programming. Model your states using objects, and model your computations using functional Scala.
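
By statically checked duck typing I mean Scala's structural types - a minimal example of my own, just to show the shape ..

// any object with a quack(): String method will do - checked at compile
// time, with no common base type required
def makeItQuack(d: { def quack(): String }) = d.quack()

class Duck  { def quack() = "quack!" }
class Robot { def quack() = "beep quack" }

makeItQuack(new Duck)    // ok
makeItQuack(new Robot)   // ok too - same structure, unrelated types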

Regarding the other concern raised by Guido, the non-uniformity of Scala's syntax, it is indeed true that there are some rough edges to deal with .. like .. () versus {} in for-comprehensions and inconsistent inference of semicolons, the number of interpretations that an underscore (_) can have, some subtleties in partial function application syntax, the syntax for def foo() { .. }. I guess quite a few of these are currently being debated as possible candidates for change. The good part is that the language experts are working with Martin to iron these issues out and smooth the rough edges, even at the risk of losing some amount of backwards compatibility.

Tuesday, November 18, 2008

Euler Problem #18 and #67 using functional Scala

Project Euler is one of the nice ways of exercising one's programming instincts. Last weekend I happened to come across Problems 18 and 67, the latter being a variant of the former in the sense that a brute force algorithm may work for Problem 18, but will never work for 67. Another interesting point about this pair of problems is that they have a certain aura of graph theory about them, which tempts you towards that kind of solution.

To make things short, both problems relate to finding the maximum sum over a triangle of numbers, moving from one row of numbers to the next *only* via adjacent cells. For the example triangle, cited in the problem definition page ..

3
7 5
2 4 6
8 5 9 3

The adjacency definition is mapped as follows ..

adjacent(3) => 7, 5
adjacent(7) => 2, 4
adjacent(5) => 4, 6
adjacent(2) => 8, 5
adjacent(4) => 5, 9
adjacent(6) => 9, 3

From the above, we can generalize the adjacency function as ..

adjacent(element(row(r), position(i))) =
element(row(r+1), position(i)), element(row(r+1), position(i+1))

The following is the implementation in Scala. It uses functional techniques like pattern matching to capture the essence of the problem, and expresses the solution quite succinctly. Compared to an imperative implementation, the structure of the solution is quite apparent in the implementation itself. The solution runs in time linear in the number of cells of the triangle, and Problem 67 completes in 1 millisecond on my Windows XP laptop running Scala 2.7.2 RC3.

In the following implementation, the function meld is not tail recursive. Making it tail recursive is quite trivial though. I decided to keep it the way it is, since that does not affect the structure of the solution. And often, in garbage collected environments, non-tail recursive versions can perform better than their tail recursive counterparts.
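
For the record, here is a sketch of what that tail recursive variant might look like, with an explicit accumulator (not the version used below) ..

def meldTR(bl: List[Int], sl: List[Int]): List[Int] = {
  def go(b: List[Int], s: List[Int], acc: List[Int]): List[Int] = (b, s) match {
    case (bf :: bs :: brest, sf :: srest) =>
      go(bs :: brest, srest, sf + Math.max(bf, bs) :: acc)
    case _ => acc.reverse   // smaller row exhausted - restore the order
  }
  go(bl, sl, Nil)
}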


object Euler {

  // for Problem 18
  val triangle =
    List(List(75),
         List(95, 64),
         List(17, 47, 82),
         List(18, 35, 87, 10),
         List(20, 04, 82, 47, 65),
         List(19, 01, 23, 75, 03, 34),
         List(88, 02, 77, 73, 07, 63, 67),
         List(99, 65, 04, 28, 06, 16, 70, 92),
         List(41, 41, 26, 56, 83, 40, 80, 70, 33),
         List(41, 48, 72, 33, 47, 32, 37, 16, 94, 29),
         List(53, 71, 44, 65, 25, 43, 91, 52, 97, 51, 14),
         List(70, 11, 33, 28, 77, 73, 17, 78, 39, 68, 17, 57),
         List(91, 71, 52, 38, 17, 14, 91, 43, 58, 50, 27, 29, 48),
         List(63, 66, 04, 68, 89, 53, 67, 30, 73, 16, 69, 87, 40, 31),
         List(04, 62, 98, 27, 23,  9, 70, 98, 73, 93, 38, 53, 60, 04, 23))

  /**
   * Takes 2 lists, where bl.size is 1 greater than sl.size. It will process pairs of rows
   * from the triangle in the reverse order, that will satisfy the size constraint, since
   * Row i contains 1 less element than Row (i+1).
   *
   * Hence, if row(i) contains k elements, row(i+1) will contain (k+1) elements.
   * A meld(row(i+1), row(i)) will produce a new row(i)(call nrow(i)) with
   * k elements and nrow(i, j) = row(i, j) + max(row(i+1, j), row(i+1, j+1)).
   *
   * In summary, meld merges the two rows and increments every element in the smaller row
   * with the algebraic value of the bigger of its two adjacent elements.
   */
  def meld(bl: List[Int], sl: List[Int]): List[Int] = ((bl, sl): @unchecked) match {
    case (bf :: bs :: brest, sf :: srest) =>
      sf + Math.max(bf, bs) :: meld(bs :: brest, srest)
    case (bf :: brest, s) if (brest.size == 1 && s.size == 1) =>
      List(s.head + Math.max(bf, brest.head))
    case (b, Nil) =>
      Nil
  }

  /**
   * Iterates recursively over the triangle and melds all rows to reduce to the maximum sum.
   */
  def reduce(trng: List[List[Int]]): List[List[Int]] = (trng: @unchecked) match {
    case f :: s :: tail =>
      reduce(meld(f, s) :: tail)
    case f :: Nil => List(f)
  }

  def main(args: Array[String]) {
    /**
     * file processing for Problem #67
     */
    import scala.io.Source
    val strng: List[List[Int]] =
      Source.fromFile("triangle.txt")
            .getLines.toList
            .map(_.split(" ")
                  .elements
                  .toList
                  .map(_.stripLineEnd.toInt))

    println(reduce(triangle.reverse).head.head)
    println(reduce(strng.reverse).head.head)
  }
}