25 November 2013

Axon framework - behavior driven testing

A rich domain model that emerges from applying DDD techniques has many advantages over an anemic model, but it is hard (or even impossible) to deliver when the modeled entities need to be stored in a relational database. The reason lies deep in the mismatch between the goals of DDD and ORM models. Without going into details, the main difference is that the goal of an ORM model is to enable "easy" persistence/storage and retrieval (including ad-hoc queries) of the modeled objects leveraging SQL/RDBMS technology, while the goal of a DDD model is to decompose a complex business domain into manageable pieces, taking scalability and consistency requirements into account. Therefore, applying both DDD and ORM modeling techniques simultaneously seems to be a risky endeavor at best. But with the help of the Axon framework, doing DDD on top of ORM is possible and may lead to a better overall design.

In my previous article in this series, Axon framework - DDD and EDA meet together (which I recommend reading before proceeding), I introduced the Axon framework and described how it enables the application of DDD, CQRS and Saga design techniques. I also described the key building blocks of DDD (Aggregates and Events) and mentioned event sourcing as an "optional building block".
In this article I will show how to upgrade JPA managed entities (persistent entities for short) that play the role of Aggregate Roots (AR) to event sourced Aggregate Roots, so that they can be loaded from an event stream. The goal is not to replace the relational database with an event store (we are not ready yet to be eventually consistent ;-), but to improve the quality of unit tests of ARs (remember, ARs are not simple data containers; they contain business logic and thus deserve careful testing). As we will see, being able to construct ARs from events will let us write self-documenting unit tests expressed purely in terms of events and commands.
The main problem that needs to be addressed when modeling Aggregate Roots that are both persistent and event sourced entities is the lack of a consistency boundary concept in the ORM model. A persistent entity can hold a direct reference to any other persistent entity, while an Aggregate Root can't hold a direct reference to another Aggregate Root. Let's take a closer look at this problem and try to enumerate possible solutions.

Direct dependencies between Aggregate Roots


We can easily load a persistent entity that contains a graph of other persistent entities, because the EntityManager can execute any series of SQL select statements (using joins if possible) while fetching the entity. Nothing prevents the EntityManager from assembling a single graph containing several different persistent entities, even if we modeled those entities as different Aggregate Roots. When dealing with event sourced ARs, this is not the case. An event sourced repository only knows how to load a single AR. Thus, to be able to load an AR from an event stream, we need to ensure that it has no direct dependencies on other ARs.
Note that it is perfectly valid for an AR to keep direct references to owned entities. When loading from the event store, the AR is responsible for recreating all owned entities.
Of course, by definition an AR should not hold a reference to another AR, but because our ARs are persistent entities, direct references might in some cases occur because they are for some reason convenient or even necessary. Let's think about those reasons.

Entities need to be queried


Sometimes we model dependencies between persistent entities just to be able to perform effective queries using JPQL. The solution in this case is easy: referencing by id.
It turns out we can easily replace the object reference to a related entity with its identifier (aka primary key) by providing the targetEntity parameter in the mapping annotation (probably not a very commonly used JPA feature):
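A minimal sketch of one way this can be realized (the HasId interface and the entity names are my illustration, not the original listing):

public interface HasId {
    Long getId();
}

@Entity
public class PaymentEntity {

    // targetEntity tells JPA which entity is referenced, while the field type
    // exposes nothing but the identifier of the related AR
    @ManyToOne(targetEntity = PaymentPlanAR.class, fetch = FetchType.LAZY)
    @JoinColumn(name = "payment_plan_id")
    private HasId paymentPlan;

    public Long getPaymentPlanId() {
        // with lazy fetching, the JPA provider can typically return the id
        // without actually loading the referenced aggregate
        return paymentPlan.getId();
    }
}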

And that's it. Problem solved.

Entities should not contain duplicated data


There is another reason for corrupting the boundaries between ARs when modeling on top of a relational database: avoiding data duplication (aka normalization). Normalization leads to complexity that can easily be noticed by looking at any "robust enough" ERD diagram (e.g. an online shop data model).
The complexity shows up in the code of the business logic. It quickly turns out that the logic of any business operation (except some simple CRUD operations) requires access to data from multiple entities.
In the example below, Payment Period (AR) (part of the operational domain/context) implements a register payment operation that checks whether the received payment is complete (the received payment amount equals the amount defined in Payment Plan (AR), part of the planning domain/context) and thus accesses the paymentAmount property of PaymentPlanAR:
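Something along these lines (names and properties are illustrative):

@Entity
public class PaymentPeriodAR {

    // direct reference crossing the aggregate (and context) boundary
    @ManyToOne
    private PaymentPlanAR paymentPlan;

    private BigDecimal amountReceived = BigDecimal.ZERO;

    public void registerPayment(BigDecimal amount) {
        amountReceived = amountReceived.add(amount);
        // the business check needs data owned by the other aggregate
        if (amountReceived.compareTo(paymentPlan.getPaymentAmount()) >= 0) {
            markAsCompleted();
        }
    }

    private void markAsCompleted() {
        // ...
    }
}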

We can get rid of the direct relationship to the payment plan AR by simply duplicating paymentAmount in both Payment Plan and Payment Period (we would copy the value from PaymentPlanAR during the creation of PaymentPeriodAR). That way we keep both ARs (contexts) separated.
So the solution in this case is simple: data duplication.

(There is also a third possibility: model the activity as a business process (Saga) that can interact with many ARs, but we will not consider this option here).

OK, but let's assume we can't do that (duplicate the data). The reason could be that we want the payment amount to be editable, and we don't want to care about synchronizing changes between the payment plan and the payment period, assuming such synchronization would be necessary...

In short, let's assume we need to keep direct references between some ARs...

Now, we still want to be able to construct ARs from events for testing purposes.
Fortunately, the Axon framework allows registering an AggregateFactory that can assist in AR construction and initialization from an event stream.

Aggregate Factory


Let's introduce Subscription Plan (AR), which holds a reference to Subscription Pool (AR):
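A minimal sketch (fields and annotations assumed; based on Axon 2.x):

@Entity
public class SubscriptionPlanAR extends AbstractAnnotatedAggregateRoot {

    @Id
    @AggregateIdentifier
    private String id;

    private String name;

    // direct reference to another Aggregate Root
    @ManyToOne
    private SubscriptionPoolAR pool;

    SubscriptionPlanAR() {
        // required by JPA and used by the aggregate factory
    }

    void setPool(SubscriptionPoolAR pool) {
        this.pool = pool;
    }
}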

When loading the subscription plan from the JPA repository (production environment), the pool will be automatically initialized by the underlying JPA provider. When loading from the event sourced repository (test environment), the Axon framework will check whether an AggregateFactory is registered for the SubscriptionPlanAR type and call the AggregateFactory#doCreateAggregate method, providing the aggregate identifier and the first event from the event stream, to create an empty AR instance (the AggregateFactory does not fill in the business properties of the AR; they will be initialized when the events from the event stream are applied).

Thus, we need to implement a SubscriptionPlanFactory capable of instantiating SubscriptionPlanAR with SubscriptionPoolAR injected:
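A sketch based on Axon 2.x's AbstractAggregateFactory (signatures and required methods differ between Axon versions):

public class SubscriptionPlanFactory extends AbstractAggregateFactory<SubscriptionPlanAR> {

    private SubscriptionPoolAR pool;

    @Override
    protected SubscriptionPlanAR doCreateAggregate(Object aggregateIdentifier, DomainEventMessage firstEvent) {
        // create an empty instance; the business properties will be filled in
        // when the events from the stream are applied
        return instantiate();
    }

    SubscriptionPlanAR instantiate() {
        SubscriptionPlanAR plan = new SubscriptionPlanAR();
        plan.setPool(pool); // inject the referenced AR
        return plan;
    }

    public void setPool(SubscriptionPoolAR pool) {
        this.pool = pool;
    }

    // getTypeIdentifier() and getAggregateType() omitted for brevity
}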

The logic inside SubscriptionPlanFactory#instantiate is also used by the command handler when processing SubscriptionPlanCreateCommand, so it is a good idea to promote SubscriptionPlanFactory to the standard factory for SubscriptionPlanAR - one that is not only able to instantiate an empty aggregate root, but can also create a new, valid aggregate root as dictated by the business requirements. To do so we will add a public create method to the factory:
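A sketch (the command's properties and the use of Axon's IdentifierFactory are my assumptions):

public SubscriptionPlanAR create(SubscriptionPlanCreateCommand command) {
    // the identifier is generated up front - event sourced ARs
    // can't rely on database-generated identifiers
    String id = IdentifierFactory.getInstance().generateIdentifier();
    SubscriptionPlanAR plan = instantiate();
    plan.create(id, command.getName()); // publishes SubscriptionPlanCreatedEvent
    return plan;
}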


Please note that event sourced ARs can't rely on auto-generated identifiers (identifiers generated after the AR is saved to the database); that's why our factory generates the identifier before creating the AR (previously this was the responsibility of the command handler).

Please also note that the main construction logic (application and handling of SubscriptionPlanCreatedEvent) is still placed inside the AR itself, but now it is contained in the create method instead of the constructor:
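Inside SubscriptionPlanAR this could look roughly as follows (event properties assumed; in newer Axon versions the applying method is annotated with @EventSourcingHandler):

public void create(String id, String name) {
    // publish: record the fact, do not modify state here
    apply(new SubscriptionPlanCreatedEvent(id, name));
}

@EventHandler
private void handle(SubscriptionPlanCreatedEvent event) {
    // apply: initialize state from the event - this runs both for new
    // instances and when replaying the event stream
    this.id = event.getId();
    this.name = event.getName();
}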

Worth noting is the separation between event publishing (the create method) and event application (the handle method). Previously (without support for event sourcing) both operations could be implemented in one method (or constructor) (see: Account.java); now they need to be split so that loading the aggregate root from the event stream is possible.

Now, the implementation of the command handler is straightforward:
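A sketch (wiring and names assumed):

public class SubscriptionPlanCommandHandler {

    private Repository<SubscriptionPlanAR> repository;
    private SubscriptionPlanFactory factory;

    @CommandHandler
    public void handle(SubscriptionPlanCreateCommand command) {
        SubscriptionPlanAR plan = factory.create(command);
        repository.add(plan); // registers the new aggregate with the unit of work
    }

    public void setRepository(Repository<SubscriptionPlanAR> repository) {
        this.repository = repository;
    }

    public void setFactory(SubscriptionPlanFactory factory) {
        this.factory = factory;
    }
}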

Test fixture


We are now ready to configure Axon's given-when-then test fixture that can be used for testing different types of Aggregate Roots:
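A sketch of an abstract test base (method names follow the Axon 2.x test API as far as I know; details may differ between versions):

public abstract class AbstractAggregateRootTest<T extends EventSourcedAggregateRoot> {

    protected FixtureConfiguration<T> fixture;

    @Before
    public void setUpFixture() {
        fixture = Fixtures.newGivenWhenThenFixture(aggregateType());
        fixture.registerAggregateFactory(aggregateFactory());
        // the command handler receives the repository created by the fixture
        fixture.registerAnnotatedCommandHandler(commandHandler(fixture.getRepository()));
    }

    protected abstract Class<T> aggregateType();

    protected abstract AggregateFactory<T> aggregateFactory();

    protected abstract Object commandHandler(Repository<T> repository);
}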


A concrete implementation of the test class will need to define the aggregate type, the aggregate factory (Axon provides GenericAggregateFactory, which can be used if no custom initialization of the AR is required) and the command handler. Both the aggregate factory and the command handler are then passed to specialized registration methods of Axon's FixtureConfiguration. At the end, the command handler is given a reference to the repository that was created by the GivenWhenThenFixture (in the production scenario the command handler works with a JPA-based implementation of the Repository interface; in the test scenario the repository (of class EventSourcingRepository) and the event store are constructed by Axon's test framework).

Finally, we can implement unit tests by simply declaring commands and events in the given-when-then style, as follows:

 - Given: a set of historic events
 - When: I send a command
 - Then: expect certain events

Sometimes it is shorter to declare commands instead of events in the given section, as one command can result in multiple events. Axon supports that too. It is also possible to assert the return value or the exception returned/thrown by the command handler. It should be noted that all events and commands should refer to a single AR only. If you want to test Saga classes (interactions between different ARs) you need to use AnnotatedSagaTestFixture.

Let's see a couple of example tests:
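For instance (command and event names are illustrative, not from the original post):

@Test
public void shouldRenameSubscriptionPlan() {
    fixture.given(new SubscriptionPlanCreatedEvent("plan-1", "basic"))
           .when(new SubscriptionPlanRenameCommand("plan-1", "premium"))
           .expectEvents(new SubscriptionPlanRenamedEvent("plan-1", "premium"));
}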



What happens in the background when a test is executed can be described in a few steps:
 - given section
    - given events (or events published as the result of given commands execution) are saved to in-memory event store
 - when section
    - command handler is invoked
    - command handler asks repository to load aggregate root by id
    - repository loads AR from event store (aggregate factory creates empty AR and then events are applied to that AR)
    - command handler invokes appropriate business method on AR

Looking at the test class you can see that there is no code related to infrastructure (no dependencies on JPA, no SQL import statements, etc.); we don't care about the persistence layer at all. We test the external interface represented by commands (input) and events (output), which decouples the tests from the actual AR being tested. The tests are self-explanatory out of the box. Creating a test in this way could easily be supported by some GUI tool that would allow building the test by dragging widgets representing events, commands, entities and exceptions into a form (containing given, when and then sections) and filling in their properties :)

16 September 2013

Monads - function composition on steroids

Once you start learning functional programming (whatever language you choose), sooner or later you will run into the Monad.

Your initial understanding of the Monad concept, after reading a couple of tutorials (many of them are available on the web), might not be very good, and that's because of The Curse of the Monad:
In addition to it being useful, it is also cursed and the curse of the monad is that once you get the epiphany, once you understand - "oh that's what it is" - you lose the ability to explain it to anybody.
        -- Douglas Crockford
Although the curse might be real, I'm writing yet another monad tutorial here because, as I believe, the best way to understand the monad is to write a tutorial about it :-)

Monad - introduction


The concept originates from category theory. In functional programming:

a monad, as a generic concept, describes how to build chains/pipelines of operations, while a concrete monadic type defines what it actually means to chain/compose operations.

Thus we can extract two concepts here:
  • Monad - provides the abstract interface for monadic operations
  • Monadic type - a particular class that implements monadic operations
Thinking of Monad as an interface and a monadic type as its implementation might be a good initial approach, but we need to keep in mind that we are talking about very generic concepts here. Scala (we will use Scala to implement some examples) does not even provide an abstract representation of Monad (there is no Monad trait). Despite this, Scala supports monadic types (more on this later on), and the only requirement for any class to be a monadic type is to implement the monadic operations. Haskell, on the other hand, being a purely functional language, provides an abstract representation of Monad (in the form of a type class). I mention Haskell because it was the first language that popularized the use of monads.

Generally speaking, monads are used to make things composable. Many different things (such as computations or data containers) can be expressed as monadic types. Because of that universal nature of monads, many functional programming languages provide special syntax (Haskell, Scala) or functions (Clojure) to make composition of monadic types easy to write in code.

Before we get to monadic types, we will first see how function composition and application work, and then we will see how to use those concepts to build composable structures (monadic types). We will use Haskell notation to define functions, as it is very concise.

Function application


Let's define a function f that returns its argument multiplied by 2:
f :: Int -> Int -- f takes an Int, returns an Int

f = \x -> 2 * x -- f is defined using a lambda expression (anonymous function), hence \x -> ...
Then:
f 2 --> has value 4
Now, calling a function with an argument can be generalized as a new function that calls a given function (passed as the first argument) with a given argument (passed as the second argument), or in other words, applies the given function to the given argument. This new function can be expressed as the function application operator ($ in Haskell):
($) :: (a -> b) -> (a -> b)
-- $ takes a function and returns a function; this is the same as ((a -> b), a) -> b. Instead of passing all arguments at once, we can pass the first argument to the function, receive a partially applied function, and pass the second argument to it. This is called currying (http://en.wikipedia.org/wiki/Currying).

f $ 2  --> has value 4
-- $ is an infix operator, so it must be put between its arguments

($) f 2  --> has value 4
-- Putting parentheses around an infix operator converts it into a prefix operator
If we define a reverse apply operator:

(>$>) :: a -> (a -> b) -> b
x >$> f = f x

we can express function application as follows:

2 >$> f

Notice that the unix shell's pipe (|) operator works this way - you produce some data and then apply a program to it to transform it in some way. The arrows in >$> indicate the direction of data flow (the result of the previous operation is forwarded to the next operation).

Function composition


Let's say we have two functions f and g and a value x with the following types:
x :: a
f :: a -> b
g :: b -> c
for some types a, b, and c.

We can create a new function h :: a -> c that combines f and g.

h = g . f = \x -> g (f x)

So, composing two functions produces a new function that, when called with a parameter x, is the equivalent of calling f with the parameter x and then calling g with that result.

Note that this will only work if the types of f and g are compatible, i.e. if the type of the result of f is also the type of the input of g.

Example usage:
f = \x -> x + 2
g = \x -> x * 3
h = g . f 
h 2 --> has value 12

Composition in terms of application


We can easily discover that h can be defined as the application of g to the result of f.
To better visualize the data flow we need another operator, a reversed version of the function composition operator (.):

(>.>) :: (a -> b) -> (b -> c) -> (a -> c)
f >.> g = \x -> g (f x)

h = f >.> g

So, the key here is to understand that in order to combine two functions we need to provide a new function that calls the second function with the result of the first function.
As it will turn out, the >.> function is the core ingredient of a monadic type.


Monadic type


So far we have only considered functions as units of composition. But what we want to achieve is a composable structure/class.
Let's introduce classes to our functions and start thinking about how we can compose them, applying the rules of function composition.
To implement the examples, we will use Scala - a hybrid functional-OO language.

Let's introduce an abstract generic trait Monad[A] and a simple class IntWrapper that will extend Monad and wrap an integer value:

trait Monad[A] {}
case class IntWrapper(value: Int) extends Monad[Int] {}

Now, instead of functions Int -> Int, let's define functions Int -> Monad[Int]:

f :: Int -> Monad[Int]
f = \x -> IntWrapper(x + 2)

g :: Int -> Monad[Int]
g = \x -> IntWrapper(x * 3)

So, how can we compose those functions?

h = f >.> g // won't work

We can't use the >.> operator because the types do not match. It is not possible to apply Int -> Monad[Int] to Monad[Int]. We need Int, not Monad[Int].
The solution could be to create a different composition operator/function that would extract the Int from Monad[Int] before performing the application, but that would require Monad to provide some kind of extract method. That doesn't sound like a good idea (and in fact it would break the whole idea of the Monad).

As a great OO developer you have probably already come up with the solution: the Monad itself should provide this function as a method! That way, the class itself will define what it means to compose it with another monad of the same type! This method is one of the two fundamental monadic operations; it is usually called bind - in Haskell it is >>=, in Scala it is flatMap. Let's stay with the Haskell notation for a moment.

Notice that >>= is very similar to >$> (the reverse function application operator), except that it works for "monadic functions" - functions returning a monadic type.

So now, assuming our IntWrapper implements >>=, we can define the h function that combines f and g:

h = f >>= g

or, keeping in mind that h must be called with a parameter that will be passed to f, we can define h as follows:

h = \x -> f x >>= g

Conceptually >>= is still a function application operator (it takes the result of one function and applies another function to it):
(Int -> Monad[Int]) >>= (Int -> Monad[Int])
but in fact >>= is a method of the monadic type with the following signature:
>>= :: (Int -> Monad[Int]) -> Monad[Int]
The method >>= must apply the given function Int -> Monad[Int] to some Int (it must call the given function with some Int argument), but what exactly the value of that argument will be is decided by the monadic type itself (our IntWrapper may decide to just pass its value member to the function).

With the implementation of >>=, a monadic type describes the meaning of composition. It defines what it means to compose it with another monad of the same type.

We can also say that by implementing >>=, a monadic type defines how to apply a given function to itself.

Let's see how we can compose IntWrappers, assuming IntWrapper defines composition as just passing its Int value forward:
case class IntWrapper(value: Int) extends Monad[Int] {
  // >>= is a perfectly legal method name in Scala; no override modifier,
  // since our empty Monad trait does not declare it
  def >>=(f: Int => Monad[Int]): Monad[Int] = {
    f(value)
  }
}
def h(a: Int) = {
  IntWrapper(a + 2) >>= (b => IntWrapper(b * 3))
}
h(2) // <-- has value IntWrapper(12)

OK, this is not a very useful application of a monad, as nothing is happening (in the background) during composition.
In the next, still simple (or even stupidly simple, but more fun) example we will define one more monadic operation that will let us make use of the syntactic sugar provided by Scala to compose monads more easily.

Uncertainty principle


In quantum mechanics, there is a rule saying that there are pairs of physical properties of a particle (observables), known as complementary variables, that cannot be measured simultaneously with arbitrary precision. Complementary variables are, for example, the position and momentum of a particle.

We can encapsulate the uncertainty of measurement inside an Observable monadic type by implementing >>= appropriately, ensuring that any calculation that takes any number of Observables can only be performed as a monadic operation, meaning Observables must be composed in order to unveil the values required to calculate the result (another Observable).

Putting it differently, Observable will not allow accessing its value directly, but only when used in a special context (we can call it a laboratory, or a measurement) which can be easily created using Scala's for comprehension - syntax that simplifies monadic composition.

The essential part will of course be the >>= method of the Observable class. What we can do is create a new value as the sum of the internal (real) value property of the Observable and some random value (between 0 and 1) whenever composition occurs (to simulate the distortion of one of the two observables that are measured) and forward this new value down the composition chain:
case class Observable[A <: Number](value: A) extends Monad[Number] {
  // remember, flatMap is what Scala expects as the monadic apply (aka bind) operator; it is >>= in Haskell
  def flatMap[B](f: Number => Monad[B]): Monad[B] = {
    f(Math.random() + value.doubleValue())
  }
  // a map method (discussed below) is also needed for the for comprehension to compile
}
Now we can define a method that performs a measurement and calculates some value as a result:
def combinationOfPositionAndMomentum = for /*start measuring*/ {
  position <- Observable(4)
  momentum <- Observable(6)
} yield /*calculate result*/ {
  BigDecimal.valueOf(
    if (position.doubleValue() > 4.5) {
      momentum.doubleValue() + position.doubleValue()
    } else {
      momentum.doubleValue() - position.doubleValue()
    }
  )
}

combinationOfPositionAndMomentum // <-- has value Observable(10.95)
combinationOfPositionAndMomentum // <-- has value Observable(1.83)
After executing the two experiments and checking the results (10.95 and 1.83) we can conclude that the uncertainty principle is a fact ;)
Well, what we can say for sure is that the flatMap method has been called, which indicates that composition took place.

As you can see, there is no invocation of the flatMap method on the first Observable in the code. This method call is generated automatically by the compiler. The function which is passed to the flatMap method of the first Observable is also created by the compiler; basically, what it does is take the expression inside the yield block and wrap it into the monadic type. The complete code that is generated looks like this:
Observable(4) flatMap
  (position => Observable(6).map(
    momentum => /* content of yield (result calculation) */
  ))
As you can see, a monadic type M[A] is expected to implement the method map :: (A -> B) -> M[B] (the second monadic operation, next to bind) that is able to create M[B] using a given function A -> B, assuming the monadic type provides a "constructor" applicable to B.

In our example:
  • the type of the input observables is Observable[Integer]
  • the type of the expression inside the yield block is BigDecimal
  • the type of the whole expression (for in Scala is an expression; it returns a value) is Observable[BigDecimal]
therefore, the invocation of map on the last observable in the block is necessary to do the "conversion" from BigDecimal to Observable[BigDecimal].

Now, let's do the final experiment:
val singleMeasurement = for {
  v1 <- Observable(4)
} yield {
   v1 + 4
}
singleMeasurement // <--- has value Observable(8)
Obviously, this time flatMap was not invoked, just map. So, as expected, when we do some calculation with only one Observable, distortion of the measurement does not occur.

You can check the complete source code here.

Let's now see some examples of monads that are available in Scala.

Some well known monads


Option


Option[A] - represents a value that is either a single value of type A, or no value at all. This is a hammer for nulls and helps avoid NullPointerException. When dealing with several optional values (instances of Option), we avoid checking for null (using nested if statements) by simply using monadic composition.
val map: Map[String, Int] = ...
for {
  value1 <- map.get("key1")
  value2 <- map.get("key2")
} yield {
  value1 + value2
}
// result is None if either "key1" or "key2" is missing
// As soon as None appears in the chain of calls, it is propagated to the end of the chain.

List


List[A] - you might be surprised, but a list is a monad too. That's why you can compose lists easily:
for {
  v1 <- List(1, 3, 5)
  v2 <- List("2", "4", "6")
  if v1 < v2.toInt
} yield {
 v1 * 10 + v2.toInt
}
// result is List(12, 14, 16, 34, 36, 56)

Future


Future[A] - represents a value of type A that is the (initially unknown) result of some operation. Helps with chaining operations that are executed asynchronously. See: Future is a Monad
def slowCalcFuture: Future[Int] = ...
val future1 = slowCalcFuture
val future2 = slowCalcFuture
def combined: Future[Int] = for {
  r1 <- future1
  r2 <- future2
} yield r1 + r2

Either (from Scala core) or Validation (from scalaz)


Either[A, B] - represents a value that is either A or B (typically an error or the result of a computation). Helps hide the boilerplate "try/catch logic".
Validation (from scalaz) allows accumulating errors (Validation is actually not a monad but an applicative functor).
def costToEnter(p: Person): Validation[String, Double] = {
  for {
    a <- checkAge(p)
    b <- checkClothes(a)
    c <- checkSobriety(b)
  } yield (if (c.gender == Gender.Female) 0D else 5D)
}
//example source code found here: https://gist.github.com/oxbowlakes/970717

Other examples


- Towards an immutable domain model - monads - an interesting way of handling validation and operation chaining in event sourced aggregates using a monadic type (Behavior)

Composing monads


It may sound strange, but monads generally do not compose with each other. Of course, there is always a workaround: monad transformers. We will not go that far... I will end here.

13 February 2013

Breaking the Spring by example

First of all, I'm not against Spring, IoC frameworks, or frameworks in general. Quite the contrary, I'm a big fan of many different frameworks. That said, I must admit that sometimes, when a framework hides too much magic, you risk losing control of your application. Once you realize that you've just created a zombie app you are afraid of, it may be too late ;) So be careful with framework selection, even if you are thinking of a commonly used framework like Spring...

Spring - ultimate IoC container


Spring Framework is one of the most popular IoC frameworks and probably the richest in terms of provided features. With every new release, the number of capabilities and supported technologies increases, making Spring an attractive candidate for a wide range of Java projects.

One would expect that using such a mature, popular, battle-proven framework should be easy and straightforward, at least when it comes to its core functionality. You just annotate your beans, create a configuration bean, and everything should work as expected. But sometimes it doesn't... As it turns out, having a large number of capabilities doesn't mean you can freely play with them, mix them, and expect Spring to handle all cases seamlessly.

While working recently with Spring, I found several cases where mixing some features resulted in unexpected or undocumented behavior. I was also surprised to find out that some features were missing, and to solve the problem I had to find a workaround.

Needless to say, the more often you are surprised by the framework, the more often you wonder whether the decision to use it was correct... After all, the time you lose learning the nuances, limitations and bugs of the framework could be better spent implementing your own infrastructure, especially if all you need is just a small subset of what the framework provides.

Let's go to the examples.
I will show you how to make Spring surprise you (well, at least it surprised me).

Let's break the Spring

1) Lazy secondary beans (using @Primary and @Lazy annotations)

Spring provides the @Primary annotation, which should be used to indicate "that a bean should be given preference when multiple candidates are qualified to autowire". Great - that could be useful for tests. By configuring primary beans inside the test configuration we could easily replace the default beans from the main configuration.

Main configuration:
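Something along these lines (MyService and the implementing classes are assumed names):

@Configuration
public class MainConfig {

    @Bean
    @Lazy
    public MyService myService() {
        return new DefaultService(); // imagine heavy initialization logic here
    }
}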

Additional test configuration:
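And the test configuration marking its bean as primary (again, names assumed):

@Configuration
public class TestConfig {

    @Bean
    @Primary
    public MyService testMyService() {
        return new TestService(); // expected to replace the default bean in tests
    }
}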


When starting the test, the test service will be instantiated. Since the default service bean is configured to be lazily instantiated, it should not be instantiated by Spring at all, because it is not used, right? Well, unfortunately Spring ignores the @Lazy annotation in this case. So you may be surprised if you put some heavy initialization logic inside your default service and expect it not to be executed during the tests...

2) @Primary annotation and service locator

Sometimes, when working with prototype scoped beans, it is necessary to ask Spring for a specific bean using the service locator pattern. Please see the example below, which complements the code from example 1:
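A sketch (names assumed; the ApplicationContext plays the role of the service locator here):

@Component
public class MyServiceLocator {

    @Autowired
    private ApplicationContext context;

    public MyService locateService() {
        // despite @Primary on the test bean, this lookup fails with an
        // exception complaining that two candidates of type MyService exist
        return context.getBean(MyService.class);
    }
}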

Surprisingly, Spring will throw an exception when trying to run the code above, complaining that two candidates of type MyService are available... This is just a bug, nothing more.

3) Events and prototype listeners

Spring provides support for event based communication. You can easily publish an application event from one corner of your system and handle the event in another corner by registering an appropriate event listener. It works until you need your listener to be a prototype scoped bean.
Spring will notify only one instance of the listener about the event - the instance that was created during Spring initialization (when ContextRefreshedEvent is published by Spring). There is no way to inform all instantiated prototypes about the event.
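For illustration, a prototype-scoped listener might look like this (event and listener names are mine; MyEvent is assumed to extend ApplicationEvent):

@Component
@Scope("prototype")
public class MyEventListener implements ApplicationListener<MyEvent> {

    @Override
    public void onApplicationEvent(MyEvent event) {
        // only the instance created during context startup gets here;
        // prototype instances obtained later are never notified
    }
}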

4) Custom qualifiers and annotation-based configuration

Using annotations like @Qualifier you can control the selection of candidates among multiple matches. But the default usage of @Qualifier allows only bean names (Strings) as identifiers, which is error-prone and refactoring-unfriendly.

Fortunately, there is another solution. You can define your own annotation that can be used as a qualifier. Just use @Qualifier as a meta-annotation! An example of such a custom qualifier can be found below:
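A sketch of such a qualifier (the Genre name and its value attribute follow the example from the Spring reference documentation; note that METHOD is not among the targets, which will matter in a moment):

@Target({ElementType.FIELD, ElementType.PARAMETER, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Qualifier
public @interface Genre {
    String value();
}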

You can then define several beans of the same type but with different qualifier values.
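In XML this could look roughly as follows (bean classes assumed; the <qualifier> element comes from the Spring beans schema):

<bean class="example.SimpleMovieCatalog">
    <qualifier type="Genre" value="Action"/>
</bean>

<bean class="example.SimpleMovieCatalog">
    <qualifier type="Genre" value="Comedy"/>
</bean>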

..and independently inject beans with different qualifiers:
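For example (field names assumed):

@Autowired
@Genre("Action")
private MovieCatalog actionCatalog;

@Autowired
@Genre("Comedy")
private MovieCatalog comedyCatalog;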

So far so good. Now, where is the surprise, you may ask... As you probably noticed, we used XML configuration to define the beans with different qualifiers. Nobody uses XML nowadays ;) so there must have been a good reason to use it... The reason is that (surprise!) you can't define several beans of the same type with different custom qualifiers using annotation-based configuration, because it is not possible to annotate a factory method with a custom qualifier:

! The code below does not compile !


You can only annotate the type with a custom qualifier:
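For example (class names assumed):

@Genre("Action")
public class ActionMovieCatalog extends SimpleMovieCatalog {
}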

but this approach requires a new class for each qualifier value.

I don't know why Spring does not support annotating factory methods inside the configuration as described above. You will find only the following explanation in the Spring documentation:

"As with most annotation-based alternatives, keep in mind that the annotation metadata is bound to the class definition itself, while the use of XML allows for multiple beans of the same type to provide variations in their qualifier metadata, because that metadata is provided per-instance rather than per-class."

5) Properties and annotation parameters

What's strange about the following code?
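A reconstruction of the kind of code meant here (property names assumed):

@Component
public class Schedulers {

    // scheduler 1: cron is a String parameter, so a property placeholder works
    @Scheduled(cron = "${scheduler1.cron}")
    public void scheduler1() {
    }

    // scheduler 2: fixedRate is a long - "${scheduler2.rate}" would not compile, so the value is hardcoded
    @Scheduled(fixedRate = 5000)
    public void scheduler2() {
    }
}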

Configuration for scheduler 1 is read from Properties, while for scheduler 2 it is hardcoded. Why? Because you can assign a property expression (which is a String) only to parameters of type String. Unfortunately, the fixedRate parameter is of type long... The solution would be to switch to XML configuration (again? oh no...) or implement some workaround: http://stackoverflow.com/questions/11608531/injecting-externalized-value-into-spring-annotation.

6) ...

I guess if you have worked with Spring, you could add another case here.
Feel free to drop a comment with your biggest surprise from Spring.