Wednesday, July 21, 2010

Lambdas in Java Preview - Part 5: Apache Wicket

This is the fifth part in a series of blog posts (read the previous part) giving some practical examples of lambdas, what functional programming in Java could look like and how lambdas could affect some of the well-known libraries in Java land. This time I'll have a look at how the addition of lambdas could change the API of the Apache Wicket web framework.

If you are not familiar with Wicket, this article may be of limited use to you. In short, Wicket is a component-oriented web framework whose programming model is in some ways similar to that of Swing. Some of the ways lambdas could change Wicket would also apply to Swing, so this article may still be useful even if you don't use Wicket.

The reason why Wicket is a good example of how the introduction of lambdas into the Java language might change an API is that it makes heavy use of anonymous inner classes. Anonymous inner classes in Java are often painful to look at, and one of the primary reasons to add lambdas to Java is to reduce the need for them, or rather to simplify their usage. This is done mainly through SAM conversion, which means that a lambda expression (or anonymous function) can be converted to a SAM type, i.e. an interface or abstract class with a single abstract method, when it appears in a position where a SAM type is expected (see the previous parts for examples). And since Wicket makes heavy use of anonymously implemented abstract classes (and some of these classes are simple callback classes that in fact are SAM types), there could be a lot of places where we can replace these anonymous inner classes with lambda expressions.

AbstractReadOnlyModels

One of the classes that is most often implemented anonymously is AbstractReadOnlyModel. It has a single abstract method, getObject(). In other words, it's a SAM type.
IModel<String> nameModel = 
   new AbstractReadOnlyModel<String>() {
     @Override
     public String getObject() {
       Person person = personModel.getObject();
       return person.getFirstName() + ", " 
                + person.getLastName();
     }
  };

TextField<String> name = new TextField<String>("name", nameModel);
The important part of this is just the body of the getObject method; about the same amount of code is pure noise. With lambdas we could remove most of this noise.
AbstractReadOnlyModel<String> nameModel = #() {
  Person person = personModel.getObject();
  return person.getFirstName() + ", " 
           + person.getLastName();
};

TextField<String> name = new TextField<String>("name", nameModel);
Here we basically replaced the anonymous inner class with a lambda expression, which has the same body as the getObject() method before. This lambda expression will automatically be converted to an AbstractReadOnlyModel.

One thing to note is that the compiler must have a chance to infer the target SAM type. In the example above we explicitly declared the variable to be of type AbstractReadOnlyModel. But if you call, for example, the constructor TextField(String wicketId, IModel model), you cannot pass a lambda expression inline like this

TextField<String> name = new TextField<String>("name", #() {
  Person person = personModel.getObject();
  return person.getFirstName() + ", " 
           + person.getLastName();
});
because IModel is not a SAM type and the compiler cannot infer that you want the lambda to be converted to an AbstractReadOnlyModel.
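To see why, here is roughly what IModel looks like in Wicket (a simplified sketch, not the complete interface): it inherits detach() and adds two more methods, so there is more than one abstract method and SAM conversion cannot kick in.
public interface IDetachable {
  void detach();
}

public interface IModel<T> extends IDetachable {
  T getObject();
  void setObject(T object);
}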


LoadableDetachableModels

Another commonly used model class is LoadableDetachableModel, which has the load() method as its single abstract method.
final Long personId = ...
IModel<Person> personModel = 
  new LoadableDetachableModel<Person>() {
    @Override
    protected Person load() {
      return getPersonDao().get(personId);
    }
};
So, just as with the AbstractReadOnlyModel, we can replace the anonymous class with a lambda expression.
final Long personId = ...
LoadableDetachableModel<Person> personModel = #() (getPersonDao().get(personId));

Components

Components are another essential kind of class in Wicket. Some Wicket components are SAM types, especially those that handle some kind of event, and such components are often implemented as anonymous inner classes. Take for example an AjaxButton:
AjaxButton submit = new AjaxButton("submit", form) {

  @Override
  protected void onSubmit(AjaxRequestTarget target,
      Form<?> form) {
    System.out.println("form submitted");
  }

};
Its single abstract method is onSubmit(), but there is an important difference to the SAM types shown before: components don't have a no-arg constructor, they all take at least a wicketId as a constructor argument. The proposals don't say whether SAM conversion will permit an argument list for constructors other than the no-arg constructor. If it were allowed, it could possibly look like this:
AjaxButton submit = 
    AjaxButton("submit", form)#(
      AjaxRequestTarget target, Form<?> form){
        System.out.println("form submitted")
      };
On the other hand, a future version of Wicket could be designed differently in order to better support SAM conversion. Let's build a subclass of AjaxButton that relies on composition instead of inheritance (Wicket in general takes the opposite approach, though). Instead of requiring callers to override onSubmit, our subclass could take a callback argument:
public class MyAjaxButton extends AjaxButton {

  private final IAjaxButtonOnSubmitCallback onSubmitCallback;

  public MyAjaxButton(String id, Form<?> form,
      IAjaxButtonOnSubmitCallback onSubmitCallback) {
    super(id, form);
    this.onSubmitCallback = onSubmitCallback;
  }

  @Override
  protected void onSubmit(AjaxRequestTarget target, Form<?> form) {
    onSubmitCallback.onSubmit(target, form);
  }
}
IAjaxButtonOnSubmitCallback is a SAM type
public interface IAjaxButtonOnSubmitCallback 
    extends IClusterable {

  void onSubmit(AjaxRequestTarget target, Form<?> form);

}
so we can call the constructor of our button with a lambda expression as the last argument (here with the arrow syntax and type inference of the latest proposal)
AjaxButton submit = 
  new MyAjaxButton("submit", form, { target, form -> 
    System.out.println("form submitted");
  });
The same principle (composition over inheritance) would work for a lot of other Wicket components, generally for all components that have some sort of callback mechanism for certain events, e.g. form submit events, form validation error events, link clicked events, dropdown selection changed events and so on. If all these events were handled by callbacks instead of abstract methods, SAM conversion could be used effectively.
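As a sketch of what this could look like for a link click event: MyLink and ILinkOnClickCallback are made-up names, DetailsPage stands in for any page class, and I'm using the #-syntax for the zero-argument lambda since it's not clear what the arrow syntax would look like without parameters. The client code is assumed to live inside a page class, so setResponsePage resolves against the enclosing page.
public interface ILinkOnClickCallback extends IClusterable {
  void onClick();
}

public class MyLink extends Link<Void> {

  private final ILinkOnClickCallback onClickCallback;

  public MyLink(String id, ILinkOnClickCallback onClickCallback) {
    super(id);
    this.onClickCallback = onClickCallback;
  }

  @Override
  public void onClick() {
    onClickCallback.onClick();
  }
}

Link<Void> details = new MyLink("details", #() {
  setResponsePage(DetailsPage.class);
});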

In general, there are two cases where we could think about introducing additional callback classes: first, if a class is a SAM type but does not have a no-arg constructor, we could replace the single abstract method with a call to a callback we pass to that component. And second, if a class isn't a SAM type but has more than one abstract method, we could provide a callback for each of these abstract methods (see the sketch below). This would change the design of Wicket quite a bit, but it would better support SAM conversion.
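A sketch of the second case, using Wicket's Form, whose onSubmit() and onError() hooks could both be driven by callbacks (MyForm and IFormCallback are made-up names):
public interface IFormCallback extends IClusterable {
  void execute();
}

public class MyForm<T> extends Form<T> {

  private final IFormCallback onSubmitCallback;
  private final IFormCallback onErrorCallback;

  public MyForm(String id, IFormCallback onSubmitCallback,
      IFormCallback onErrorCallback) {
    super(id);
    this.onSubmitCallback = onSubmitCallback;
    this.onErrorCallback = onErrorCallback;
  }

  @Override
  protected void onSubmit() {
    onSubmitCallback.execute();
  }

  @Override
  protected void onError() {
    onErrorCallback.execute();
  }
}
A form could then be constructed with two lambdas, one for the submit case and one for the validation error case.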

Another class of components where this principle applies is all sorts of populators. Take for example the columns of a DataTable.
IColumn<Person> personColumn = 
  new AbstractColumn<Person>(Model.of("Person")) {

    @Override
    public void populateItem(
        final Item<ICellPopulator<Person>> cellItem,
        final String componentId, final IModel<Person> rowModel) {
      ...
Again, AbstractColumn does not have a no-arg constructor, so we (probably) cannot use SAM conversion directly. But we could instead write our own subclass of AbstractColumn that takes an IColumnPopulator (a SAM type) and calls it from its populateItem method:
public class MyAbstractColumn<T> extends AbstractColumn<T> {

  private final IColumnPopulator<T> populator;

  public MyAbstractColumn(IModel<String> displayModel, IColumnPopulator<T> populator) {
    super(displayModel);
    this.populator = populator;
  }

  @Override
  public void populateItem(Item<ICellPopulator<T>> cellItem,
      String componentId, IModel<T> rowModel) {
    populator.populateItem(cellItem, componentId, rowModel);
  }
}
A column could then be created a bit more concisely like this:
IColumn<Person> personColumn = 
    new MyAbstractColumn<Person>(Model.of("Person"), 
     { cellItem, componentId, rowModel ->
 ...
});

In summary, Wicket has lots of opportunities to make use of SAM conversion, which would reduce the need for anonymous inner classes. In cases where these SAM types don't have a no-arg constructor it is not yet clear whether SAM conversion can be applied to them. For these types a different design that prefers composition over inheritance would allow SAM conversion, though. The same is true for classes that have more than one abstract method, which could use callback classes instead of abstract methods.




Thursday, July 15, 2010

Lambdas in Java Preview - Part 4: Proposal Update

This is the fourth part in a series of blog posts (read part 3 and part 5) giving some practical examples of lambdas, what functional programming in Java could look like and how lambdas could affect some of the well-known libraries in Java land. This part briefly describes the changes in a new proposal that was published while I was writing this series, and how it affects some of the examples.

The new proposal
Last week Brian Goetz published a new proposal to add lambda expressions to Java that builds on the previous straw-man proposal. The two biggest changes are that function types have been removed and that there will be more type inference than before. The new proposal also uses a different syntax for lambda expressions, although it states that this doesn't reflect a final decision on the syntax. Let's have a short look at each of these points.

No more function types
At first, this sounds confusing. Without function types, how could we pass functions to other methods or functions? We need to declare them as parameters somehow. The answer is that functions, or rather lambda expressions, have to be converted to SAM types. Remember that SAM types are single abstract method types, like interfaces with only one method or abstract classes with only one abstract method, and that lambda expressions can automatically be converted to such types. This is called SAM conversion, which I've also discussed in the first part of this series.

So, function types will essentially be replaced by SAM conversion: lambda expressions can only appear in places where they can be converted to a SAM type. I don't fully understand the reasons behind this decision, but in a way it actually makes sense. As briefly mentioned in the first part of this series, from an API designer's perspective you no longer have to scratch your head over whether your methods should take a function type or a SAM type as an argument. And SAM types would probably turn out to be more flexible anyway.
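To illustrate with the #-syntax used in this series (my own example): under the original straw-man a lambda could be assigned to a function type, while under the new proposal the same lambda has to target a SAM type such as java.io.FileFilter.
// old straw-man proposal: a function type as the target type
#boolean(File) isDir = #(File f)(f.isDirectory());

// new proposal: the same lambda must be converted to a SAM type
FileFilter isDir = #(File f)(f.isDirectory());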

However, this will change almost all of the examples in this series, because I've used function types almost everywhere: wherever a method took a function as an argument, the type of this argument must be changed to a SAM type. We'll have a look at this later and see whether it now makes sense to have some generic function interfaces like Function<X,Y>.

More type inferencing
As we've also seen in the examples in this series, lambda expressions can be quite verbose because of type declarations. The new proposal says that basically all types in lambda expressions can be inferred by the compiler - the return types, the parameter types and the exception types. This will make lambdas much more readable.

Alternative syntax
As said, the syntax of the new proposal does not reflect a final decision, but it is much more like that of Scala or Groovy.
FileFilter ff = {file -> 
    file.getName().endsWith(".java") };
Although I personally like this arrow syntax much better, I'll stick with the #-syntax in this series because the prototype uses it.

More changes
Just for completeness, the proposal contains some more changes: this in lambdas has different semantics than before, break and continue aren't allowed in lambdas, and instead of return, yield will be used to return values from a lambda expression (there are no details on how exactly yield works). Also, there will be method references, which allow for referencing methods of an existing class or object instance.

Impact on the examples
Now, let's have a look at how the new proposal changes some examples of the previous posts. The file filtering example from part 1 looked like this:
public static File[] fileFilter(File dir, #boolean(File) matcher) {
    List<File> files = new ArrayList<File>();
    for (File f : dir.listFiles()) {
        if (matcher.(f)) files.add(f);
    }
    return files.toArray(new File[files.size()]);
}
The client code looked like this:
File[] files = fileFilter(dir, #(File f)(f.getName().endsWith(".java")));
As there will be no function types anymore, we will have to change fileFilter so that it takes a SAM type instead of a function type. In this case it's easy: we can simply change the second argument's type to FileFilter and call its accept() method:
public static File[] fileFilter(File dir, FileFilter matcher) {
    List<File> files = new ArrayList<File>();
    for (File f : dir.listFiles()) {
        if (matcher.accept(f)) files.add(f);
    }
    return files.toArray(new File[files.size()]);
}
The client side is still the same, though.

Now, as you might have noticed (or remembered from part 2), this is quite a bad example, because there already is a File.listFiles(FileFilter) method we can call with a lambda:
File[] files = dir.listFiles(#(File f)(f.getName().endsWith(".java")));
The lambda will automatically be converted to a FileFilter in this case. But even though the fileFilter method is a bad example, it shows quite well that the removal of function types has very little impact if there is already a corresponding SAM type.

Before we go further, here's the client side with the arrow syntax and type inference:
File[] files = fileFilter(dir, f -> { 
    f.getName().endsWith(".java") });
Now, let's see how the removal of function types changes the examples in the absence of an appropriate SAM type. For that, here is the List<T>.map function from part 3 again:
public <S> List<S> map(#S(T) f) {
    List<S> result = new ArrayList<S>();
    for (T e : this) {
        result.add(f.(e));        
    }    
    return result;
}
We need to replace its function type argument f with a SAM type. We could start by writing an interface Mapper with a single method map, an interface specifically for use with the map function. But bearing in mind that there are probably many more functions similar to our map function, we could create a more generic Function interface, or more specifically an interface Function1<X1,Y>, which represents a function that takes exactly one argument of type X1 and returns a value of type Y.
public interface Function1<X1,Y> {
    Y apply(X1 x1);
}
With this we can change our map function to take a Function1 instead of a function type argument:
public <S> List<S> map(Function1<T,S> f) {
    List<S> result = new ArrayList<S>();
    for (T e : this) {
        result.add(f.apply(e));        
    }    
    return result;
}
Again, the client side would still be the same. The lambda expression will be converted to a Function1.
List<Integer> list = Arrays.asList(2, 4, 2, 5);
List<String> result = list.map(n -> {"Hello World".substring(n)});
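The same conversion applies to functions of more than one argument. For example (my own adaptation, and the two-argument lambda syntax is a guess), the foldLeft method from part 3 could take a hypothetical Function2<X1,X2,Y> instead of a function type:
public interface Function2<X1, X2, Y> {
    Y apply(X1 x1, X2 x2);
}

public <S> S foldLeft(S z, Function2<S, T, S> f) {
    S acc = z;
    for (T e : this) {
        acc = f.apply(acc, e);
    }
    return acc;
}

Integer sum = list.foldLeft(0, (a, b) -> {a + b});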
The other examples are basically the same, so I think this is enough to show the changes that the new proposal brings by removing function types.

Finally, let's have a short look at the new syntax and the stronger type inference. With the initial proposal, a File.eachLine method could be called like this (see also part 2):
file.eachLine(#(String line) {
  System.out.println(line);
});
With the new proposal's syntax it would look like this:
file.eachLine(line -> {
  System.out.println(line);
});
And with some more syntactic sugar, e.g. if it were allowed to drop the parentheses, to leave out the parameter of a single-argument lambda and to refer to it as it, then it would look even more like a Groovy control structure:
file.eachLine {
  System.out.println(it);
};
But this is not part of any proposal yet.


To summarize, the removal of function types does not have such a huge impact on the examples, and in a way it removes complexity and feels more Java-like. To me it would be quite reasonable to have generic function SAM types in the standard libraries, e.g. Function0 for a function that takes no arguments, Function1 for a function that takes one argument and so on (up to Function23, then Java would have more than Scala, yay!). The stronger type inference removes further complexity.



Tuesday, July 13, 2010

Lambdas in Java Preview - Part 3: Collections API

This is the third part in a series of blog posts (read part 1, part 2 and part 4) giving some practical examples of lambdas, what functional programming in Java could look like and how lambdas could affect some of the well-known libraries in Java land. In this part I'll focus on how the addition of lambdas could affect one of the most used standard APIs - the Collections API.

Note: While I was writing this post, a new proposal was published that uses a different syntax for lambdas, removes function types and uses more type inference. I'll post a short description of the changes soon. The new proposal would change some of the examples in this post, but the examples are still essentially valid and useful.


The Collections API is one of the most used APIs in Java. Its most commonly used collection types are lists, sets and maps, and the Collections class, which provides utility methods for operating on these collections, is widely used as well. The reason why I chose the Collections API as an example of how lambdas might influence existing APIs is that certain kinds of operations on collections can be written much more naturally in a functional programming style than in an imperative style. This is also one of the reasons for the growing number of projects that aim to bring functional elements to Java in the form of libraries, e.g. Google Collections or lambdaj.

Status quo
One of the common operations on lists is filtering a list for elements matching a certain condition, say for example filtering a list of integers for the even ones. We could implement a method findEven like this:
List<Integer> findEven(List<Integer> list) {
  List<Integer> result = new ArrayList<Integer>();
  for (Integer n : list) {
    if (n % 2 == 0) result.add(n);
  }
  return result;
}
We can call it with a list of integers and it will return a list containing only the even ones. Works just fine, but what if we also want a method that returns only the odd elements? There is no straightforward way of building an abstraction for that, so we'd write a method findOdd which would look 99.3865% the same as findEven.

Google Collections to the rescue: we can use the Iterables.filter method, which takes an Iterable and filters it by a given Predicate.
Iterable<Integer> even = Iterables.filter(list,
  new Predicate<Integer>() {
    @Override
    public boolean apply(final Integer n) {
        return n % 2 == 0;
    }
  });
This filter method is way more flexible, as it can take any predicate you like to filter by. But it really doesn't look nice, at least not unless you build your own library of reusable predicates.
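Such a library of reusable predicates could look roughly like this (my own sketch; Predicates2 is a made-up name to avoid clashing with Google Collections' own Predicates class):
public final class Predicates2 {

  public static Predicate<Integer> even() {
    return new Predicate<Integer>() {
      @Override
      public boolean apply(Integer n) {
        return n % 2 == 0;
      }
    };
  }
}

Iterable<Integer> even = Iterables.filter(list, Predicates2.even());
This helps at the call site, but you still end up writing an anonymous inner class per predicate somewhere.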

Simplification through lambdas
Now, if we have a look at it, the essential part of the predicate is just the expression n % 2 == 0. The rest is boilerplate. With lambdas we can actually remove all(!) of this boilerplate. To do that, we implement a filter method quite similar to the one from Google Collections, but instead of taking a predicate of type Predicate as the second argument, it takes the predicate as a function, so we can pass a lambda:
public static <T> List<T> filter(List<T> list, 
    #boolean(T) predicate) {
  List<T> result = new ArrayList<T>();
  for (T t : list) {
    if (predicate.(t)) result.add(t);
  }    
  return result;
}
Say we have implemented this filter method in a class called FCollections; calling it then looks like this:
List<Integer> list = Arrays.asList(4,3,2,6,4);
List<Integer> even = FCollections.filter(list, #(Integer n)(n % 2 == 0));
Compare that to calling the Google Collections filter method. It's much more concise, readable and expressive.

Extension methods
Before we have a look at more operations on collections of this sort, I want to mention a feature that will probably also be introduced in JDK7 - extension methods.

Notice that we have put our filter method in a utility class FCollections. But filter should probably rather be a method of List or Collection itself. Unfortunately, adding a new method to these interfaces would break existing subclasses of List or Collection, something we definitely wouldn't want. Extension methods allow for adding a new method to an interface while specifying a default implementation of this method:
public interface List<E> extends Collection<E> {
    
    List<E> filter(#Boolean(E) predicate) import static Collections<E>.filter;
    ...
Now, if a subclass of List doesn't implement a filter method itself, Collections.filter is called by default. Collections.filter would look like our FCollections.filter method - it takes a list as the first argument, followed by the predicate, and it must return a List.

So with extension methods it would be possible to add new operations to collection types, like those shown in this post, without breaking existing subclasses (at least as long as these subclasses didn't already implement an incompatible filter method). But extension methods are not the primary topic of this post (search the web for extension methods or public defender methods for more details), so let's go back and look at some more useful operations on collections.

More operations
There's a bunch of operations that can be implemented in a more functional style, e.g. iterating a list and applying a function to each element.
public class MyList<T> extends ArrayList<T> {
  
  public void each(#void(T) f) {
    for (T e : this) {
      f.(e);        
    }     
  }
Instead of iterating with a for loop, we can call the each method like this:
list.each(#(Integer n)(System.out.println(n)));
Another example is the map function that takes a function, applies it to each element of the list and returns the result as a list:
public <S> List<S> map(#S(T) f) {
  List<S> result = new ArrayList<S>();
  for (T e : this) {
    result.add(f.(e));        
  }    
  return result;
}
We could use it to create a list of squares from a list of integers:
List<Integer> squares = list.map(#(Integer n)(n*n));
foldLeft is another method that combines the elements of a list using a function f from left to right (foldRight does the same from right to left), starting with a value z, i.e. for a five-element list the result is f(f(f(f(f(z,a1),a2),a3),a4),a5)
public <S> S foldLeft(S z, #S(S, T) f) {
    S acc = z;
    for (T e : this) {
        acc = f.(acc, e);
    }    
    return acc;
}
A simple way to use it would be to calculate the sum of the elements:
Integer sum = list.foldLeft(new Integer(0), #(Integer a, Integer b)(a+b));
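Since the accumulator type S can differ from the element type T, foldLeft can for example also build a string from the integers (my own example, same straw-man syntax):
String csv = list.foldLeft("", #(String acc, Integer n)(acc + n + ";"));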
reduceLeft does the same, but without an initial value.
public T reduceLeft(#T(T, T) f) {
    if (isEmpty()) throw new UnsupportedOperationException("reduce on empty list"); 
    if (size() == 1) return get(0);

    T acc = get(0);
    for (T e : subList(1, size())) {
        acc = f.(acc, e);
    }    
    return acc;
}
Integer sum = list.reduceLeft(#(Integer a, Integer b)(a+b));
The find method searches for the first element matching a given predicate and returns it:
public T find(#Boolean(T) predicate) {
    for (T e : this) {
        if (predicate.(e)) return e;
    }    
    return null;
}
Integer found = list.find(#(Integer n)(n>5));
There are many more possibilities to use predicates in this way; here are some short examples:
boolean hasPrime = list.exists(#(Integer n)(isPrime(n)));
boolean allPrimes = list.forall(#(Integer n)(isPrime(n)));
int countPrimes = list.count(#(Integer n)(isPrime(n)));
List<Integer> withoutPrimes = list.remove(#(Integer n)(isPrime(n)));
Pair<List<Integer>> evenAndOdd = list.partition(#(Integer n)(n%2==0));

To summarize, the Collections API could benefit from the addition of lambdas in a lot of ways. For some sorts of operations, lambdas would increase readability, conciseness and expressiveness. Extension methods could make it possible to have some of the new operations directly in the collection classes instead of in utility classes like Collections.




Monday, July 5, 2010

Lambdas in Java Preview - Part 2: Functional Java

This is the second part in a series of blog posts (read part 1) giving some practical examples of lambdas, what functional programming in Java could look like and how lambdas could affect some of the well-known libraries in Java land. This part focuses on general functional programming techniques that will become available through the addition of lambdas. Functional programming (although I still wouldn't consider Java a functional programming language) will make Java code more concise, more expressive and more readable for certain kinds of problems.

Note: All the following examples work with the current prototype implementation of lambdas.

Simplifying code
Let's start with a simple example that shows how higher-order functions in general can lead to simplified code. Suppose we want to search for files of a certain type. Up to now, we would probably use a FileFilter:
File dir = new File(".");

File[] files = dir.listFiles(new FileFilter() {
    public boolean accept(File file) {
        return file.getName().endsWith(".java");
    }
});
But with higher-order functions we could write a method that takes the filtering logic as a function argument. Think of this code as being part of an API.
public static File[] fileFilter(File dir, #boolean(File) matcher) {
    List<File> files = new ArrayList<File>();
    for (File f : dir.listFiles()) {
        if (matcher.(f)) files.add(f);
    }
    return files.toArray(new File[files.size()]);
}
The client code of this API would be a one-liner then:
File[] files = fileFilter(dir, #(File f)(f.getName().endsWith(".java")));
Callback mechanisms such as filtering files with a FileFilter, as above, can generally be greatly improved by replacing the callback class with a function. In fact, callbacks with only one method are so common in Java that Project Lambda will provide special support for so-called SAM types (single abstract method, see also the first part of this series). This allows us to call the existing File.listFiles(FileFilter) method with a function instead of a FileFilter. Because FileFilter is a SAM type, the function will automatically be converted into a FileFilter:
File[] files = dir.listFiles(#(File f)(f.getName().endsWith(".java")));

Currying
One of the features most functional languages support (some even make very heavy use of it) is currying. Currying allows defining a function with multiple parameters that the client can call with fewer parameters, which returns a function in which the given parameters are fixed. A simple example would be a function that takes two parameters x and y and returns their sum. The client could then call this function with a fixed value for x, say 5, but without a value for y. This would result in a function that takes one parameter y and returns 5+y. Let's try to implement this curried function in Java:
##int(int)(int) sum = #(final int x)(#(int y)(x+y));
assert sum.(3).(5) == 8;

#int(int) add5 = sum.(5);
assert add5.(4) == 9;
Here we first define a function sum of type ##int(int)(int), i.e. a function that takes an integer and returns a function of type #int(int). We can evaluate it in a single statement, sum.(3).(5). The first call, sum.(3), basically results in a lambda expression #int(int y)(3+y). The second call evaluates this expression, passing 5 as the value for y. But instead of evaluating the whole expression, we can also evaluate it partially, as in line 4. This results in a function add5 that can be called with different values of y and returns 5+y.

That's what currying in Java could look like. Agreed, it looks nicer in other languages, but it generally works. Maybe some syntactical sugar could still be added.

Control Abstractions
Another feature of many functional languages is the possibility to build control abstractions that look like natural syntax. This is made possible through currying and higher-order functions in general. Let's have a look at a simple example.
##void(#void())(int) afterDelay = #(final int n) {
  return #(#void() f) {
    try {
      Thread.sleep(n*1000);
      f.();
    } catch (Exception e) {}
  };
};
afterDelay is a higher-order function: it takes an integer n as an argument and returns a function that takes a function f. The returned function will execute f after n seconds. We can invoke it directly like this
afterDelay.(2).(#()(System.out.println("foo")));
which will print "foo" to the console after waiting for two seconds. Or we can make use of currying to create a function after5Seconds, which takes another function and evaluates it after waiting for 5 seconds.
#void(#void()) after5Seconds = afterDelay.(5);

after5Seconds.(#()(System.out.println("foo")));
after5Seconds.(#()(System.out.println("bar")));
Now, this still looks like a normal function call, not like a control abstraction. But it is simple to make it look like one: just give the lambda a block body in curly braces instead of the parenthesized expression
after5Seconds.(#() {
    System.out.println("foo");
    System.out.println("bar");
});
and it almost looks like other control structures like for or if.

Now, let's have a look at another example that demonstrates the loan pattern:
##void(#void(PrintWriter))(File) withWriter = 
  #(final File file) {
    return #(#void(PrintWriter) doWithWriter) {
      PrintWriter writer = null;
        try {
          writer = new PrintWriter(file);
          doWithWriter.(writer);
        } catch (Exception e) {
          // Just ignore exceptions, see straw-man
          // proposal for exception handling with 
          // lambdas
        } finally {
          if (writer != null) { 
            writer.close();
          }      
        }
    };
};
Simply put, withWriter is a function that takes a file as its argument and opens a PrintWriter on that file. Then it calls a function given by the caller with this PrintWriter (or loans it to the function). Things will probably get a lot clearer when we look at how to call withWriter.
File file = new File("log.txt");

withWriter.(file).(#(PrintWriter writer) {
    // Printing to the file
    writer.println("foo");
    for (int i=0;i<10;i++) writer.println(i);
});         
As withWriter is a curried function, we can evaluate it partially (just with a file) and assign the result to a variable of function type, which can be called multiple times.
#void(#void(PrintWriter)) logger = withWriter.(file);

logger.(#(PrintWriter writer) {
    writer.println("foo");
});         
logger.(#(PrintWriter writer) {
    for (int i=0;i<10;i++) writer.println(i);
});
Note that withWriter can also be implemented as a non-curried function. We could pass the function directly to it as a second argument.
#void(File, #void(PrintWriter)) withWriter = 
  #(final File file, 
      final #void(PrintWriter) doWithWriter) {
    PrintWriter writer = null;
    try {
      writer = new PrintWriter(file);
      doWithWriter.(writer);
    } catch (Exception e) {
    } finally {...}  
  };

withWriter.(new File("log.txt"), #(PrintWriter writer) {
  writer.println("Hello ");
  writer.println("World.");
});         
This kind of control abstraction can be very useful for code reuse, e.g. to remove boilerplate code in resource handling (withWriter, withReader, ...) or transaction handling (inTransaction).
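As a rough sketch of what an inTransaction abstraction could look like (my own example, assuming a Hibernate-style Session named session is in scope and person is just some entity to save):
#void(#void()) inTransaction = #(final #void() work) {
  Transaction tx = session.beginTransaction();
  try {
    work.();
    tx.commit();
  } catch (Exception e) {
    tx.rollback();
  }
};

inTransaction.(#() {
  session.save(person);
});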

For the fun of it let's do another example, which simplifies reading the lines of a file one by one:

#void(File, #void(String)) eachLine = 
  #(final File file, final #void(String) fun) {
    FileInputStream in = null;
    try {
      in = new FileInputStream(file);
      BufferedReader reader = 
        new BufferedReader(new InputStreamReader(in));
      String line = null;
      while((line = reader.readLine()) != null) {
        fun.(line);
      }
    } catch (Exception e) {
    } finally { /* close in */ }      
  };
Again, think of eachLine as being part of an API, hidden from the client. On the client side things get really easy.
File file = new File("log.txt");

eachLine.(file, #(String line) {
  System.out.println(line);
});
In fact eachLine should probably be part of the class File itself. And maybe it will get in there in JDK7 via an extension method (which is a topic of a following post).

Originally, I didn't plan to write that much about control abstractions (it's just too much fun), but rather to also cover recursion as a functional technique.

Recursion
Unfortunately, it's not yet clear what recursive calls in lambda expressions would look like, or whether it will even be possible to write recursive lambdas (if it isn't, we could still call recursive methods from a lambda). There are currently some discussions going on on the Project Lambda mailing list, so I'll skip this topic for now.
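Just to illustrate the fallback mentioned above (my own example): even without recursive lambdas, a lambda could simply delegate to a recursive method.
public static int factorial(int n) {
    return n <= 1 ? 1 : n * factorial(n - 1);
}

#int(int) fac = #(int n)(factorial(n));
assert fac.(5) == 120;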


To summarize, functional techniques allow for some code simplifications in certain scenarios, as described above. In fact, this was the primary reason to introduce lambda expressions in the first place, namely to simplify code for the new fork/join framework and parallel arrays. But lambdas make it possible to simplify code in a lot of other use cases as well.