Feature Literals

In the current open source release of Gosu there is a new feature called, er, feature literals. Feature literals provide a way to statically refer to the features of a given type in the Gosu type system. Consider the following Gosu class:

  class Employee {
    var _boss : Employee as Boss
    var _name : String as Name
    var _age : int as Age

    function update( name : String, age : int ) {
      _name = name
      _age = age
    }
  }

Given this class, you can refer to its features using the '#' operator (inspired by the Javadoc @link syntax):

  var nameProp = Employee#Name
  var ageProp = Employee#Age
  var updateFunc = Employee#update(String, int)

These variables are all various kinds of feature references. Using a feature reference, you can get at the underlying PropertyInfo or MethodInfo (Gosu’s equivalents of java.lang.reflect.Method and friends), or use the reference to directly invoke, get, or set the feature.

Let’s look at using the nameProp above to update a property:

  var anEmp = new Employee() { :Name = "Joe", :Age = 32 }
  print( anEmp.Name ) // prints "Joe"
  
  var nameProp = Employee#Name

  nameProp.set( anEmp, "Ed" )
  print( anEmp.Name ) // now prints "Ed"

You can also bind a feature literal to an instance, allowing you to say “Give me the property X for this particular instance”:

  var anEmp = new Employee() { :Name = "Joe", :Age = 32 }
  var namePropForAnEmp = anEmp#Name

  namePropForAnEmp.set( "Ed" )
  print( anEmp.Name ) // prints "Ed"

Note that we did not need to pass an instance into the set() method, because we bound the property reference to the anEmp variable.

You can also bind argument values in method references:

  var anEmp = new Employee() { :Name = "Joe", :Age = 32 }
  var updateFuncForAnEmp = anEmp#update( "Ed", 34 )
  
  print( anEmp.Name ) // prints "Joe", we haven't invoked 
                      // the function reference yet
  updateFuncForAnEmp.invoke()
  print( anEmp.Name ) // prints "Ed" now

This allows you to refer to a method invocation with a particular set of arguments. Note that the second line does not invoke the update function; rather, it gives you a reference that you can use to evaluate the function later.

Feature literals support chaining, so you could write this code:

  var bossesNameRef = anEmp#Boss#Name

Which refers to the name of anEmp’s boss.

You can convert method references to blocks quite easily:

  var aBlock = anEmp#update( "Ed", 34 ).toBlock()

Finally, feature references are parameterized on both their root type and the feature’s type, so it is easy to say “give me any property on type X” or “give me any function with the following signature”.
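As a sketch of what that parameterization buys you (the reference type name below is illustrative, not necessarily the exact API):

```
// sketch: a variable that can hold any String-typed property of Employee
// (PropertyReference is an illustrative name for the feature-reference type)
var anyStringProp : PropertyReference<Employee, String>
anyStringProp = Employee#Name  // OK: Name is a String property of Employee
```

Because the reference carries both types statically, assigning a property of the wrong type would be a compile-time error.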

So, what is this language feature useful for? Here are a few examples:

  1. It can be used in mapping layers, where you are mapping between properties of two types
  2. It can be used for a data-binding layer
  3. It can be used to specify type-safe bean paths for a query layer

Basically, any place you need to refer to a property or method on a type and want it to be type safe, you can use feature literals.
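For instance, the mapping-layer idea could be sketched with bound references, assuming a get() counterpart to the set() method shown earlier:

```
// hypothetical mapping helper using bound feature references;
// assumes bound property references expose get() as well as set()
function copyName( from : Employee, to : Employee ) {
  to#Name.set( from#Name.get() )
}
```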

Ronin, an open source web framework, is making heavy use of this feature. You can check it out here:

  http://ronin-web.org

Enjoy!


Gosu’s Secret Sauce: The Open Type System

Earlier last week we finally released Gosu, the JVM language we use for configuring our enterprise applications at Guidewire, to the general public.  Gosu has been in the works for several years here, beginning at the dawn of our engineering history in 2002.  Over the years the language has evolved to meet the growing demands of our applications.  Initially we created Gosu because we needed an embeddable, statically typed scripting language for business rules.  We needed static typing so that we could 1) statically verify business logic and 2) build deterministic tooling around the language via static analysis, e.g., for code completion, navigation, usage searching, refactoring, etc.  Tooling was, and is now more than ever, a vital part of our technology offering.  For instance, we don’t want to burden our users with having to commit our entire API to memory in order to be productive with our platform.  But it doesn’t end there.

Our platform, like that of most large-scale enterprise application vendors, consists of a relatively tall stack of technologies, most of which are developed in-house.  The stack comprises the following subsystems, in no particular order: Database Entity/OR layer, Web UI framework, Business Rules, Security/Permissions, Web Services, Localization, Naming services, Java API, Query model, XSD/XML object models, various application-dependent models of data, and Gosu.  A primary challenge with any enterprise application is to adapt to customer requirements through configuration of its subsystems.  As I’ve stated, Gosu is the language we use to accomplish that, but how does, say, the Gosu code in the Web UI or Rules access the Entity layer and XSD-modeled information?  In other words, if a primary goal of Gosu’s static type system is to help users confidently and quickly configure our applications, how can it possibly help with domains such as these, which are external to the type system?

This is where Gosu separates from the pack.  Unlike most (all?) other programming languages, Gosu’s type system is not limited to a single meta-type or type domain.  For instance, Java provides one, and only one, meta-type, namely java.lang.Class.  If you want to directly represent some other type domain to Java, you’re out of luck.  Your only options are to write a class library to access information about the domain or resort to code generation.  Neither option is desirable or even adequate in many cases, but as Java programmers we don’t know any better.  Dynamic languages such as Ruby provide varying degrees of meta-programming, which can be a powerful way of dynamically representing other type domains.  But with dynamic languages we give up type safety, deterministic tooling, and other advantages of static typing, which defeats our primary goal in exposing other type domains to the language.  Our customers, for instance, would be at a terrible disadvantage without the ability to statically verify usages of various domains in their application code.  Thus, Gosu’s type system must provide a level of flexibility similar to (or better than) that provided by dynamic meta-programming, but without sacrificing static typing — a tall order.  How did we do it?

Gosu’s type system consists of a configurable number of type domains.  Each domain provides a type loader as an instance of Gosu’s ITypeLoader interface.  A type loader’s primary responsibility is to resolve a type name in its domain and return an implementation of the IType interface.  This is what is most unique about Gosu — its type system is open to other domains to participate with first-class representation.  For instance, Gosu does not discriminate between a Gosu class, a Java class, an Entity type, an XSD type, or what have you; they’re all just types to Gosu’s compiler.  This unique and powerful feature extends all the benefits of static typing to a potentially huge range of otherwise inaccessible domains.

Before we consider a specific domain, it is important to understand that not all type loaders in Gosu are required to produce bytecode.  Those that do, like Gosu classes, implement an interface for getting at the corresponding Java bytecode.  Those that don’t provide call handlers and property accessors for reflective MethodInfo and PropertyInfo evaluation.  Note that all types provide TypeInfo; see IType.getTypeInfo().  For instance, the parser works against the TypeInfo, MethodInfo, etc. abstractions as a means of leveling the playing field between disparate types.  At runtime, however, unless a type provides a Java bytecode class, its MethodInfos and PropertyInfos are also responsible for handling calls.  Not requiring bytecode at runtime accommodates a much wider range of type loaders and makes it possible to quickly prototype loaders.
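To make the shape of a type loader concrete, here is a heavily simplified sketch; the method names and the ColorType class are illustrative, not the exact ITypeLoader interface:

```
// illustrative sketch of a type loader for a made-up "color" domain;
// getType() approximates, but is not, the real ITypeLoader signature
class ColorTypeLoader implements ITypeLoader {

  // resolve a fully qualified name in this domain, or return null
  // if the name doesn't belong to us
  override function getType( fqn : String ) : IType {
    if( fqn.startsWith( "color." ) ) {
      // ColorType would implement IType and supply TypeInfo with the
      // properties/methods of the color, plus call handlers at runtime
      return new ColorType( fqn )
    }
    return null
  }
}
```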

With that understanding, we can jump to some specific examples.  Let’s take a look at the Gosu class type loader.  Its job is to resolve names that correspond to the domain of Gosu classes, interfaces, enumerations, enhancements, and templates (primarily the .gs* family of files).  The result is an instance of GosuClass, which implements IType etc.  One of the methods in there is getBackingClass().  Essentially, this method cooperates with Gosu’s internal Java classloader to compile the parse tree from the Gosu class to a conventional Java class.  So it follows that Gosu’s compiler performs dynamic compilation directly from source; there is no separate build process and there are no persisted classfiles.  This does not imply that the Java classloader is transient, however.  It is not.  Since Gosu is a statically typed language, its classes are compiled to conventional bytecode and, thus, don’t require any classloader shenanigans as with most dynamically typed languages.  But I digress.

What connects the Java bytecode class to its Gosu type is the IGosuObject implementation; all Gosu classes and most other types implicitly implement this. The getIntrinsicType() method is instrumental, for example, in making reified generics work in Gosu.  An instance of a Gosu class, or any type in the entire type system for that matter, can produce its underlying type via getIntrinsicType().  And from there Gosu’s type system takes over. For instance, you can reflectively access Gosu-level type information from Java code via TypeSystem.getFromObject( anyObject ).

It’s worth repeating that Gosu classes are no more important to Gosu’s type system than any other type.  For instance, XSD types resolve and load in the same manner as Gosu classes.  Given an XSD type loader (we have one internally), Gosu code can import and reference any element type defined in an XSD file by name.  No code generation, no dynamic meta-programming, and all with static name and feature resolution.  The XSD type defines constructors, properties, and methods corresponding with elements from the schema.  In addition it provides static methods for parsing XML text, files, readers, etc.  The result is an instance of the statically named XSD type corresponding with the element name, which can be used anywhere a Gosu type can be used, because it *is* a Gosu type!  Here’s a very simple example that demonstrates the use of an XSD type:

From a sample driver.xsd file:

<xs:element name="DriverInfo">
 <xs:complexType>
  <xs:sequence>
   <xs:element ref="DriversLicense" minOccurs="0" maxOccurs="unbounded"/>
   <xs:element name="PurposeUse" type="String" minOccurs="0"/>
   <xs:element name="PermissionInd" type="String" minOccurs="0"/>
   <xs:element name="OperatorAtFaultInd" type="String" minOccurs="0"/>
  </xs:sequence>
  <xs:attribute name="id" type="xs:ID" use="optional"/>
 </xs:complexType>
</xs:element>
<xs:element name="DriversLicense">
 <xs:complexType>
  <xs:sequence>
   <xs:element name="DriversLicenseNumber" type="String"/>
   <xs:element name="StateProv" type="String" minOccurs="0"/>
   <xs:element name="CountryCd" type="String" minOccurs="0"/>
  </xs:sequence>
  <xs:attribute name="id" type="xs:ID" use="optional"/>
 </xs:complexType>
</xs:element>

Sample Gosu code:

uses xsd.driver.DriverInfo
uses xsd.driver.DriversLicense
uses java.util.ArrayList

function makeSampleDriver() : DriverInfo {
  var driver = new DriverInfo(){:PurposeUse = "Truck"}
  driver.DriversLicenses = new ArrayList<DriversLicense>()
  driver.DriversLicenses.add(new DriversLicense(){:CountryCd = "US", :StateProv = "AL"})
  return driver
}

Notice the fully qualified names of the XSD types begin with xsd.  That’s just the way the XSD loader exposes the types; a type loader can name them any way it likes, but it should do its best to avoid colliding with other namespaces by using relatively unique names.  Next, you may have noticed the name of the XSD file, driver, serving as the namespace containing all the defined element types.  That’s also a convention the loader chooses to define.  What’s most interesting, however, is how seamlessly the XSD types integrate with Gosu.  You can’t distinguish them from Gosu classes or Java classes, and that’s the whole idea!  All type domains are created equal.  Notice you can even parameterize Java’s ArrayList with the DriversLicense XSD type.  Isn’t that sweet?  And of course all the powerful static analysis tooling applies, including code completion, feature usage, refactoring, etc.

Another interesting tidbit: Type domains aren’t limited to tangible resources like files on disk.  For instance, one evening I decided to see what it would take to provide a dynamic type in Gosu, similar to C#’s dynamic type.  In theory, I thought, the type system could handle a dynamic type as just another type domain.  And wouldn’t you know, it could.  Well, in the spirit of full disclosure, I did have to tweak the type system internals a bit to better handle the “placeholder” concept, but just a little.  Aaanyway, unlike most type domains this one consists of just a single type name, namely dynamic.Dynamic.  Note that Gosu requires that a type live in a namespace, so we can’t name it just Dynamic.  It was surprisingly simple to implement the type loader; you can download the source from our website here (http://gosu-lang.org/examples.shtml).  In a future blog I’ll cover the implementation of the Dynamic type loader (or some other one) step by step.
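Usage might look something like this (a hypothetical sketch; getSomeObject() is a stand-in, and member resolution on a Dynamic value happens at runtime):

```
uses dynamic.Dynamic

// hypothetical usage of the Dynamic placeholder type
var x : Dynamic = getSomeObject()
x.anyMethodYouLike()   // no static checking here; resolved at runtime
```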

Another super cool type loader is the one from the RoninDB project by Gus Prevas (http://code.google.com/p/ronin-gosu/).  Here Gus provides a zero-configuration JDBC O/R layer via a Gosu type loader.  The tables are types, the columns are properties, and there is no code generation; it just works.  Pretty cool, eh?  Download it and give it a spin.
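In rough strokes, that might look like this (names are illustrative, not the actual RoninDB API):

```
// hypothetical sketch: tables as types, columns as properties
var p = new mydb.Person()   // "Person" table resolved by the type loader
p.Name = "Gus"              // "Name" column exposed as a property
p.update()                  // persists the row; no generated code anywhere
```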

So far I’ve only touched on the basic concepts underlying Gosu’s open type system.  But understanding the core idea is critically important to appreciating Gosu’s full potential, all the more so if you plan to use Gosu in your personal or professional software projects.  Because once you see the light with respect to type domains and the benefits they provide, it’s hard to think of developing enterprise applications without them.  To that end, I promise to blog more and shed more light on building type loaders and the type system in general.


Developing like there’s no tomorrow.

The apocryphal story of a large portion of the development team going on a ski trip and the bus careening down a ravine, killing them all, strikes fear into the heart of any business.  Another is the cranky cowboy who writes half of the code base in a single drunken night and then quits when he’s offended by an innocent feature request from Marketing.

In both of those cases there’s the dreaded fear of a significant chunk of code that’s been left behind.  Sometimes as developers our instinct is to be the domain expert in a particular area of code.  After all, it’s a certain amount of job security.  I’d like to argue, though, that in the end everyone on the team benefits if the code is written with others in mind.

Others can help you fix your bugs.

Everyone encounters bugs in his or her code.  Even if you believe you’ve written something completely perfectly, sometimes the specifications change, and what was a feature is now a bug.  If you’ve written code that is understandable and documented, someone else can come along and help you fix those problems.  No longer is the team chained to your productivity; it’s actually possible to go on a vacation.  While this may seem like a recipe for being replaceable, it makes you, as a developer, more valuable than some yahoo who might alienate everyone shortly before jetting.

More viewpoints == more opportunities.

If others understand what you’ve written, then it’s rather possible for others to contribute ideas.  Sometimes we become wedded to a particular implementation or path, and while it’s easy to become attached to it, it’s significantly more useful to have a marketplace of ideas to choose from.

You don’t want to be That Asshole.

You know the type; the guy who insists everything be written in a very particular way which no one else understands and he’s not the sharing type.  He might be invaluable now, but if he were ever to encounter you again, you’ll be sure not to recommend him.  After all, you’ve had to deal with his mess after he left for greener pastures.  Don’t be That Asshole.

So, what can you do?

For one, document your code and your changes with comments about why you do certain things.  Sometimes it is possible to be too verbose, but more often than not it’s important to know “yes, we’re doing something that looks totally dumb here, but it’s for a good reason.”  Second, write up a design document for anything more complex than a simple refactor or a bugfix.  If it’s a module or a system, someone will need to know how to use it or make changes to it, and if there’s a document to refer to as canon, then at least the next developer who comes along will have a starting point.  Third, code reviews (or pair programming, if you’re into the whole “Agile” thing) are your friend.  Having a second, or in some cases third, pair of eyes there when you justify your decisions not only helps someone else understand what you’ve been working on for a few days, but the anticipation that it will happen just naturally makes you write better code.

The way I like to approach it is to develop code in such a way that I’m always willing to show someone else, or use it as an example for a programming student.  In the cases where that particular statement isn’t true, there by necessity needs to be a good reason; whether it be the need to minimize the change footprint, or the need to maintain the same functionality (even if it’s broken as is).  Otherwise, I’m always developing as though someone’s looking over my shoulder.

Finally, much like the title of this post, I try to develop like there’s no tomorrow.  I have just today, and whatever I get done needs to be in such a state where someone else could sit down at my computer and finish what I’ve done.  Much like those authors of large epic series (*ahem* I’m looking at you, Robert Jordan), it’s important for developers not only to document what they’ve done, but also what they intend to do so that others can pick up where they’ve left off.

Note: this post was previously written by me while I was at a different company.  Still applicable in the general sense, but definitely something we’re a lot better at doing than other places.


Two Ways to Design A Programming Language

Method One:

Have a legend of the field think deeply and make precisely reasoned arguments for features.

Method Two:

Look at code-gen features in IntelliJ and figure out how to avoid needing them for your language:

Implicit Type Casting

IntelliJ has a macro for the common pattern of checking the type of something and then immediately downcasting/crosscasting the expression to that type:

[screenshot: the IntelliJ “instanceof” macro]

Java:

  Object x = aMethodThatReturnsAnObject();
  if( x instanceof List ) {
    System.out.println( "x is a List of size " + ((List) x).size() );
  } 

Gosu:

  var x : Object = aMethodThatReturnsAnObject()
  if( x typeis List ) {
    // hey, you already told us x was a list.  Why make you cast?
    print( "x is a List of size ${x.size()}" )
  } 

Delegation

IntelliJ has a wizard to assist you in delegating all the implementations of an interface to a field:

[screenshot: the IntelliJ delegation wizard]

Gosu:

class MyDelegatingList implements List {

  delegate _delegateList represents List

  construct( delegateList : List ) {
    _delegateList = delegateList
  }

  override function add( o : Object ) : boolean {
    print( "Called add!" )
    return _delegateList.add( o )
  }

  // all other methods on List are automatically
  // delegated to _delegateList
}

I omit the java version out of respect for your eyes.

I should note that Gosu is the new name for GScript, our internal programming language.


Automation of Burn-up and Burn-down charts using GScript and Entrance

I have always found burn-up and burn-down charts very informative; they fit iterative, story-based development very well. On every project I work on, I try to figure out different ways to generate them.

Two months ago, I took on the job of putting Platform on Sprints. After some consideration, I decided to follow the setup I had for the AF team, creating the stories in the form of JIRA issues. However, the chart generation I had for the AF team was still semi-manual: it took a couple of minutes to download the data and a couple of minutes to update the stats every morning. The worst part was that when I got busy or sick, I would forget.

So my first action item was to figure out how to generate the same kind of charts with the push of a button. The idea seemed easy enough:

  1. Figure out the search criteria to retrieve all the JIRA issues of the backlog
  2. Count the issues that are in different states
  3. Update the data with the counts, and check it into Perforce.
  4. Refresh the chart with the updated data

Numbers one and two were actually not that hard, because Guidewire GScript has nice web services support. With a few tries, I was able to count the beans.
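The counting step might look roughly like this (a sketch only; jiraService and its methods are illustrative stand-ins, not the actual JIRA web service API):

```
uses java.util.HashMap

// hypothetical sketch of steps 1 and 2: fetch the backlog issues
// and tally them by status
var issues = jiraService.getIssuesFromFilter( token, backlogFilterId )
var counts = new HashMap<String, Integer>()
for( issue in issues ) {
  var status = issue.Status
  counts[status] = (counts[status] == null ? 0 : counts[status]) + 1
}
```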

Here is an example of the data generated. I think you get the idea just looking at it.

Date,Day,Closed,Deferral Requested,In QA,Open Stories,Open New Features,Open Bugs
10/09/2009,41,55,0,1,13,7,40
10/12/2009,42,55,0,1,14,7,40
10/13/2009,43,56,0,0,14,7,40
10/14/2009,44,56,0,0,16,7,41
10/15/2009,45,56,0,0,21,8,42
10/16/2009,46,58,0,1,19,8,42
10/19/2009,47,58,0,2,28,8,42
10/20/2009,48,58,0,6,26,8,42
10/21/2009,49,58,0,6,26,8,42
10/22/2009,50,58,0,7,25,8,44

Number three took less time but a bit of research, because the Perforce Java library’s API is not exactly straightforward.

It took me a while to figure out how to do the last one. After looking into JFreeChart and the Google Chart API, I eventually turned to my dear friend Tod Landis, who is also my partner at Entrance, and he quickly drafted an Entrance script for me. Based on it, I was able to write, within a few hours, a template that can be used for all teams.


PLOT
  very light yellow  area,
  light yellow filled circles and line,
  DATALABELS
  very light orange  area,
  light orange filled circles and line,
  very light red  area,
  light red filled circles and line,
  very light blue area,
  light blue filled circles and line,
  very light gray area,
  dark gray filled circles and line,
  very light green area,
  dark green filled circles and line,
  all AXISLABELS
WITH
  ZEROBASED
  TITLE "Sprint"
  TITLE X "Days"
  TITLE Y "Points"
  SCALE Y 0 100 0
  CLIP
  legend
  gridlines
  collar
  no sides
SELECT
  `Open Bugs`,
  `Open Bugs`,
  date,
  `Open New Features`,
  `Open New Features`,
  `Open Stories`,
  `Open Stories`,
  `In QA`,
  `In QA`,
  `Deferral Requested`,
  `Deferral Requested`,
  `Closed`,
  `Closed`,
  day
from report;

Please note this is the final PLOT script; other SQL scripts run before it to import the data into the MySQL database, sum it up to produce a stacked chart, and even out the labels.

And I now have this chart generated automatically every morning with the help of the Windows scheduler.


Taking Responsibility

I’ve been a bit hesitant to write something about this, since I think it’s something of a delicate subject . . . it’s just a bit odd to write about, knowing that people I work with will read it.  But it’s an important enough subject, and one that I feel strongly enough about, that I think it’s finally time to say something about it.

For those of you who are reading this and don’t know, my technical title is something like “PolicyCenter Architect.”  Since I think the word “architect” in a software context has taken on all sorts of horrible connotations (it conjures up images of people who draw boxes and arrows but don’t actually write any code themselves), I tend not to use it personally.  Instead, I tend to think of my role on the team as being the guy who should be blamed when things don’t work.  Not the guy who should be blamed when something he did doesn’t work, but just the guy to blame, period.

I know it’s something of an extreme stance to take, which is why I think it’s worth an explanation.  Coming at it from a different angle:  my job is to make sure that the project doesn’t fail for technical reasons.  If the code is buggy, or it doesn’t perform, or it’s not flexible enough or in the right ways, then I’ve failed to do my job.  It doesn’t really matter why, or who wrote the code:  I’m supposed to make sure that doesn’t happen, and to do whatever it takes to get there.  In some ways that’s fundamentally unfair, since I can’t possibly control everything that goes on, yet I’m personally responsible for it anyway.  I honestly think that’s still how it has to be.

Taking responsibility for things is a powerful thing, especially when they’re things that you don’t directly control.  Part of being a professional is certainly taking responsibility for your own work, and not passing the blame to someone else when things go wrong.  But I think it’s even more powerful to simply say I will not let this fail.  Full stop.  But I think the only way to really get to that mindset is to take responsibility for that failure; otherwise, there will always be the temptation to throw up your hands and pass the blame on to someone else, and at the moment you absolve yourself of responsibility for that failure you’re lessening your personal drive to do something about it.  The other thing that needs to happen is that you have to accept the fact that things could fail, that you have limited control over that, and if they do it’ll be your fault anyway.  Failure sucks, and it’s a hard thing to swallow:  on my current project, it took me about five months to really come to peace with the fact that I was going to have to make a lot of very, very hard yet speculative decisions and that my best efforts could still end in failure (and a very, very costly failure at that).  Once I finally did, though, I slept a lot easier, and it freed me up to do better work than I otherwise would have been able to do.  Whatever role I managed to play in making the project succeed, I think it was significantly aided by the fact that I got over being afraid of screwing up.

I think there are three primary reasons why taking responsibility and blame for things you don’t actually have control over works out.  First of all, it forces you to do whatever you can to minimize that chance of failure.  Since you don’t have complete control, you can’t get it down to zero, but you sure have an incentive to get it there.  And not only that, but you have an incentive to do what’s right for the project as a whole, not just for your part of it or for you personally.  That extra drive, extra motivation, and whole-project orientation will lead to you doing more to make the project succeed than you would otherwise.  Secondly, it lets you make hard decisions under uncertain conditions.  A lot of projects fail simply because no one is willing to make those decisions; it’s always less personally risky to leave things as they are, or to take the well-trod route, or to pass the decision off to someone else even if they’re not in the best position to make the decision.  Sometimes, you’re the best person to make the decision, and you just have to be willing to do it.  Sometimes you’re not, so you have to pass the analysis off to someone else, but in order for them to really make the best decision they can they have to know that you’re going to back them up and not blame them if it goes wrong for some reason.  Lastly, while I really have no idea what anyone who works with me really thinks, I like to think that having someone willing to take that level of responsibility lets everyone else on the team do better work too, since they know that someone has their back, they can focus on doing their best instead of worrying about trying to look good, and they have the freedom to do what they think is the right thing.

It’s worth mentioning that the flip side of taking the blame is not taking the credit.  I’m not the first one to point out that good leaders should accept blame and deflect credit, but I think it’s one of those things that’s hard to repeat enough.  Never, ever take credit for other people’s work (even if you helped out), and when in doubt err on the side of taking too little (preferably way too little) credit.


I Am Hate Method Overloading (And So Can You!)

My hatred of method overloading has become a running joke at Guidewire. My hatred is genuine, icy hot, and unquenchable. Let me explain why.

First Principles
First of all, just think about naming in the abstract. Things should have good names. A good name is unique and easy to understand. If you have method overloading, the name of a method is no longer unique. Instead, the real name of the method is the sane, human-chosen name plus the fully qualified name of each argument’s type. Doesn’t that just seem sort of insane? If you are writing a tool that needs to refer to methods, or if you are just trying to look up a method reflectively, you have to know the name plus all the argument types. And you have to know this even if the method isn’t overloaded: you pay the price for this feature even when it isn’t used.
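For example, a reflective lookup has to spell out the full signature even when the name alone would identify the method (a sketch; the getMethod call shown is approximate, not the exact Gosu TypeInfo API):

```
// sketch: even for a non-overloaded method, reflective lookup
// requires the argument types, not just the name
// (getMethod signature is approximate)
var m = Employee.Type.TypeInfo.getMethod( "update", {String, int} )
```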

Maybe that strikes you as a bit philosophical. People use method overloading in Java, so there must be some uses for it. I’ll grant that, but there are better tools to address those problems.

In the code I work with day to day, I see method overloading primarily used in two situations:

Telescoping Methods
You may have a function that takes some number of arguments. The last few arguments may not be all that important, and most users would be annoyed at having to figure out what to pass into them. So you create a few more methods with the same name and fewer arguments, which call through to the “master” method. I’ve seen cases where we had five different versions of a method with varying numbers of arguments.

So how do I propose people deal with this situation without overloading? It turns out to be a solved problem: default arguments. We are (probably) going to support these in the Diamond release of GScript:

  function emailSomeone( address:String, subject:String, body:String,
                         cc:String=null, logToServer:boolean=false, 
                         html:boolean = false ) {
    // a very well done implementation
  }

A much cleaner solution: one method, with obvious syntax, and, if your IDE is any good, it will let you know what the default values of the optional arguments are (unlike with method overloading).
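Call sites could then omit the optional arguments entirely, or name just the ones they care about (assuming named-argument syntax arrives alongside default arguments; the syntax below is a sketch):

```
// hypothetical call sites against emailSomeone() above
emailSomeone( "pat@example.com", "Hi", "Hello there" )          // defaults apply
emailSomeone( "pat@example.com", "Hi", "Hello", :html = true )  // named argument
```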

True Overloading
Sometimes you truly want a method to take two different types. A good example of this is the XMLNode.parse() method, which can take a String or a File or an InputStream.

I actually would probably argue with you on this one. I don’t think three separate parse methods named parseString(), parseFile() and parseInputStream() would be a bad thing. Code completion is going to make it obvious which one to pick and, really, picking a unique name isn’t going to kill you.

But fine, you insist that I’m a terrible API designer and you *must* have one method. OK, then use a union type (also probably available in the Diamond release of GScript):

  function parse( src:(String|File|IOStream) ) : XMLNode {
    if( src typeis String ) {
      // parse the string
    }
    ...
  }

A union type lets you say “this argument is this type or that type.” It’s then up to you to distinguish between them at runtime.

You will probably object that this syntax is moderately annoying, but I’d counter that it will end up being fewer lines of code than if you used method overloading and that, if you really want a single function to handle three different types, you should deal with the consequences. If it bothers you too much, just pick unique names for the methods!

So?
Let’s say you accept my alternatives to the above uses of method overloading. You might still wonder why I hate it. After all, it’s just a feature and a pretty common one at that. Why throw it out?

To understand why I’d like to throw it out, you have to understand a bit about how the GScript parser works. As you probably know, GScript makes heavy use of type inference to help developers avoid the boilerplate you find in most statically typed languages.

For example, you might have the following code:

  var lstOfNums = {1, 2, 3}
  var total = 0
  lstOfNums.each( \ i -> { total = total + i  } )

In the code above, we are passing a block into the each() method on List, and using it to sum up all the numbers in the list. ‘i’ is the parameter to the block, and we infer its type as ‘int’ based on the type of the list.

This sort of inference is very useful, and it takes advantage of context sensitive parsing: we can parse the block expression because we know the type of argument that each() expects.

Now, it turns out that method overloading makes this context sensitive parsing difficult because it means that when you are parsing an expression there is no guarantee that there is a single context type. You have to accept that there may be multiple types in context when parsing any expression.

Let me explain that a bit more. Say you have two methods:

  function foo( i : int ) {
  }
  
  function foo( i : String ) {
  }

and you are attempting to parse this expression:

  foo( someVar )

What type can we infer the context type to be when we parse the expression someVar? Well, there isn’t any single context type. It might be an int or it might be a String. That isn’t a big deal here, but it becomes a big deal if the methods take blocks, or enums, or anything else for which GScript does context-sensitive parsing. You end up having lists of context types rather than a single context type in all of your expression-parsing code. Ugly.
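To see the problem concretely, imagine the overloads take blocks (the block-type syntax below is approximate):

```
// illustrative: with overloading there is no single context type
// for the block argument (block-type syntax is approximate)
function bar( b : block(i:int) ) {
}

function bar( b : block(s:String) ) {
}

bar( \ x -> print( x ) )  // what type should the parser infer for x?
```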

Furthermore, when you have method overloading, you have to score method invocations. If there is more than one version of a method, and you are parsing a method invocation, you don’t know which version of the method you are calling until after you’ve parsed all the arguments. So you’ve got to run through all the argument types and see which one is the “best” match. This ends up being some really complicated code.

Complexity Kills

Bitch, bitch, moan, moan. Just make it work, you say. If the Java guys can do it, why can’t you? Well, we have made it work (for the most part). But there’s a real price we pay for it.

I’m a Berkeley, worse-is-better sort of guy. I think that simplicity of design is the most important thing. I can’t tell you how much more complicated method overloading makes the implementation of the GScript parser. Parsing expressions, parsing arguments, assignability testing, etc. It bleeds throughout the entire parser, its little tentacles of complexity touching places you would never expect. If you come across a particularly nasty part of the parser, it’s a good bet that it’s there either because of method overloading or, at least, is made more complicated by it.

Oh, man up! you say. That’s the parser’s and tool developer’s problem, not yours.

Nope. It’s your problem too. Like Josh Bloch says, projects have a complexity budget. When we blow a big chunk of that budget on an idiotic feature like method overloading, that means we can’t spend it on other, better stuff.

Unfortunately, because GScript is Java compatible, we simply can’t remove support for method overloading. If we could though, GScript would have other, better features and, more importantly, fewer bugs.

That is why I am hate method overloading. And so can you.