Two Ways to Design A Programming Language

Method One:

Have a legend of the field think deeply and make precisely reasoned arguments for features.

Method Two:

Look at code-gen features in IntelliJ and figure out how to avoid needing them for your language:

Implicit Type Casting

IntelliJ has a macro for the common pattern of checking the type of something and then immediately downcasting/crosscasting the expression to that type:



  Object x = aMethodThatReturnsAnObject();
  if( x instanceof List ) {
    System.out.println( "x is a List of size " + ((List) x).size() );
  }


In Gosu, the compiler already knows the type after the check:

  var x : Object = aMethodThatReturnsAnObject()
  if( x typeis List ) {
    // hey, you already told us x was a list.  Why make you cast?
    print( "x is a List of size ${x.size()}" )
  }

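For reference, here is a minimal, self-contained Java program exercising the check-then-cast pattern the IntelliJ macro generates (the class name and the `describe` helper are mine, added for illustration; `aMethodThatReturnsAnObject` stands in for the method in the snippet above):

```java
import java.util.Arrays;
import java.util.List;

public class CastDemo {

  // Stands in for the post's aMethodThatReturnsAnObject()
  public static Object aMethodThatReturnsAnObject() {
    return Arrays.asList( "a", "b", "c" );
  }

  public static String describe( Object x ) {
    if ( x instanceof List ) {
      // the redundant cast that Gosu's typeis makes unnecessary
      return "x is a List of size " + ((List) x).size();
    }
    return "x is not a List";
  }

  public static void main( String[] args ) {
    System.out.println( describe( aMethodThatReturnsAnObject() ) ); // prints "x is a List of size 3"
  }
}
```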

IntelliJ has a wizard to assist you in delegating all the implementations of an interface to a field:



class MyDelegatingList implements List {

  delegate _delegateList represents List

  construct( delegateList : List ) {
    _delegateList = delegateList
  }

  override function add( o : Object ) : boolean {
    print( "Called add!" )
    return _delegateList.add( o )
  }

  // all other methods on List are automatically
  // delegated to _delegateList
}

I omit the Java version out of respect for your eyes.

I should note that Gosu is the new name for GScript, our internal programming language.

A tester’s view

As this is my first post on this blog, let me introduce myself and explain why I joined Guidewire. First, this blog had a lot to do with it – it gave me a pretty good idea what sort of folks I would be working with and made me excited enough to apply. Historically, most of this blog has focused on production code development. My intention is to shed some light on our testing adventures.

Most of my professional experience has been as a developer. Initially I built business apps, but more recently I have gravitated toward building tools for both developers and testers. I was lucky to get into Agile development early and had a chance to work with many of the pioneers in the field. I especially fell in love with test-driven development and still maintain the site.

One of the tools I built was a framework for developing acceptance tests in Java. I had the great fortune to present it at the Austin Workshop on Test Automation, which is where I was first introduced to Exploratory Testing. Guidewire turns out to be a great place to practice exploratory testing because of the inherent complexity of the applications, the fact that Guidewire has been Agile from day one, and the copious amounts of unit and acceptance tests that the developers write. Guidewire products combine core business functionality and a full-fledged development environment, so the testing needs are diverse and challenging on both the business and technical axes.

Now you may wonder why a professional developer would want to focus on exploratory testing. Isn't it manual testing, which is considered the simplest and least well-compensated of testing disciplines? Hopefully, if you read the wiki article above, this prejudice will begin to dissolve for you as it did for me. It's true that I do a fair amount of manual testing, but it's what we call “brain-engaged testing”, and I only do manually what is either not worth automating or impossible to automate. If anyone has automated “thinking”, please contact me privately so we can make billions off this discovery. 🙂

As a team, we do automate many of the data generation, data collection, and data processing tasks that support exploratory testing. Of course, we do all sorts of testing at Guidewire. We collect metrics and perform root cause analysis to tune our testing mix based on actual bug discovery data. One of the promising directions for us is high-volume, pseudo-random testing. We are reusing some of the tools and tests developed as part of UI-driven acceptance testing to accomplish this. I hope to share some of the results of this experiment in the near future. Bye for now.
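To give a flavor of what high-volume, pseudo-random testing looks like in miniature (this is a generic sketch of the technique, not Guidewire's actual harness; the class name, the target under test, and the invariants are my own), the idea is to generate many random inputs from a fixed seed, run the operation, and check invariants rather than exact expected values:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class RandomizedSortTest {

  // Run `trials` rounds of pseudo-random testing against Collections.sort:
  // build a random list, sort it, and check two invariants --
  // the result is ordered, and it is a permutation of the input.
  public static boolean run( long seed, int trials ) {
    Random rnd = new Random( seed ); // fixed seed makes failures reproducible
    for ( int t = 0; t < trials; t++ ) {
      int size = rnd.nextInt( 50 );
      List<Integer> input = new ArrayList<>();
      for ( int i = 0; i < size; i++ ) {
        input.add( rnd.nextInt( 100 ) );
      }
      List<Integer> sorted = new ArrayList<>( input );
      Collections.sort( sorted );

      // invariant 1: each element is <= its successor
      for ( int i = 1; i < sorted.size(); i++ ) {
        if ( sorted.get( i - 1 ) > sorted.get( i ) ) {
          return false;
        }
      }

      // invariant 2: same multiset of elements as the input
      List<Integer> remaining = new ArrayList<>( input );
      for ( Integer v : sorted ) {
        if ( !remaining.remove( v ) ) {
          return false;
        }
      }
      if ( !remaining.isEmpty() ) {
        return false;
      }
    }
    return true;
  }

  public static void main( String[] args ) {
    System.out.println( run( 42L, 1000 ) ? "all invariants held" : "invariant violated" );
  }
}
```

The payoff of this style is coverage breadth: a thousand seeded trials explore input shapes no hand-written example would, and a fixed seed means any failure can be replayed exactly.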