Dare points out the object-hierarchical impedance mismatch

No sooner do I finish ranting about the discipline required in going with a contract-first approach, than I start going through my RSS feeds and find that Dare has already done so. He also points out (which I think is *truly* significant):

    This isn’t theoretical, more than once while I was the program manager for XML Schema technologies in the .NET Framework I had to take conference calls with customers who’d been converted to the ‘contract first’ religion only to find out that toolkits simply couldn’t handle a lot of the constructs they were putting in their schemas. Those conversations were never easy.

Having done a few of those contract-first conversions of customers myself, I can attest to the uneasy conversation. You know that this is by far the best approach to Real Interop between platforms, but you still feel somewhat slimy: "Doing this yields the best interoperability, but you gotta be careful not to do this… or this… or this… and for God’s sake, don’t do that… or that… or this, either… Oh, and when we codegen all this, all the schema rules we put into place will probably be stripped off by the code-generating utilities we run the schema through, so we’ll probably need to do some validation by hand…." I’m always worried that a client is going to look me in the eye and ask if I’m just trying to create more work for them so they have to keep me around longer as a consultant. Ouch.

    The main thing people fail to realize when they go down the ‘contract first’ route is that it is quite likely that they have also gone down the ‘XML first’ route which most of them don’t actually want to take. Folks like Tim Ewald don’t mind the fact that sometimes going ‘contract first’ may mean they can’t use traditional XML Web Service toolkits but instead have to resort to SAX, DOM and XSLT. However for many XML Web Service developers this is actually a problem instead of a solution.

Now, I’m not sure I agree 100% with Dare that this is a problem instead of a solution–I think this is a challenge, to be met with the right tools for the job, and again, that tool being the integration of XML into our programming languages. If XML becomes a first-class construct in the language(s), then it becomes MUCH more approachable to take an XML-in/XML-out approach to doing this whole Web service (or whatever we’re going to call it in five years, because I’m pretty sure "Web services" as a term is done) thing.

Sleeper hit of the year

No, it’s not the new John Grisham book (there’s always a new John Grisham book), nor the dare-I-hope-this-one-doesn’t-suck Star Wars III film. No, this one is the book I just picked up at the local bookstore after watching it get raffled away at VSLive! a few weeks ago: Customizing the Microsoft .NET Framework Common Language Runtime, by Steven Pratschner. Whether you are a .NET developer or Java developer, you should read this.

One of the sleeper features of .NET 2.0 has always been the incredible degree of "hook points" they baked into the 2.0 CLR, some of which were needed by (for example) the Yukon team in order to bake the CLR into the database, much as Oracle and DB/2 did with the JVM a few years ago. (In fact, IBM has a version of DB/2, code-named "Stinger", that’s going to embed the CLR in much the same way as Yukon and the JVM-embedded databases do, as well.) SQL Server has always kept a tight grip around memory management, for example, and needed to exert tight control over how assemblies get loaded into the database, in order to support traditional database semantics. The baseline feature set of the CLR just wasn’t gonna cut it.

So the CLR team paused, sat down, and wrote a smashing set of COM interfaces (I know, I know, COM is dead, yadda yadda yadda, but the reality is that it will remain the underpinning of the CLR forever, just as the underpinning of the JVM is C++) that allow a CLR host–that is, any unmanaged code that wants to call CorBindToRuntimeEx and obtain an ICLRRuntimeHost*–to effectively take over parts of how the runtime functions, including (but not limited to):

    assembly loading: Don’t like the way the current assembly loading scheme works? Write your own
    failure policy: How the CLR handles failures, such as exceptions
    memory: ‘Nuff said
    threading: Ditto
    thread pool manager: Just about everybody I’ve taught asynchronous delegates to immediately asks, "Is there any way I can control the size of the system thread pool?"
    garbage collection: force GCs, collect statistics about GCs, and so on. No, this is not a replacement for finalizers (but it might help)
    CLR events: "information about various events happening in the CLR, such as the unloading of an application domain"

As a self-proclaimed "plumbing guy", I’m already smacking my lips.

But why do you care?

    You may have a system that turns out to want a wee bit more control over how thread-switches take place. (Games, for example, want some more precise control over quanta, I’m told.) Create a threading manager that uses cooperative fibers instead of raw Windows threads to do that.
    You may want to have something that just "listens" to the application and reports failures and/or other events to a management interface without requiring intrusive coding inside the app. Write a "host" process that kicks off the code in the usual way after establishing an "events host" that listens to the events in question, and potentially discards the old app domain and starts up a new one every 24 hours (if necessary).
    The classic one: you want to replicate the "-ms" option of the JVM, which tells the JVM to start with "x" amounts of memory to begin with, rather than have to go through a series of "I’m-out-time-to-ask-the-OS-for-memory" cycles that the CLR normally goes through when spinning up apps that consume, say, 512M or 1GB of heap. (I know a financial firm in San Francisco that wanted this two years ago.)

And so on, and so on, and so on.

Why should you care if you’re a Java guy? Because these kinds of hooks are necessary if Java is to keep up, number one, but also because these kinds of hooks would allow for commoditization of the JVM itself, creating a market for graduate students and entrepreneurs to create customized memory management algorithms, for example, that right now require those same entrepreneurs to create an entire JVM. The open-source world would have a field day, creating all sorts of vertical plug-ins at the JVM level that we could pick-and-choose, selecting whichever ones happen to fit our needs best–including a very-real, very-credible memory allocator that just keeps allocating until we run out of room in the heap (for short one-shot gotta-run-as-fast-as-frickin-possible batch jobs, for example).

You want to read this book. And in the meantime, I can’t wait ’til Rotor Whidbey ships, because this is the kind of stuff I want to rip apart. 🙂

BTW: One very reasonable possibility for a custom host is a Java-like launcher (clr.exe?) that allows you to pass some customization options on the command line, just as java.exe does. We would use it solely in cases like -ms/-mx, to enable an otherwise normal .NET app to have some of its environment customized via the command line rather than having to write our own managed host. Hmm…. On the surface, this wouldn’t represent too much work to toss off in a weekend….

One noticeable thing is missing from the list above, though: JIT compilation, and I think I know why: in the CLR, JIT compilation and bytecode verification are so tightly interwoven that they would be almost impossible to tease apart into anything pluggable. (Still, I’d love to see it, guys….)


Rocky thinks I’ve gone over the edge

In his inimitable style, Rocky weighs in on my recent web services rants:

    I think this is all wrong-headed. The idea of being loosely coupled and the idea of being bound by a contract are in direct conflict with each other. If your service only accepts messages conforming to a strict contract (XSD, C#, VB or whatever) then it is impossible to be loosely coupled. Clients that can’t conform to the contract can’t play, and the service can never ever change the contract so it becomes locked in time.

    Contract-based thinking was all the rage with COM, and look where it got us. Cool things like DoWork(), DoWork2(), DoWorkEx(), DoWorkEx2() and more.
    Is this really the future of services? You gotta be kidding!

Loose coupling isn’t about loosely-bound calls, but there is some common ground, and Rocky, I hate to say it, but if we want to avoid the tight-coupling nature of RPCs, then yes, this is the future of services.

Look, in the beginning, there was a maxim that drove a good deal of how the Internet itself evolved: "Be strict in what you send, be liberal in what you accept." It’s attributed (as best I can ascertain) to the late Jon Postel, who was saying that if we really want this world-wide network thing to work, really work, then we need to get off the lawyering-prone nit-picking your-stuff-put-an-extra-semicolon-where-there’s-not-supposed-to-be-one soapbox and start thinking about ways we can accept data even if it’s not quite perfect.

Look at it this way; dspite th obvius errs n ths sentnc, you stll gt th idea, ya? This is because the human brain is wired in exactly that way: liberal in what it accepts, logically trying to piece together parts into a coherent whole that fits. Is it easier to simply force a strict validation on incoming data? Sure it is, and that’s the default mode issued by most Web services toolkits–"You conform to this schema or the call fails." But what happens when you want to evolve that format later, because one of your clients wants to add some new data that your other clients/partners aren’t prepared to handle? "Um, guys….? Would you mind recompiling against a new WSDL contract, please?" Or maybe, "OK, so we’ve got one endpoint for each of our clients, and now two of them want 7 features, 5 of which are identical but the last two are different. Do we cross-wire URL endpoints back and forth between them, or duplicate the code across two different endpoints, or…?"

Just send me an XML message that (more or less) conforms to a format–whether described by Schema, RelaxNG, or just plain-old-whiteboard-chicken-scratch–and I’ll figure out what to do with it from there. It’s worked thus far for the Internet, and I’m betting that it’s going to take us a lot farther than if we try to create One True Schema that everybody who cares about this service must obey. (Because, as anybody who’s ever tried to develop said schema amongst a collection of partners and vendors knows, it’s almost impossible to get that right the first time, and even more impossible to revise it later.)
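To make that concrete, here’s a minimal Java sketch of a liberal receiver (the LiberalReceiver class and the order/customer element names are my own invention for illustration, not anybody’s real service): it plucks out the one element it understands via DOM and quietly ignores anything a newer client tacked onto the message.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class LiberalReceiver {
    // Extract only the element we understand; unknown siblings are ignored,
    // so a newer client can add elements without breaking us.
    public static String extractCustomer(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        NodeList matches = doc.getElementsByTagName("customer");
        return matches.getLength() > 0 ? matches.item(0).getTextContent() : null;
    }

    public static void main(String[] args) throws Exception {
        // A v2 client sends an extra <priority> element our "schema" never mentioned
        String msg = "<order><customer>Acme</customer><priority>rush</priority></order>";
        System.out.println(extractCustomer(msg)); // prints: Acme
    }
}
```

The schema-evolution headache from above simply doesn’t arise: the v1 receiver keeps working while v2 clients send richer messages, and nobody gets the "would you mind recompiling against a new WSDL" phone call.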


Of commonality and variation

When I was a wee lad, younger than Michael (12) is now, I happened to pick up a copy of The Hobbit, and fell in love. It was swords, and magic, and dragons, and Elves and Dwarves, and… I dunno, it just fired my imagination like nothing I’d ever read before. I became an instant convert to Tolkien, and desperately began to cast around for "more about Hobbits" (which was, ironically, exactly how Tolkien’s publisher put the problem to him in 1940 after The Hobbit was first published: "Write something more about Hobbits").

And then I found The Lord of the Rings trilogy.

It was one of those situations where I knew, intellectually, that this was good stuff, that this was more of what I was craving, but my brain just… couldn’t… parse it. I couldn’t understand any of what was being done or said–I mean, I got through the character dialogues OK, and the action scenes seemed fine, but there was all these references to people and places I’d never heard of, and it was like I was watching a movie in black and white when everybody around me seemed to see it in color. I just felt like I was missing something.

Later, as I got older and happened to flip to the back of the third book of the trilogy, I discovered some of Tolkien’s rich and incredibly detailed history of Middle-Earth (and then later the Silmarillion and the subsequent books published after his death). As I went back and re-read the trilogy again, those pieces started to fall into place, and suddenly I could see The Lord of the Rings in color for the first time. It was a beautiful feeling.

Fast forward to five years ago, and today.

Five years ago, I picked up James Coplien’s Multi-Paradigm Design for C++, and struggled O so hard to read it. I mean, I knew what I was reading was Good Stuff–Cope just has this great analysis of the industry and patterns and other such things that I could see was trying to come up off the page and batter its way through my skull, but I just couldn’t Get It. I’m ashamed to admit, I put the book down after about ten pages or so, and the book sat on my shelves for the last five years. Every so often I’d pick it up, flip through the pages, sigh at the lost opportunity, and back it’d go, on the shelf.

Then, a few nights ago, while wandering around the house restlessly waiting for the tendonitis in my right shoulder to fade some so I could go back to bed, I happened to see the book sitting on my shelf again, and thought, "What the hell, why not?" and picked it up again.

Suddenly I was reading in color all over again.

Coplien’s point, taken now particularly in light of the new movement towards aspect-oriented programming, is that software abstractions hover around two important principles: commonality and variation. "Commonality and variation provide a broad, simple model of abstraction, broader than objects and classes and broad enough to handle most design and programming techniques." In essence, the goal of a language or tool is to find the things in common (hence, commonality) and from there, figure out the differences between them (variation) and capture them as such. Absurdly simple? Yep. But if you look at what object-orientation does, this is exactly the exercise we go through: we find the things in common (and relate them via inheritance) and then the things that are different amongst them (and vary them through overridden methods/properties or added fields/methods/etc). If you look at aspect-orientation, it’s much the same idea, where now the commonality is gathered into aspects rather than classes. Citing Parnas (from 1976!), Cope points out that this isn’t a new technique, and that the idea of "software families" goes back two decades: "Families are collections of software elements related by their commonalities, with individual family members differentiated by their variations."
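As a toy illustration of that exercise (the Payment family below is entirely my own hypothetical, not an example from Cope’s book), the commonality gets hoisted into the base class once, and each family member carries only its variation:

```java
// Commonality: every payment has an amount, captured once in the base class.
abstract class Payment {
    final long cents;
    Payment(long cents) { this.cents = cents; }
    // Variation: each family member differs only in how it settles.
    abstract String settle();
}

class CardPayment extends Payment {
    CardPayment(long cents) { super(cents); }
    String settle() { return "charged " + cents + " cents to a card"; }
}

class CheckPayment extends Payment {
    CheckPayment(long cents) { super(cents); }
    String settle() { return "deposited a check for " + cents + " cents"; }
}

public class Family {
    public static void main(String[] args) {
        // A "software family": related by commonality, differentiated by variation
        Payment[] family = { new CardPayment(500), new CheckPayment(500) };
        for (Payment p : family)
            System.out.println(p.settle());
    }
}
```

Objects happen to express the commonality as a superclass and the variation as overrides; aspects, modules, and (I suspect) services just slice the same two principles differently.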

Still think Cope is off his rocker? Check this out: written in 1999, Cope says,

    Many contemporary methods view design as a phase intermediate to architecture and coding, instead of viewing architecture and coding as products of design. But from a more practical point of view, we can’t separate design from either architecture or implementation. If design is the activity that gives structure to the solution, and if architecture is about structure, isn’t "design" a good term for the activity that produces it? And much of the code is about structure as well. Why shouldn’t that be "design" as well? If you look at how real programmers work, you’ll find they really don’t delineate architecture, design, and implementation in most application domains, regardless of whether the official house method says they should or not. (How many times have you completed the coding before holding the review for your design document?) Object-oriented designers gain insight into the allocation of responsibilities to classes by coding them up. Empirical research on the software design process reveals that most developers have at least partially coded solutions in hand at the time of their design review and thus design decisions continue into the last throes of coding [Cain+1996].

Sounds very agile of him, if you ask me. And it also helps resolve the "design/don’t-design" dilemma that most people face with agile methodologies–in other words, when you’re tapping out classes into the IDE, you’re not just "coding", you’re designing, right there and then. Design doesn’t have to be an exercise that involves UML, it’s an exercise that just involves putting structure around the solution domain, and that can take a lot of different forms.

Now, fast forward to service-orientation and the question I keep asking the world: what is a service? Where is its atomicity? And the answer, of course, lies rooted in commonality and variation: what is common, and what is then different within those common elements? SOA suddenly becomes just another paradigm, along with objects, procedures, modules, aspects and other approaches, because fundamentally SOA seeks the same thing that the rest of them do: commonality and variation. I won’t pretend to be able to answer my own question yet, but certainly Cope’s analysis helps some. And my first reaction, my first cut at answering the question of service granularity is thus: The granularity of a service is higher/coarser than objects (in that objects may collectively make up a service, but not the other way around), but smaller than the machine on which the service sits. It’s not much, but it’s a start, at least to me….


CLR Hosting API and Java equivalents

Attila Szegedi wrote:

    Ted, do you have an idea how do these hooks compare to the JVMTI, the new (meant to be universal) tool interface in JDK 1.5 that supersedes earlier standalone debug/profiling/etc. JVM APIs?

In truth, Attila, the CLR Hosting APIs aren’t really equivalent to JVMTI; the CLR Hosting APIs are more equivalent to the JNI Invocation API than anything else. The CLR has similar APIs to JVMTI in what they call the Profiling API and the Debugging API (sound familiar?), both of which are intended to be used from unmanaged code, as with JVMTI, but aren’t intended to bootstrap the environment into place as does JNI Invocation. So, to imagine an equivalent, picture JNI Invocation working something like this:

JavaVMInitArgs vm_args;
JavaVMOption options[4];

int n = 0;
options[n++].optionString = "-Djava.class.path=.";
options[n++].optionString = "-verbose:jni";

// What follows is my flight of fancy
options[n].optionString = "threadMgr";
options[n++].extraInfo = new MyThreadingManager;
    // Custom class I wrote that extends a ThreadManager base class
    // or something similar
options[n].optionString = "memoryMgr";
options[n++].extraInfo = new MyMemoryManager;
    // Same idea; MyMemoryManager extends a MemoryManager base class
    // that does custom allocation

vm_args.version = JNI_VERSION_1_6;
    // Have to bump up the version # for my new changes
vm_args.options = options;
vm_args.nOptions = n;
vm_args.ignoreUnrecognized = TRUE;

res = JNI_CreateJavaVM(&vm, (void **)&env, &vm_args);

// Assuming res is not negative, we have a valid
// JNIEnv* in env and can do the usual JNI Invocation stuff,
// like look up main() on whatever the host class is, and so on
Now THAT would be nifty. 🙂

Should I (can I) redirect RSS requests?

As I’m contemplating the work involved in setting up a new blogging engine at tedneward.com (which isn’t live yet, so don’t go looking), I found myself thinking about how I could minimize the disruption to people already subscribed to this blog. In particular, I was thinking about creating a "redirect" feature for *this* blog, such that RSS requests (to rss.jsp) would basically get redirected (or pull from) the new blog.

Then I started thinking about that, though, and the idea that effectively I’m co-opting the blog that you *wanted* to read with a blog that *I* want you to read. Is this kosher? Or am I essentially betraying a trust implicit in the RSS-feed mechanism by doing this? I’m serious when I ask you to vote (via email, if you prefer: ted-at-neward-dot-net) and tell me what you think: can I just redirect requests for this blog to the new one, or should I just not even pretend and force you to resubscribe to the new blog?

Cameron announced his new book…

At TheServerSide Java Symposium, Cameron Purdy (creator of "the world’s most expensive HashMap", according to Dion Almaer, among others) announced his new book:

Enterprise Java Hashmap

I can’t say how proud I am that he chose to model his precious work of art after my own. Really. I can’t say. At all. (Wonder if I can sue him for copyright infringement?)



Sun blows me off… again….

    Sun Microsystems, Inc. and the JavaOne(sm) Conference Content Team are grateful for your proposal to present at the 2005 JavaOne conference. The high quality of submissions made the selection process extremely difficult. We regret to inform you that we will be unable to accept your proposal entitled ‘ The Fallacies of Enterprise Computing ‘ | ‘ J2EE and .NET, 2005 edition ‘ | ‘ Introduction to Web Services, 2005 edition ‘ | ‘ Effective Enterprise Java: 7 of the 75 Items ‘.

    Thank you very much for your submission. We appreciate your continued support of the JavaOne conference.

I’m not sure why I would have expected Sun to be any different this year about their proposals–after all, it’s been a pretty regular theme that unless you (a) work for Sun, (b) work closely with Sun, or (c) chant the Sun party line ("small devices will be hot this year, we promise!"), you’re not going to speak at JavaOne. So it has been since 2000, and so it looks to remain forever. Apparently even "proprietary technology company" Microsoft is more willing to allow outside speakers than Sun.

Don’t look for me there–my support for their conference has pretty much dried up. Look for me at the NFJS shows instead, TechEd, or maybe at IBM’s developerWorks….

To Annotate or Not? A rebuttal

Mike Keith has written a piece, To Annotate or Not?, in which he openly criticizes some of the decisions of the JSR-175 committee. As a member of said committee, I thought it might be somewhat constructive to discuss his criticisms and offer up a (potentially) contrary point of view.

He opens his article by offering up a comparison of two metadata-capture schemes, one using XML files to describe a fictitious Remotable component and its methods, and the other using JSR175/JDK 1.5 metadata. At the end of the description, he offers:

    I think it is obvious which approach is less verbose, easier to specify, and more effectively conveyed.

We thought so too, Mike. 🙂 Thanks.

    Other advantages of coupling the metadata with the source code are purely practical. For example, if the application needs to change the addNewItem method to accept an additional backOrder parameter, the application or the entity that changed the addNewItem method must also remember to change the XML file, which may be stored in an entirely different location from the class. This requirement to remember to change the XML file is simply because the method parameters were part of the contextual XML information required to specify the method. The maintenance incentive for using annotations, then, is fairly obvious.

It gets worse, too, when you consider that on any non-trivial application source base, a given programmer may not be the only one to work on and modify a given piece of source code. Thus, when a modification gets made (particularly by a programmer who is new to the system or the source base), they may not even realize that the code needs changing in multiple places (source code and descriptor). This is my classic "college intern" scenario, since it’s often the college intern who (a) doesn’t realize the need to make the change, and (b) often draws these little one-off kinds of changes.

    Similarly, XML that is not connected to the code is also not an integrated part of the same version-controlled element as the code. Changing one element and creating a new version of it does not intrinsically imply creating a new version of the other, although it is possible to configure some version control systems to do such a thing. Although there are cases in which changing one does not require changing the other, even a dependency in one direction (that is, the metadata on the code) points to the more appropriate coupling of the two.

It gets deeper than this, though. If the metadata is stored outside the code, it becomes painful to try and access the metadata in a consistent format. As it stands in his XML-driven format, to access the method in question and discover if it is Remotable requires both Reflection calls and XML parser calls (assuming we’ve already established a well-known location for that file at runtime, which of course all of the J2EE specs have done, buried deep inside the .jar/.ear/.war files as appropriate). Correlating across the two APIs becomes a pain, never mind the problems of programmers accidentally fat-fingering the "addNweItem" method name in the XML metadata file.
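For the sake of illustration (the @Remotable annotation and OrderService class below are invented stand-ins for Mike’s fictitious component, not anything from his article), here’s the single-API story in code: one Reflection call answers the question that the XML approach needed Reflection plus an XML parser plus a well-known descriptor location to answer.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// The metadata lives right on the method it describes
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Remotable { }

public class OrderService {
    @Remotable
    public void addNewItem(String item) { /* ... */ }

    public void auditLog(String entry) { /* ... */ }

    public static void main(String[] args) throws Exception {
        // One API, one question: is this method remotable?
        // Rename the method and the metadata moves with it--nothing to fat-finger.
        Method m = OrderService.class.getMethod("addNewItem", String.class);
        System.out.println(m.isAnnotationPresent(Remotable.class)); // prints: true
    }
}
```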

He then goes on to offer up some criticisms, though:

    So what is the cost of brevity? Specifying the metadata beside the source instead of in a decoupled XML file has certain repercussions. … Take the example of a tool for adding metadata to existing classes. At the first step of the process, we immediately hit the first and most obvious problem. What if only the class files are present but the source code is not available? Annotations may only be read at runtime, not added. Annotating the classes is not an option, and the tool is forced to use XML or some other mechanism external to the class.

We discussed (for some time) the idea of adding an "At Arm’s Length API", so that one could consume metadata without having to go through the Reflection APIs (and thus load the class, rendering it immutable at that point), but had to drop the idea since Java currently has no such API for reading class files as a whole. Or rather, no such API within the J2SE boundaries–the Apache BCEL library serves that role quite impressively (in fact, it’s buried deep inside the Sun J2SE implementation, if you go looking for it–they renamed the packages from "org.apache" to "com.sun.org.apache", but it’s all there), and it was our expectation that BCEL would evolve to fill that role post-release. In the meantime, this lack of low-level access is an obvious hole that should probably be fixed in another JSR, but that’s another debate for another day. In the meantime, it’s entirely feasible to imagine writing a tool that uses BCEL to consume the annotations and modify the .class structures on disk without the corresponding source code… assuming you have some way of even stating what it is you want to modify, and where (which Mike never really suggests a concrete use case for).

    The next step is to actually add the annotations. This becomes a fairly intensive process for the tool, because the source needs to be parsed and the new annotations added at the correct position. Whereas using XML was a simple matter of having to specify the context and then write out the XML according to the given schema, we now have to find the source code for the element, parse it, add the correct syntactical annotation pieces, and then rewrite it all back out. Just explaining it is exhausting.

Pshaw. Write an "apt-let" (a plugin for the apt, or Annotation Processing Tool, utility) that sucks in the metadata and spits it back out in either source (easier but slower) or binary (harder but faster) form. I’m not suggesting either of these is easy, but again, Mike’s use-case here is that you want to take an existing class and hack up its metadata in a third-party form. Which, if you ask me, is really sort of a ludicrous case, unless you’re a tool vendor (like Oracle). See, the idea of metadata was largely to allow programmers, the authors of those code files, to express customized messages to entities that wanted to consume those messages in an out-of-band and nonintrusive format, like Serialization does. It was never intended as a generalized utility for third parties (like system administrators) to add metadata to the source "after the fact".

    Once the annotations have been added, the classes need to be recompiled. For this to be possible, the definitions for the annotations inserted into the source code need to be on the class path. Although this may not be an issue for immediate recompilation, it may be problematic later on. Annotations are, like an XML file, quite happy to exist in environments in which they get completely ignored. If the processing layer for which the annotations were added in the first place uses runtime reflection as a means of interpreting the metadata (which is the typical scenario), the annotations will clearly need to be preserved in the class files until runtime. Fortunately, the VM is more forgiving at class load time of elements that have annotations for which the definitions are not on the class path. It would be nice if the compile-time dependencies were similarly relaxed and any annotations for which interfaces were not found were simply ignored, having a net semantic of a SOURCE RetentionPolicy (that is, the annotations would not be retained in the class files). The XML artifact equivalent to the annotation interface definitions is the schema, which is required only when the XML file is read.

The "annotations are required on the classpath" part of the annotations spec was easily by far the most contentious, and if I recall the discussions correctly, it was the Oracle rep who said that if we required the annotations to be present on the classpath at compilation time, "this will render annotations entirely useless to us". The problem is, though, we couldn’t come up with a scheme that would allow for some kind of lazy-evaluation processing of annotations’ format against the source, and still remain a strongly-typed language. In almost every proposal, it was defeated by the simple question, "But what happens if the programmer fat-fingers the annotation typename? How will we know when the annotation doesn’t exist?" If we never throw some kind of error, we end up with a subtle source of bugs that would plague all of Java programmer-kind until the end of days. Java is a strongly-typed language, Mike, and that means when you compile against something, you need to have that something handy at compile-time to validate against it. If this upsets you, there’s always ECMAScript….

Then we start getting into the heat of his arguments:

    If you liken annotations to a coup, it would have to be one in which no capable or trained government is ready to take power. For metadata programming, annotations are currently on the immature side of the spectrum. Some of the obvious features are missing, and others are inadequate for day-to-day use.

Now THEM’s fightin’ words. 🙂

    Inheritance. There is none. … If it were possible for annotation interfaces to extend other interfaces, either the SyncEvent or AsyncEvent could be used to annotate an event method and an optional debug string could be specified to be printed to the event log if debugging were enabled. Then, just as in code, if another common member were required, you could easily add a common member by adding it to the Event superinterface. Instead, you must choose one of a few unsightly alternatives. The obvious one is to duplicate the member in both of the events. This produces a potential maintenance headache and goes against the commonly accepted software antipractice of cloning code.

Well, to start with, annotations aren’t really interfaces, so we could never allow annotation types to extend interfaces, but if you replace the word "interfaces" in the above with "annotation types", your concern is largely justified. What Mike fails to note is that annotations do compose somewhat cleanly, and what I would suggest as an alternative would be:

public @interface DebugString {
    String value() default "";
}

public @interface SyncEvent {
    DebugString options() default @DebugString;
    boolean allowPreempting();
}

public @interface AsyncEvent {
    DebugString options() default @DebugString;
    boolean joinAfterCompletion();
}

@AsyncEvent(options=@DebugString("myEventMethod called"),
            joinAfterCompletion=true)
public void myEventMethod() { … }

In other words, we’re not going after inheritance as a form of reuse, but as a logical design mechanism–which was sort of what it was intended to do in the first place (modulo the experience of Smalltalk, at least). What’s more, I get a little tired of people constantly trotting out annotations and using examples like "debug strings" or "tracing"–annotations are not aspects, were never intended to be aspects, and will never become aspects! This sort of example needs to be discussed in an AOP forum, not one on annotations.

    You may come up with your own favorite workaround, but the only elegant solution is to support inheritance of annotation types.

No, the only elegant solution is to use the right tool for the job; in this particular case, since he wants to introduce an annotation that somehow introduces behavior into the method, I suggest he look at AspectJ instead.

    Default Values. Thankfully, J2SE 5.0 offers some amount of support for providing defaults, but it is presently rather cumbersome and inadequate. Because null is not allowed as a valid default value, a special "uninitialized" value needs to be reserved for each type. If this is not possible, it will be hard to know, when reading the annotation, whether the default value was actually specified or whether the value was obtained because it was the default (that is, it was not specified). There is often a semantic difference between these types of values. It gets worse with nested annotations, because the default must be an actual annotation. The unpleasant workaround is to create a bogus annotation member that is used simply to distinguish whether the annotation was specified or was provided by default.

This was a case where Neal and Josh simply stated that the Java compiler would have problems consuming "null" as a value; I won’t pretend to understand the details, but let’s be honest–you can’t specify null as a default value in a lot of places inside the Java environment, such as:

public class Counter {
    private int i = null; // Error! null isn't a valid int value
}
and somehow we manage to get along just fine. Would it have been nicer if null could be specified as an annotation member default value? Sure. Is it a requirement? Nope.
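For the curious, the "reserved uninitialized value" workaround Mike alludes to looks roughly like this in practice; `Timeout` and the `-1` sentinel are my own hypothetical names, not anything from the spec, but the pattern is the standard one:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class SentinelDemo {
    // null isn't a legal default, so we reserve a sentinel
    // value to stand in for "not specified"
    @Retention(RetentionPolicy.RUNTIME)
    @interface Timeout {
        int millis() default -1; // -1 means "the caller didn't specify"
    }

    @Timeout
    static void usesDefault() {}

    @Timeout(millis = 500)
    static void usesExplicit() {}

    public static void main(String[] args) throws Exception {
        for (String name : new String[] { "usesDefault", "usesExplicit" }) {
            Method m = SentinelDemo.class.getDeclaredMethod(name);
            int millis = m.getAnnotation(Timeout.class).millis();
            // the sentinel is the only way to tell "defaulted" from "specified"
            System.out.println(name + ": "
                + (millis == -1 ? "not specified" : millis + "ms"));
        }
    }
}
```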

    The resulting example above is arguably no worse than the XML case, in which a default value may be specified only for a simple type. A more useful feature that does not currently exist is when the default is actually dependent on some program element. Default values should be parameterizable and refer to other elements within the same program scope. For example, what if you wanted to make the default refer to a static variable that could be modified by an admin tool? Or if an annotation value were able to be defaulted to the name of the element it is annotating? The ability for default values to be parameterizable would be really helpful for the processor, but in practice this would be rather difficult to implement, because the annotation type does not have any real scope at definition time. Reserved words, such as this, would probably be needed to make default values parameterizable — an achievable goal.

NO, NO, NO. Mike, annotation values have to be known at compile-time, and allowing annotations to depend on other program elements would only work if those program elements could be known at compile-time. If we could allow the default to refer to a static variable, then what, exactly, do we put into the compiled .class file? Remember, annotations have to be stored there, for processing by apt-lets, and the apt-let may not have said static variable handy for dereferencing. It might be convenient for an annotation to be able to reference the name of the element it is annotating, but this easily turns into a slippery slope; first you want the name, then you want the type (for a field) or the parameters (for a method), or the accessibility flag (public/private/whatever), and suddenly you’re looking at having to introduce a "thisJoinPoint"-like modifier just for annotations to reference. It becomes a huge can of worms that keeps the javac authors busy for years.
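To see why, consider what the compiler can and can't fold into the class file; `Level`, `DEBUG`, and `log` below are made-up names for illustration:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class ConstantDemo {
    @Retention(RetentionPolicy.RUNTIME)
    @interface Level {
        int value();
    }

    static final int DEBUG = 3; // compile-time constant expression: legal
    static int dynamic = 3;     // plain static field: NOT a constant

    @Level(DEBUG)      // compiles: the literal 3 is baked into the .class file
    // @Level(dynamic) // would not compile: annotation values must be constants
    static void log() {}

    public static void main(String[] args) throws Exception {
        // what a processor sees is the folded constant, not a field reference
        Level l = ConstantDemo.class.getDeclaredMethod("log")
                              .getAnnotation(Level.class);
        System.out.println(l.value());
    }
}
```

If `DEBUG` were later changed by that admin tool, the class file would still carry the old `3`; that's exactly the disconnect that makes "parameterizable defaults" a non-starter.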

    An allowance for single-value annotation types means that if the name of a member is the magic word "value," the name can be skipped (defaulted to value). Although this is somewhat helpful, I believe that it is a rather arbitrary and unnecessary choice. If a single-member annotation is being used, it is not obvious why it can’t be treated as a "value" member in any case, without actually calling it "value." Because the name is guaranteed to be useful only within a single-member annotation, it doesn’t seem to serve any real purpose, and the ability to assign more-descriptive names to my single-member annotations would be useful.

Actually, if I’m not mistaken, a single member doesn’t have to be named "value"; it can have any name you like, though only "value" buys you the shorthand syntax. And, in fact, I recommend not using it, for maintenance purposes–what happens when you need to add another member to the annotation later? Now you have one member named "value", and it’s not obvious what it’s supposed to be.
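A quick sketch of both spellings (`Marker` and `Named` are made-up annotation names):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class ValueDemo {
    @Retention(RetentionPolicy.RUNTIME)
    @interface Marker { String value(); }

    @Retention(RetentionPolicy.RUNTIME)
    @interface Named { String label(); }

    @Marker("short")            // shorthand: only works because the member is named "value"
    @Named(label = "explicit")  // any other name must be spelled out at the use site
    static void m() {}

    public static void main(String[] args) throws Exception {
        Method method = ValueDemo.class.getDeclaredMethod("m");
        System.out.println(method.getAnnotation(Marker.class).value());
        System.out.println(method.getAnnotation(Named.class).label());
    }
}
```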

    Validation. The allowable set of member value types is fairly restricted and syntactical, and simple value type validation is basically free, just as XML parsers check the type definitions for the base XML schema types. Furthermore, an @Target annotation provides a convenient complementary check by the VM to ensure that annotation types do not annotate inappropriate program elements or elements that were not meant to be annotated by that annotation. It is more difficult, however, to restrict the annotations that may coannotate an element without doing a great deal of code checking and/or annotation rework. It may not make sense for a method to be both an AsyncEvent and a SyncEvent, but what is to stop someone from annotating it that way? The processor would have to look for both annotations and error-check for the unusual case of both existing on a method. Annotations could be reworked to use enumerated types and thereby limit what could be specified. Inheritance could go some of the way toward solving this problem, because you could at least introduce a parent element that included a single Event member, thus constraining the element to only a single event. What would really help, though, is an additional built-in annotation that would allow an annotation definition to specify a set of annotations that are unacceptable peers annotating the same programming element.

But that presumes the annotation definition in question knows the full set of annotations that could ever be applied to a given target, and what happens when an annotation type it doesn’t know about gets added? Should we be strict or liberal in interpreting such a restriction? Strict means your annotation could never appear alongside third-party library annotations you have no problem with; liberal means you’re basically back where we started. I don’t really see this as a problem in the future, but should it become one, it would essentially be a simple thing to create another javac-enforced meta-annotation like the one Mike describes. The hard part would be deciding the semantics of this "Restricted" annotation: strict or liberal.
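In the meantime, the processor-side check Mike describes is only a few lines of code. Here’s a rough sketch using runtime reflection; the `conflicting` helper (and these bare-bones `SyncEvent`/`AsyncEvent` types) are mine, not anything in the platform:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class PeerCheckDemo {
    @Retention(RetentionPolicy.RUNTIME) @interface SyncEvent {}
    @Retention(RetentionPolicy.RUNTIME) @interface AsyncEvent {}

    // the kind of check an apt-let or runtime processor would do today,
    // absent any built-in "Restricted" meta-annotation
    static boolean conflicting(Method m) {
        return m.isAnnotationPresent(SyncEvent.class)
            && m.isAnnotationPresent(AsyncEvent.class);
    }

    @SyncEvent @AsyncEvent
    static void badEvent() {}   // nonsensical: both sync and async

    @AsyncEvent
    static void goodEvent() {}  // fine: exactly one event kind

    public static void main(String[] args) throws Exception {
        System.out.println(conflicting(PeerCheckDemo.class.getDeclaredMethod("badEvent")));
        System.out.println(conflicting(PeerCheckDemo.class.getDeclaredMethod("goodEvent")));
    }
}
```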

    Namespace. A concern in the annotations world is that multiple layers of annotations will start colliding within the same global annotations namespace. For example, some of the common annotation type names, such as @Transaction, are likely to be used by multiple layers to signify slightly different semantics unique to each layer. I agree that this is a valid concern, because one of the strengths of annotations is the potential for terseness (potential because they are not necessarily terse; they must be designed to be so) and making names longer to lessen the possibility of collisions could reduce this. … In the end, the annotations namespace problem is analogous to the class namespace problem, and we may be forced to fully qualify the annotations as a last resort.

Which is exactly what Java does currently for any other class name clash. What Java really needs is yet another language enhancement along the lines of the "using" capability in C#:

import java.awt.List = Listbox; // make "Listbox" an alias
import java.util.List = List; // make "List" an alias, just to be clear

public class App {
    List l = new List(); // no ambiguity: java.util.List
    Listbox lb = new Listbox(); // clearly a java.awt.List
}

which would then render the fully-qualified hideousness problem a thing of the past. JDK 1.6, anyone?

In the end, I don’t necessarily disagree with some of Mike’s points; in fact, I argued for us to allow annotations to inherit just like any other class element (and, in fact, I also wanted us to define a scheme to define annotation elements at the .jar level, so as to integrate Manifest directly into the language, since we’re almost there anyway), but as with any committee-driven effort, others disagreed with me, and the general discussion led to a majority opinion that it wasn’t a good idea. That’s part of what being a citizen in the community means: you win some, you lose some.

In a lot of ways, I wish I could publish the collective email messages that went back and forth among the JSR 175 members, because I hear a lot of criticism that "you never considered …", when in fact, we did consider it, we discussed it, and we rejected it for various reasons. There were some very bright people on that JSR, all of whom I respect deeply (even if I didn’t agree with them), and to suggest that we just casually dismissed an idea or concept essentially insults them all, which I won’t allow to happen without a fight. 🙂

There’s nothing wrong with disagreeing with the decisions we made, so long as you’re willing to offer up something in its place that (a) meets all the criteria of the scope of the JSR, and (b) fits in with the needs of every JSR member on the 175 committee. That’s what we had to do, and in some cases we had to engage in that essential element of every democracy–compromise.



The JavaOne drinking game!

From the Webmink blog:

    Along with Mary and others my session proposal for JavaOne was declined (see, working for Sun isn’t the answer, Ted, maybe it’s not a conspiracy against you?).

Never said it was; just means that Sun isn’t letting anyone else up there to speak, either.

Proposal: The JavaOne drinking game! Every time…

    … a Sun employee gives a talk
    … a JavaOne speaker derides IBM
    … a JavaOne speaker says ".NOT" or derides Microsoft (double if they claim .NET can’t run anywhere but Windows, triple if they say that .NET is a proprietary technology that isn’t a standard but Java is)
    … there’s an EJB 3 reference
    … there’s a Hibernate reference
    … there’s an "AOP" reference (particularly by somebody who’s not really an AOP expert *cough* Bill Burke *cough* Marc Fleury *cough*)
    … a JavaOne speaker gives a talk on J2ME
    … a JavaOne speaker says "Write-Once Run Anywhere"

… you have to take a drink.

My guess is every player is smashed by lunchtime on the first day.