C’mon guys, please?

So I’m playing around with VS 2005 Beta 2 tonight, specifically the System.CodeDom.Compiler APIs, and I’m intrigued by one method on the CodeDomProvider base class: Parse, which takes source code and returns a CodeCompileUnit, the root of a CodeDom tree. I’m thinking, "Now this is an interesting enhancement", because in the v1.x bits, the ICodeParser implementations of CSharpCodeProvider and VJSharpCodeProvider, among others, were null (yep, just returned the constant null–no implementation whatsoever), depriving the .NET world of a doorway to some incredibly useful functionality.

So I fire up a CodeDomProvider for C#, call Parse… and get a NotImplementedException. WTF? VJSharpCodeProvider… NotImplementedException. How about ANY of them? Nope: JScript, MC++, VJ#, C#–not a single one provides an implementation of the Parse() method.
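For the curious, the experiment is about three lines of code; a minimal sketch (the source string and class names are mine, for illustration), showing exactly where the teasing happens:

```csharp
using System;
using System.CodeDom;
using System.CodeDom.Compiler;
using System.IO;
using Microsoft.CSharp;

class ParseExperiment
{
    static void Main()
    {
        CodeDomProvider provider = new CSharpCodeProvider();
        try
        {
            // Parse() is declared on the CodeDomProvider base class,
            // but none of the shipping providers override it...
            CodeCompileUnit unit = provider.Parse(new StringReader("class Foo {}"));
            Console.WriteLine("Got a tree: " + unit);
        }
        catch (NotImplementedException)
        {
            // ...so this is what you actually get
            Console.WriteLine("NotImplementedException, as advertised");
        }
    }
}
```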

This is an open call to the .NET CodeDom team: If you’re not going to provide even a single implementation, then just rip the damn thing out and quit teasing me. Or, better yet, go one step further and actually provide what would be an awesome enhancement to .NET 2.0. Disgusting.

Carlos tries to one-up the .NET BCL team

Carlos wrote (in comments) in response to my last CodeDom post:

    After all these years, still no way to build an AST from source. Tsk, Tsk, Tsk. Recall, you had this in Java when they moved the compiler away from native, that was when? JDK 1.2, just look at the internals.

Carlos, quit being silly–if I wanted to look at internals, I would go read the source for the publicly-available and openly-licensed C# compiler that comes with the SSCLI tarball. That’s not the point here. The CodeDom already has the capacity to go from source to compiled assembly (something which Java still lacks, by the way) using the ICodeCompiler interface of the CodeDomProvider class (which Microsoft dutifully provides for each and every one of their languages, by the way). That’s not what I’m looking for; what I want is the ability to go from source to CodeDom tree representation (a CodeCompileUnit), so that I can go into the AST and hack up–or amend, or weave, or whatever you want to call it–the code from there, then feed it back into the Compiler and have good code come out the other end. And, since CodeDom is language-agnostic, it means I could conceivably… (wait for it) have aspects that are language-agnostic, as well.
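To be concrete about the half that already works, here’s a minimal sketch of the source-to-assembly path (the Greeter class and its Hello method are made-up examples):

```csharp
using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class SourceToAssembly
{
    static void Main()
    {
        string source =
            "public class Greeter { public string Hello() { return \"Hello!\"; } }";

        CodeDomProvider provider = new CSharpCodeProvider();
        CompilerParameters options = new CompilerParameters();
        options.GenerateInMemory = true;

        // Source in, compiled assembly out -- no intermediate tree anywhere
        CompilerResults results = provider.CompileAssemblyFromSource(options, source);

        object greeter = results.CompiledAssembly.CreateInstance("Greeter");
        string hello = (string)greeter.GetType().GetMethod("Hello").Invoke(greeter, null);
        Console.WriteLine(hello); // Hello!
    }
}
```

Note what’s missing: at no point do I ever get my hands on a CodeCompileUnit for that source.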

This is not a hard problem to solve, in many ways–I could, for example, get hold of the various language grammars individually, feed them into ANTLR or SableCC, then compile the results into standalone assemblies and use those to produce the desired AST. That’s not the point, either. I want the CodeDom functionality–to take source, go to a DOM-like tree of elements that I can muck with, then from there go either back to source or to compiled output. I can go from source to compiled assembly in a snap, as well as from DOM to compiled assembly (meaning I don’t have to worry about doing bytecode generation). What’s missing is source-to-DOM.
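The DOM-to-source direction already works, too; a minimal sketch (the Demo namespace and Woven class are names I made up, standing in for what a weaver might inject):

```csharp
using System;
using System.CodeDom;
using System.CodeDom.Compiler;
using System.IO;
using Microsoft.CSharp;

class DomToSource
{
    static void Main()
    {
        // Build a trivial CodeDom tree by hand -- this is the tree I want
        // Parse() to hand me from raw source, so I could modify it here
        CodeCompileUnit unit = new CodeCompileUnit();
        CodeNamespace ns = new CodeNamespace("Demo");
        CodeTypeDeclaration type = new CodeTypeDeclaration("Woven");
        ns.Types.Add(type);
        unit.Namespaces.Add(ns);

        // ...and emit C# source from it; any other provider could emit
        // its own language from the same tree
        CodeDomProvider provider = new CSharpCodeProvider();
        StringWriter writer = new StringWriter();
        provider.GenerateCodeFromCompileUnit(unit, writer, new CodeGeneratorOptions());
        Console.WriteLine(writer.ToString());
    }
}
```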

But in the meantime, let’s also quietly ignore the fact that, unfortunately for Carlos, technically you can’t redistribute anything you write that makes use of the javac classes/internals (since you’re not allowed to redistribute the JDK or any of its constituent parts outside the JRE). Carlos, I hope you don’t get any phone calls from Sun legal staff anytime soon asking to take a look at your code for license violations….

And, if I’m not mistaken, the rewrite occurred with JDK 1.3. (I can check with Neal Gafter or Josh Bloch if this is a serious point of contention; they probably remember a lot better than anyone.)

So, with all due respect, Carlos, once again you’re playing at FUD. Sad, really, because Java has enough good reasons to hold its own against .NET that it doesn’t need this kind of "support"; it just makes Java proponents seem like zealots.

WebServices .NET resources

As part of teaching this week, I created a list of resources for interested WebServices (using .NET) implementors to use. Here ya go:

    People
    Rocky Lhotka (Magenic)
    Clemens Vasters (Newtelligence)
    Simon Guest (Microsoft)
    Michele Leroux Bustamante (iDesign, dasBlonde.com, interopwarriors.com)
    Steve Vinoski (IONA)
    Bruce Schneier (Counterpane)

    Books
    Enterprise Integration Patterns (Hohpe, Woolf)
    Applied XML Programming for Microsoft .NET (Esposito)
    .NET and J2EE Interoperability Toolkit (Guest)
    Essential XML Quick Reference (Skonnard, Gudgin), see the free PDF as well
    Practical Cryptography (Schneier, Ferguson)
    Secrets and Lies (Schneier)
    Beyond Fear (Schneier)

    Resources
    Eight Fallacies
    Note on distributed computing
    Ten Must-have utilities …
    WS-I Basic Profile 1.1 Spec
    Schema Best Practices
    XML Schema Part 0 (Primer)
    SoapMSMQ WSE 2 Channel
    SoapUdp
    Securing the UserName Token

    Tools
    Reflector

I put this here both for their use and in case others might find it interesting, including the Java folks (who, in all honesty, will find everything here relevant except the Reflector tool at the bottom).

Taking responsibility

Recently, the United States Navy ran into a small problem. Or, rather, a submarine of the United States Navy ran into a large one; specifically, the San Francisco, a Los Angeles-class attack submarine, ran into an undersea mountain at full speed (35 knots, roughly 40 miles an hour) that "wasn’t on the charts" the captain and his command crew used to plot their course. One man was killed, close to a hundred were injured, and the US Navy has since found that the captain and command crew were at fault.

Or at least, that’s the sound bite. The truth, as always, is a bit more complicated.

Essentially, the Navy found that the captain and crew plotted their course using a chart that didn’t have the mountain anywhere on it–it apparently formed long after the chart in question was drafted, due to some volcanic activity in the region. Had they used another chart, they might have seen that some kind of geographic feature (that is to say, either a hill or a mountain or whatever you call a big chunk of land at 400 feet depth) was reported in the area but not confirmed. Had there been any indication of anything other than clear water, the captain readily pointed out, they would have chosen an entirely different path, thus avoiding the situation entirely.

But the Navy found the captain and the crew responsible, suspended them from active duty, and essentially held them accountable for the tragedy. The captain and crew aren’t fighting or appealing this decision–they accept it with full cognizance of the fact that it pretty much ends their careers as Navy personnel. Oh, certainly, they can probably find some kind of desk job to slip into and quietly retire in some number of years, but their upward advancement within the Navy is over. They’re quietly accepting the Navy’s findings–that they were at fault–for a good reason: they were at fault.

Before you start thinking I’m being harsh on the captain and his crew–after all, it’s not necessarily fair to hold them responsible for the contents of the charts–bear in mind that the Navy makes the captain’s role explicitly clear: in the final accounting, the captain is the final party responsible for the safety of his ship and his crew. In essence, with the captain, to quote the colloquial, "The buck stops here". Thus, by definition, anything that happens to the ship or its crew falls under the captain’s responsibility. Far from jumping on some bandwagon of "blame the captain" for the tragedy, I have the utmost respect for the man, despite never having met him, because he’s stepping up to the plate.

Recently, at the Reston, VA run of the No Fluff Just Stuff symposium, I was hosting an architecture forum, and was asked, "Why do you think projects fail?" Members of the audience chimed in with their favorites, including "Management!", "Bad requirements!", "Users don’t know how to identify what they want!", and so on. What struck me about these answers was that no one, not a single person in the group, even hinted that programmers might be to blame.

This, to me, is a disturbing trend.

I’m not about to suggest that software projects fail only because of the programmers working on them, just as the San Francisco’s tragedy was hardly due to incompetence on the part of the captain or crew–far from it. The captain and his crew demonstrated their competence clearly, in the basic fact that all but one of the crew survived what could have been far, far worse. Programmers frequently demonstrate their competence by exerting vast amounts of blood, sweat and tears to ship projects that, by all rights, shouldn’t come anywhere close to a production server.

But we must never lose sight of the fact that, at the end of the day, a software project’s responsibility rests with us, the ones who have promised our bosses/clients/customers that we will deliver something useful. Just as you wouldn’t find it acceptable that your package sent through FedEx doesn’t arrive "because the van broke down, hardly our fault, right?", it’s not acceptable that a project fails because "the servers got hacked" or "the requirements weren’t clear" or "the programmers found it took longer than they thought" or….

At the end of the day, the buck stops with us. Regardless of the reason, regardless of whether it’s something we can control, it’s our job to ship software, and if a project doesn’t ship, we should accept that we failed somewhere. Doing so gives us the moral authority to cross-examine the failure, drill down to the root causes that contributed to it, and fix those root causes. It’s not an easy admission, but in the end it’s the one we have to make.

Web + Services

A lot has been written recently about Service-Orientation and Web services and REST and the massive amounts of confusion that seem to surround the whole subject. After much navel-contemplation, I’m convinced that the root of the problem is that there are two entirely orthogonal concepts being tangled up together, and that we need to tease them apart if we’re to make any sense whatsoever out of the whole mess. (And it’s necessary, I think, to make sense out of it, or else we’re going to find ourselves making a LOT of bad decisions that will come back to haunt us over the next five to ten years.)

The gist of the idea is simple: that in the term "Web services", there are two basic concepts we keep mixing up and confusing. "Web", meaning interoperability across languages, tools and platforms, and "services", meaning a design philosophy seeking to correct for the flaws we’ve discovered with distributed objects and components. These two ideas, while definitely complementary, stand alone, and a quick examination of each reveals this.

Interoperability, as an idea, only requires that programs be written with an eye towards doing things that don’t exclude any one platform, tool or technology from playing on the playground with the other kids. For example, interoperability is easy if we use text-based protocols, since everybody knows how to read and write text; hence, HTTP and SMTP and POP3 are highly-interoperable protocols, but DCOM’s MEOW or Java’s JRMP protocols aren’t, since each relies on sending binary little-endian or big-endian-encoded data. Interoperability isn’t necessarily a hard thing to achieve, but it requires an attention to low-level detail that most developers want to avoid. (This desire to avoid low-level details isn’t a criticism–it’s our ability to avoid that kind of detail that allows us to write larger- and larger-scale systems in the first place.)
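A quick sketch of the distinction (the value is arbitrary): the binary form of an integer depends on the sender’s byte order, while the text form reads the same everywhere:

```csharp
using System;

class EndianDemo
{
    static void Main()
    {
        int value = 1024;

        // Binary wire format: the byte order depends on the sender's
        // platform, so the receiver has to know (or guess) the endianness
        byte[] raw = BitConverter.GetBytes(value);
        Console.WriteLine(BitConverter.ToString(raw)); // e.g. 00-04-00-00 on little-endian x86

        // Text wire format: reads the same on every platform
        Console.WriteLine(value); // 1024
    }
}
```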

This "seeking to avoid exclusion" requirement for interoperability is why we like using XML so much. Not only is it rooted in plain-text encoding, which makes it relatively easy to pass around multiple platforms, but its ubiquity makes it something that we can reasonably expect to be easily consumed in any given language or platform. Coupled with recent additions to build higher-order constructs on top of XML, we have a pretty good way of representing data elements in a way that lots of platforms can consume. Does interoperability require XML to work? Of course not. We’ve managed for the better part of forty years to interoperate without XML, and we probably could have kept on doing quite well without it; XML makes things easier, nothing more.

"Services", on the other hand, is a design philosophy that seeks to correct the major failures of distributed object and distributed component design. It’s an attempt to create "things" that are more resilient to outages, more secure, and more easily versioned and evolved–things that objects/components never really addressed or solved.

For example, building services to be autonomous (as per the "Second Tenet of Service-Orientation", as coined by Mr. Box) means that the service has to recognize that it stands alone, and minimize its dependencies on other "things" where possible. Too much dependency in distributed object systems meant that if any one cog in the machine went out for some reason, the entire thing came grinding to a halt–a particularly wasteful exercise when over three-quarters of the rest of the code really had nothing to do with the cog that failed. But, because everything was synchronous RPC client/server calls, one piece down somewhere on the back-end meant the whole e-commerce front-end came to a shuddering, screeching halt while we figured out why the logging system couldn’t write any more log messages to disk.

Or, as another example, the First Tenet states that "Boundaries are explicit"; this is a well-recognized flaw with any distributed system, as documented back in 1994 by Waldo, Wyant, Wollrath, and Kendall in their paper "A Note on Distributed Computing". Thanks to the fact that traversing the network is an expensive and potentially error-prone action, past attempts to abstract away the details of the network ("Just pretend it’s a local call") eventually result in nothing but abject failure. Performance failure, scalability failure, data failure–you name it, they’re all consequences of treating distributed communication as local. It’s enough to draw the conclusion that "well-designed distributed objects are just a contradiction in terms".

There’s obviously more that can be said of both the "Web" angle and the "Services" angle, but hopefully enough is here to recognize the distinction between the two. We have a long way to go with both ideas, by the way. Interoperability isn’t finished just because we have XML, and clearly questions still loom with respect to services, such as the appropriate granularity of a service, and so on. Work remains. Moreover, the larger question still looms: if there is a distinction between them, why bring them together into the same space? The short answer is, "Because individually, each is interesting; collectively, they represent a powerful means for designing future systems." By combining interoperability with services, we create "things" that can effectively stand alone for the foreseeable future.

And in the end, isn’t that what we’re supposed to be doing?

Daniel in the Lion’s Den

So here I am, sitting at an Addison-Wesley author signing at TechEd, among such .NET luminaries as Mark Fussell, Ken Henderson, Scott Schnoll, and Neil Roodyn. And the book I’m supposedly signing while I’m here?

Effective Enterprise Java, of course. Talk about feeling just a wee bit out of place….

Where’d you go, anyway?

A lot of people have been sending me the odd email, asking if everything’s OK and if I’ve given up blogging or something, due to the relative… OK, let’s be honest, TOTAL… absence of blog posts from me over the last two months. To answer the questions in relative order of ask:

    Have you given up blogging? Are you kidding? Blogging is where I trot out half-baked ideas and get them shot up full of holes so I know how better to structure my arguments in the future. Give that up? Not on your life!
    So what’s with the big silence? Honestly, just general busy-ness. We’re in the process of moving up to Seattle (which will take place mid-July, by the way), I’m in the process of doing a couple of books (which I plan to make public before too long), plus I’ve had a number of conferences, several of which were intercontinental, so I’m just plain tuckered out. When you can’t remember the names of your kids when you get home, blogging just doesn’t seem real high on the priority list.
    Wait a minute… moving to Seattle? Yep, Redmond.
    Are you…? No, I am not taking a job at Microsoft. Or Amazon, for that matter. I’m moving up there because (a) I like the area, (b) a lot of my friends are in the area, (c) a lot more of my friends will be guaranteed to be coming to the area (largely because that’s where Microsoft is), and (d) there’s nothing really holding me to the Sacramento area besides inertia. Besides, there’s this curious effect, how money just somehow seems to find its way from the Microsoft coffers into the hands of those gathered close to campus, employee or not….
    Sounds like you’re making the switch to a .NET guy. Hardly. I’m part of JSR 261, the JSR for bringing WS-Addressing into the JAX* suite of specs, and I have lots of plans to continue speaking and writing in the Java space for a long time to come. Make no mistake, Java is hardly "done" (despite what some of my .NET-favoring friends believe), and I plan on riding the Java language for quite a while.
    So are you going to come back to blogging anytime soon? Sure, as soon as things slow down some–I’ve got a ton of blog posts just sort of hovering at the edge of my consciousness waiting to be written, publicized, and shot full of holes, I just need time to do the writing. Anybody want to volunteer to ghost-blog for me? 🙂

By the way, I’ll make a formal announcement before too long, but be aware that during the move the neward.net site (my blog and my wife’s blog, at the root of the site) will likely be down for a week or two. During that time, I’ll probably be revisiting my hosting options as well as my blogging engine options, so don’t be too surprised if the blog "forks" into a personal blog and professional blog, as others (such as Rocky) have done before me.

In the meantime, thanks for listening! Er… reading, I mean.

JEE? JSE? Oh, brother

Graham Hamilton notes that Sun has decided–finally–that the "2" in "J2EE" and "J2SE" sounds really awkward when coupled to a version number (a la "J2EE 5.0"). So Sun’s marketing and branding team has decided, roughly five years after we’ve all gotten used to having it there, to drop the "2" and make it just "Java Platform, Enterprise Edition". (Frankly, I refuse to acknowledge the change; I just can’t bring myself to call it "JPEE".)