WebServices .NET resources

As part of teaching this week, I created a list of resources for developers implementing Web services on .NET. Here ya go:

    Rocky Lhotka (Magenic)
    Clemens Vasters (Newtelligence)
    Simon Guest (Microsoft)
    Michele Leroux Bustamante (iDesign, dasBlonde.com, interopwarriors.com)
    Steve Vinoski (IONA)
    Bruce Schneier (Counterpane)
    Enterprise Integration Patterns (Hohpe, Woolf)
    Applied XML Programming for Microsoft .NET (Esposito)
    .NET and J2EE Interoperability Toolkit (Guest)
    Essential XML Quick Reference (Skonnard, Gudgin), see the free PDF as well
    Practical Cryptography (Schneier, Ferguson)
    Secrets and Lies (Schneier)
    Beyond Fear (Schneier)
    Eight Fallacies
    Note on distributed computing
    Ten Must-have utilities …
    WS-I Basic Profile 1.1 Spec
    Schema Best Practices
    XML Schema Part 0 (Primer)
    SoapMSMQ WSE 2 Channel
    Securing the UserName Token

I put this here both for their use and in case others might find it interesting, including the Java folks (who, in all honesty, will care about everything here except the Reflector tool at the bottom).

Taking responsibility

Recently, the United States Navy ran into a small problem. Or, rather, a submarine of the United States Navy ran into a large one; specifically, the San Francisco, a Los Angeles-class attack submarine, ran at full speed (35 knots, roughly 40 miles an hour) into an undersea mountain that "wasn’t on the charts" the captain and his command crew used to plot their course. One man was killed, close to a hundred were injured, and the US Navy has since found that the captain and command crew were at fault.

Or at least, that’s the sound bite. The truth, as always, is a bit more complicated.

Essentially, the Navy found that the captain and crew plotted their course using a chart that didn’t have the mountain anywhere on it; it apparently formed, due to volcanic activity in the region, long after the chart in question was drafted. Had they used another chart, they might have seen that some kind of geographic feature (that is to say, a hill or a mountain or whatever you call a big chunk of land at 400 feet of depth) was reported in the area but not confirmed. Had there been any indication of anything other than clear water, the captain readily pointed out, they would have chosen an entirely different path, thus avoiding the situation entirely.

But the Navy found the captain and the crew responsible, suspended them from active duty, and essentially held them accountable for the tragedy. The captain and crew aren’t fighting or appealing this decision; they accept it with full cognizance of the fact that it pretty much ends their careers as Navy personnel. Oh, certainly, they can probably find some kind of desk job to slip into and quietly retire from in some number of years, but their upward advancement within the Navy is over. They’re quietly accepting the Navy’s findings that they were at fault, for a good reason: they were at fault.

Before you start thinking I’m being harsh on the captain and his crew–after all, it’s not necessarily fair to hold them responsible for the contents of the charts–bear in mind that the Navy makes the captain’s role explicitly clear: in the final accounting, the captain is the final party responsible for the safety of his ship and his crew. In essence, with the captain, to quote the colloquial, "The buck stops here". Thus, by definition, anything that happens to the ship or its crew falls under the captain’s responsibility. Far from jumping on some bandwagon of "blame the captain" for the tragedy, I have the utmost respect for the man, despite never having met him, because he’s stepping up to the plate.

Recently, at the Reston, VA run of the No Fluff Just Stuff symposium, I was hosting an architecture forum, and was asked, "Why do you think projects fail?" Members of the audience chimed in with their favorites, including "Management!", "Bad requirements!", "Users don’t know how to identify what they want!", and so on. What struck me about these answers was that no one, not a single person in the group, even hinted that programmers might be to blame.

This, to me, is a disturbing trend.

I’m not about to suggest that the only reason software projects fail is the programmers working on them, just as the San Francisco’s tragedy was hardly due to incompetence on the part of the captain or crew; far from it. The captain and his crew demonstrated their competence clearly, in the basic fact that all but one of the crew survived what could easily have been far, far worse. Programmers frequently demonstrate their competence by exerting vast amounts of blood, sweat and tears to ship projects that, by all rights, shouldn’t come anywhere close to a production server.

But we must never lose sight of the fact that, at the end of the day, a software project’s responsibility rests with us, the ones who have promised our bosses/clients/customers that we will deliver something useful. Just as you wouldn’t find it acceptable that your package sent through FedEx doesn’t arrive "because the van broke down, hardly our fault, right?", it’s not acceptable that a project fails because "the servers got hacked" or "the requirements weren’t clear" or "the programmers found it took longer than they thought" or….

At the end of the day, the buck stops with us. Regardless of the reason, regardless of whether it’s something we can control, it’s our job to ship software, and if a project doesn’t ship, we should accept that we failed somewhere. Doing so gives us the moral authority to cross-examine the failure, drill down to the root causes that contributed to it, and fix those root causes. It’s not an easy admission, but in the end it’s the admission we have to make.

Web + Services

A lot has been written recently about Service-Orientation and Web services and REST and the massive amounts of confusion that seem to be surrounding the whole subject. After much navel-contemplation, I’m convinced that the root of the problem is that there are two entirely orthogonal concepts being tangled up together, and that we need to tease them apart if we’re to make any sense whatsoever out of the whole mess. (And it’s necessary, I think, to make sense out of it, or else we’re going to find ourselves making a LOT of bad decisions that will come back to haunt us over the next five to ten years.)

The gist of the idea is simple: that in the term "Web services", there are two basic concepts we keep mixing up and confusing. "Web", meaning interoperability across languages, tools and platforms, and "services", meaning a design philosophy seeking to correct for the flaws we’ve discovered with distributed objects and components. These two ideas, while definitely complementary, stand alone, and a quick examination of each reveals this.

Interoperability, as an idea, only requires that programs be written with an eye towards doing things that don’t exclude any one platform, tool or technology from playing on the playground with the other kids. For example, interoperability is easy if we use text-based protocols, since everybody knows how to read and write text; hence, HTTP and SMTP and POP3 are highly interoperable protocols, but DCOM’s MEOW and Java’s JRMP protocols aren’t, since each relies on sending binary data in a particular (little-endian or big-endian) encoding. Interoperability isn’t necessarily a hard thing to achieve, but it requires an attention to low-level detail that most developers want to avoid. (This desire to avoid low-level details isn’t a criticism; it’s our ability to avoid that kind of detail that allows us to write larger- and larger-scale systems in the first place.)
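To make the endianness point concrete, here’s a quick sketch of my own (purely illustrative, not lifted from any of those protocols): the same integer yields entirely different bytes depending on which byte order the sender assumed, while its text form is unambiguous on every platform.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;

// Why text beats raw binary for interoperability: the same int produces
// different bytes depending on the endianness the sender assumed, while
// its text representation reads the same everywhere.
public class EndianDemo {
    public static void main(String[] args) {
        int value = 1025; // 0x00000401
        byte[] big = ByteBuffer.allocate(4)
                .order(ByteOrder.BIG_ENDIAN).putInt(value).array();
        byte[] little = ByteBuffer.allocate(4)
                .order(ByteOrder.LITTLE_ENDIAN).putInt(value).array();
        System.out.println(Arrays.toString(big));    // [0, 0, 4, 1]
        System.out.println(Arrays.toString(little)); // [1, 4, 0, 0]
        System.out.println(value);                   // 1025 -- readable anywhere
    }
}
```

A receiver that guesses the wrong byte order reads 1025 as 16,843,008; a receiver handed the text "1025" has nothing to guess about.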

This "seeking to avoid exclusion" requirement for interoperability is why we like using XML so much. Not only is it rooted in plain-text encoding, which makes it relatively easy to pass around multiple platforms, but its ubiquity makes it something that we can reasonably expect to be easily consumed in any given language or platform. Coupled with recent additions to build higher-order constructs on top of XML, we have a pretty good way of representing data elements in a way that lots of platforms can consume. Does interoperability require XML to work? Of course not. We’ve managed for the better part of forty years to interoperate without XML, and we probably could have kept on doing quite well without it; XML makes things easier, nothing more.

"Services", on the other hand, names a design philosophy that seeks to correct for the major failures of distributed object and distributed component design. It’s an attempt to create "things" that are more resilient to outages, more secure, and more easily versioned and evolved, qualities that objects/components never really addressed or solved.

For example, building services to be autonomous (as per the "Second Tenet of Service-Orientation", as coined by Mr. Box) means that the service has to recognize that it stands alone, and minimize its dependencies on other "things" where possible. Too much dependency in distributed object systems meant that if any one cog in the machine went out for some reason, the entire thing came grinding to a halt, a particularly wasteful exercise when over three-quarters of the rest of the code really had nothing to do with the cog that failed. But, because everything was synchronous RPC client/server calls, one piece down somewhere on the back end meant the whole e-commerce front-end system came to a shuddering, screeching pause while we figured out why the logging system couldn’t write any more log messages to disk.
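As a purely illustrative sketch (the names and sizes are mine, not from any particular product), here’s one way to keep a non-essential dependency like logging from stalling the request path: hand the message to a bounded in-memory queue, and drop it rather than block when the log writer falls behind or dies.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Decoupling a non-essential dependency (logging) from the request path:
// instead of a synchronous call that stalls the whole front end when the
// logger is down, callers hand off to a bounded queue and move on.
public class AsyncLogger {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);

    public AsyncLogger() {
        Thread writer = new Thread(() -> {
            try {
                while (true) {
                    String msg = queue.take();
                    // stands in for a write to a possibly-slow log store
                    System.out.println("LOG: " + msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        writer.setDaemon(true);
        writer.start();
    }

    // Never blocks the caller: returns false (message dropped) when full.
    public boolean log(String msg) {
        return queue.offer(msg);
    }

    public static void main(String[] args) throws InterruptedException {
        AsyncLogger logger = new AsyncLogger();
        boolean accepted = logger.log("order 42 placed");
        Thread.sleep(200); // give the background writer time to drain
        System.out.println("accepted=" + accepted);
    }
}
```

The design choice is the point: the front end’s contract with the logger becomes "best effort, no waiting", so a dead logging system costs you log messages, not your e-commerce site.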

Or, as another example, the First Tenet states that "Boundaries are explicit"; this is a well-recognized flaw with any distributed system, as documented back in 1994 by Waldo, Wyant, Wollrath, and Kendall in their paper "A Note on Distributed Computing". Because traversing the network is an expensive and potentially error-prone action, past attempts to abstract away the details of the network ("Just pretend it’s a local call") have resulted in nothing but abject failure. Performance failure, scalability failure, data failure, you name it, they’re all consequences of treating distributed communication as local. It’s enough to draw the conclusion that "well-designed distributed objects" is just a contradiction in terms.

There’s obviously more that can be said of both the "Web" angle and the "Services" angle, but hopefully enough is here to recognize the distinction between the two. We have a long way to go with both ideas, by the way. Interoperability isn’t finished just because we have XML, and clearly questions still loom with respect to services, such as the appropriate granularity of a service, and so on. Work remains. Moreover, the larger question still looms: if there is a distinction between them, why bring them together into the same space? The short answer is, "Because individually, each is interesting; collectively, they represent a powerful means for designing future systems." By combining interoperability with services, we create "things" that can effectively stand alone for the foreseeable future.

And in the end, isn’t that what we’re supposed to be doing?

Daniel in the Lion’s Den

So here I am, sitting at an Addison-Wesley author’s signing, amongst such .NET luminaries as Mark Fussell, Ken Henderson, Scott Schnoll, and Neil Roodyn, amongst others. And the book I’m supposedly signing while I’m here at this author signing at TechEd?

Effective Enterprise Java, of course. Talk about feeling just a wee bit out of place….

Where’d you go, anyway?

A lot of people have been sending me the odd email, asking if everything’s OK and if I’ve given up blogging or something, due to the relative… OK, let’s be honest, TOTAL… absence of blog posts from me over the last two months. To answer the questions in relative order of ask:

    Have you given up blogging? Are you kidding? Blogging is where I trot out half-baked ideas and get them shot up full of holes so I know how better to structure my arguments in the future. Give that up? Not on your life!
    So what’s with the big silence? Honestly, just general busy-ness. We’re in the process of moving up to Seattle (which will take place mid-July, by the way), I’m in the process of doing a couple of books (which I plan to make public before too long), plus I’ve had a number of conferences, several of which were intercontinental, so I’m just plain tuckered out. When you can’t remember the names of your kids when you get home, blogging just doesn’t seem real high on the priority list.
    Wait a minute… moving to Seattle? Yep, Redmond.
    Are you…? No, I am not taking a job at Microsoft. Or Amazon, for that matter. I’m moving up there because (a) I like the area, (b) a lot of my friends are in the area, (c) a lot more of my friends will be guaranteed to be coming to the area (largely because that’s where Microsoft is), and (d) there’s nothing really holding me to the Sacramento area besides inertia. Besides, there’s this curious effect, how money just somehow seems to find its way from the Microsoft coffers into the hands of those gathered close to campus, employee or not….
    Sounds like you’re making the switch to a .NET guy. Hardly. I’m part of JSR 261, the JSR for bringing WS-Addressing into the JAX* suite of specs, and I have lots of plans to continue speaking and writing in the Java space for a long time to come. Make no mistake, Java is hardly "done" (despite what some of my .NET-favoring friends believe), and I plan on riding the Java language for quite a while.
    So are you going to come back to blogging anytime soon? Sure, as soon as things slow down some–I’ve got a ton of blog posts just sort of hovering at the edge of my consciousness waiting to be written, publicized, and shot full of holes, I just need time to do the writing. Anybody want to volunteer to ghost-blog for me? 🙂

By the way, I’ll make a formal announcement before too long, but be aware that during the move the neward.net site (my blog and my wife’s blog, at the root of the site) will likely be down for a week or two. During that time, I’ll probably be revisiting my hosting options as well as my blogging engine options, so don’t be too surprised if the blog "forks" into a personal blog and professional blog, as others (such as Rocky) have done before me.

In the meantime, thanks for listening! Er… reading, I mean.

JEE? JSE? Oh, brother

Graham Hamilton notes that Sun has decided–finally–that the "2" in "J2EE" and "J2SE" sounds really awkward when coupled to a version number (a la "J2EE 5.0"). So Sun’s marketing and branding team have decided, roughly five years after we’ve all gotten used to having it there, to drop the "2" and make it just "Java Platform, Enterprise Edition". (Frankly, I refuse to acknowledge that change; I just can’t bring myself to call it "JPEE".)

Is Sun getting the component religion?

JSR 277 is to define

    "a distribution format and a repository for collections of Java code and related resources. It also defines the discovery, loading, and integrity mechanisms at runtime."

… which sounds a lot like components to me. In particular, they’re hitting a lot of my "hot buttons" regarding the use of Jar files when they write in Section 2:

    Java Archives (JARs) are widely used as both the distribution and execution format for Java applications. The JAR format dates back to the mid-1990s, and it has not scaled particularly well in either of these roles. JAR files are hard to distribute, hard to version, and hard to reference in general.

Amen to that, though I would take issue with the idea that they’re hard to distribute–a bit more verbose and/or less compact than we’d like, perhaps, but not a killing blow. The lack of versioning and references between Jars, though, that hurts.
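For reference, the closest thing the JAR format offers today is the manifest’s Class-Path attribute: a list of relative, unversioned paths, which is exactly the fragility the JSR describes. A hypothetical example (the jar names here are made up):

```text
Manifest-Version: 1.0
Main-Class: com.example.Main
Class-Path: lib/commons-logging.jar lib/xerces.jar
```

Nothing in that manifest says which version of either jar is acceptable, and if either file moves during deployment, the references silently break.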

    Distributing a simple Java application is considered to be a complicated task by many developers because it often involves creating a native installer to package multiple JARs into a distribution unit, and it sometimes involves converting the application into a Java applet or JNLP (Java Network Launching Protocol) application for web-based deployment.

Oh, sing it loud, sing it proud. Doing standalone/"rich client" Java application installations has been "Unzip this .jar file" for far too many years. (In truth, it’s a testament to Java’s standalone versatility that we could get away with that for this long, but we certainly could use something a bit more robust and standardized.)

    There is no built-in versioning support in the JAR format. There is no reliable mechanism for expressing, resolving, and enforcing the dependency of one JAR upon another. Referencing a JAR, moreover, involves specifying it in the classpath. Since the path of a JAR may change during deployment, developers are forced to fix up all the references to the deployed JARs as part of the deployment process.

Yes! But let’s be careful here, too, as one of the leading candidates for Java to model itself after (.NET) has its own share of issues in this space, and simple referencing of modules (as I guess they’ve decided to call them) isn’t necessarily going to solve all of our problems, either. It’s a good start, though….

    Developers also find it quite difficult to deploy installed Java extensions (a.k.a optional packages) because they can easily run into issues like versioning conflict and namespace collision. Java extensions can currently only be installed into a specific Java Runtime Environment (JRE); it is basically impossible to arrange for an installed extension to be shared across multiple JRE installations.

This is one of the first areas where I disagree with the proposal thus far; the ability to set extensions on a per-JRE basis is one of Java’s strengths, in my opinion, and helps avoid some of the versioning/side-by-side issues currently plaguing the .NET crowd, specifically those at work trying to integrate SQL Server and .NET 2.0. (Microsoft’s recent pullback from its "lots of managed code in the OS" stance for Longhorn is a testament to the concern, since–as it’s been explained to me–the issue is essentially how to version the .NET runtime without breaking Longhorn code in the process.) One of Java’s strengths is its multiple-JRE stance; let’s not do anything to break that.

    The specification of the Java Module System should define an infrastructure that addresses the above issues. Its components are likely to include:

        A distribution format (i.e., a Java module) and its metadata as a unit of delivery for packaging collections of Java code and related resources. The metadata would contain information about a module, the resources within the module, and its dependencies upon other modules. The metadata would also include an export list to restrict resources from being exposed outside the module unintentionally. The metadata may allow subset of exposed resources to be used by other modules selectively.
        A versioning scheme that defines how a module declares its own version as well its versioned dependencies upon other modules.
        A repository for storing and retrieving modules on the machine with versioning and namespaces isolation support.
        Runtime support in the application launcher and class loaders for the discovery, loading, and integrity checking of modules.
        A set of support tools, including packaging tools as well as repository tools to support module installation and removal.

This is awesome. This is exactly what Java’s needed for years. I wish it had been there from the beginning, but I guess 10 years later isn’t too bad, as major improvements in the environment go. And make no mistake about it, this is about as major an improvement to the platform as you can get without significantly rewriting the ClassLoading scheme from the ground up.

Now if we can just get the Isolates JSR back off the ground….

If you ever needed convincing that Java generics are badly done…

… might I suggest Ken Arnold’s recent blog post on the subject. My favorite quote from the piece:

    Or, to show the same point in brief, consider this: Enum is actually a generic class defined as Enum<T extends Enum<T>>. You figure it out. We gave up trying to explain it. Our actual footnote on the subject says:

        Enum is actually a generic class defined as Enum<T extends Enum<T>>. This circular definition is probably the most confounding generic type definition you are likely to encounter. We’re assured by the type theorists that this is quite valid and significant, and that we should simply not think about it too much, for which we are grateful.

    And we are grateful. But if we (meaning David) can’t explain it so programmers can understand it, something is seriously wrong.

Now, I can’t be certain whether he’s criticizing generic types in general (meaning he would be equally distressed at C++ templates–which admittedly are far more confusing than Java’s generics mechanism–or .NET’s generic types, which, while simpler than C++ templates, can still be somewhat confusing, as ex-DM colleague and still-current friend Joe Hummel points out), or the Java implementation of them; either way, it stresses an important point: approach genericizing your code with care, regardless of your language, be it C++, Java, or .NET. The complexities can sneak up on you when you least expect them.
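For the curious, the pattern isn’t magic, just hard to read. Here’s a minimal, made-up example (an Ordered base class of my own invention, standing in for Enum) showing what the recursive bound buys you: compareTo accepts the subtype itself, not the raw base class.

```java
// A minimal illustration of the self-referential ("F-bounded") generic
// pattern behind java.lang.Enum. The bound E extends Ordered<E> lets
// compareTo take the concrete subtype as its parameter type.
abstract class Ordered<E extends Ordered<E>> {
    private final int ordinal;
    protected Ordered(int ordinal) { this.ordinal = ordinal; }
    public final int ordinal() { return ordinal; }

    // Thanks to the recursive bound, a Color can only be compared to
    // another Color, never to some other Ordered subtype.
    public final int compareTo(E other) {
        return Integer.compare(ordinal, other.ordinal());
    }
}

final class Color extends Ordered<Color> {
    static final Color RED = new Color(0);
    static final Color GREEN = new Color(1);
    private Color(int ordinal) { super(ordinal); }
}

public class SelfTypeDemo {
    public static void main(String[] args) {
        System.out.println(Color.RED.compareTo(Color.GREEN)); // prints -1
        // Color.RED.compareTo(someOtherOrderedSubtype) would not compile.
    }
}
```

So the circular-looking declaration exists to give you a type-safe self-reference; understanding that doesn’t make it any easier to explain over coffee, which is rather Arnold’s point.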

JAX-WS and Indigo RC Interop

The Java/.NET integration story keeps getting better and better every day–two tidbits in particular stand out thus far.

From Eduardo Pelegri-Llopart’s weblog,

    I knew MTOM support was already implemented in recent JAXB 2.0 builds but I had missed that the full integration with JAX-WS 2.0 was also working. I just talked with Rajiv and he tells me that last night Simon and Rags got it working between the Indigo Beta RC and the JAX-WS 2.0 EA2 released last week. I hope I won’t jinx by talking about it, but Simon and Rags will demo it during their session this afternoon. … The session is TS-9866, "Advanced Web Services Interoperability", 2:45-3:45 in the Yerba Buena Theater.

And Simon echoes a similar story (though he did it with WSE 3, not Indigo) on his weblog:

    In my JavaOne session yesterday I showed (what I believe to be) the first MTOM Interop demo between .NET and Java using publicly available toolkits. For those that don’t know MTOM (Message Transmission Optimization Mechanism) is the new specification for optimizing the transmission and/or wire format of SOAP messages. Primarily this means that we have a new standard that allows the sending of attachments over Web Services – one that the industry agrees on, and one that is composable with the other WS-* specifications.

I still have to touch base with Simon to find out if there’s also an Indigo demo with JAX-WS* that he and Rajiv did, as I’ll want to steal… um, I mean leverage… it for my Indigo/Java talk for TechEd Europe in a week or two. 🙂

Better yet, in other news,

    You know who you are. We have finally publicly announced it; we will have a new JDBC driver for Sql Server 2005.

Microsofties at JavaOne

I’ve been cruising through the MSDN blog posts, and I’ve been surprised by the number of Microsoft FTEs (full-time employees) who are at JavaOne this year. Most interesting are their thoughts and opinions about being at JavaOne; not a single one (thus far in my reading) has recounted a negative tale, meaning it sounds like they’ve been given a pretty warm reception. Or, perhaps it would be better to say, they’ve NOT been given the cold shoulder, something I genuinely worried about.

What strikes me most of all, though, is wondering how much active exposure to the Java space is going to affect the .NET 3.0 ("Orcas") release. For example, CyrusN makes this comment (which I’ve excerpted from the longer post):

    I got to go see the BirdsOfAFeather talk with Josh Bloch concerning the new collection in Java1.5 (which you can read about: here) and the upcoming collections they have planned for the future (which you can read about: here). I’ve long been a fan of the java collections and i’ve found that they’ve normally held a good balance between simplicity and power. It’s actually a case of "Goldilocks and the Three Bears" for me. stl is too complex and painful to use, .Net is too simplistic and limited in power, whereas java get’s it just about right. It’s not perfect, but it’s flexible enough to handle almost anything you’d ever need. I always found the deep use of abstractions to be enormously helpful for writing your own special collections while only writing 1/10th or evern 1/100th of the code necessary to implement the full interface. It’s also not cluttered with a gazillion interfaces like i’ve seen in other packages which isn’t especially helpful in a language like java which doesn’t have unions.

What struck me is this: if the Collections API seems "just right", can we expect a richer .NET API, one that closely mirrors the Collections API, to come in Orcas? Believe me, I would be the first to vote for that; I’m with Cyrus on this one. I love the Collections API’s approach, and have actually suggested that .NET developers have a look at it for ideas for their own collections classes.
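To see what Cyrus means about the deep abstractions, here’s a small example of my own (not from his post): a complete, read-only List built by overriding just two methods of java.util.AbstractList, with iteration, toString, equals, contains and the rest inherited for free from the skeleton.

```java
import java.util.AbstractList;

// A read-only List of the first n squares. AbstractList only requires
// get(int) and size(); everything else (iterator, contains, subList,
// toString, equals, hashCode, ...) comes from the abstract skeleton.
class Squares extends AbstractList<Integer> {
    private final int n;
    Squares(int n) { this.n = n; }

    @Override public Integer get(int index) {
        if (index < 0 || index >= n) throw new IndexOutOfBoundsException();
        return index * index;
    }

    @Override public int size() { return n; }
}

public class CollectionsDemo {
    public static void main(String[] args) {
        // toString, like everything else, is inherited from the skeleton.
        System.out.println(new Squares(5)); // prints [0, 1, 4, 9, 16]
    }
}
```

That’s the "1/10th of the code" effect he’s describing: the interface is rich, but the skeleton classes mean you implement only the parts that are genuinely specific to your collection.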

Now, the interesting challenge will be to see if any (and how many) Sun and/or other Java-developing companies or individuals show up at PDC this year to start… um… leveraging more ideas from the .NET space in return. 🙂