“What the hell…?”

I know. I said exactly the same thing.

Some of you reading my weblog or visiting the site probably wondered if I’d fallen off the face of the planet–I had no weblog, I had no site, I didn’t even have any kind of email connectivity, and if you’re part of my messenger buddy list, you didn’t even see me online that much. Some people tried to call, and got nothing but voice mail the entire week. A few people even expressed downright concern–was everything OK? Was I moving early? Was I burned out and just wanted to get away from it all for a while…?

No, Ted just discovered first-hand how addictive a broadband connection can be, and how dangerous hosting your own email and web server from that broadband connection can be when that connection goes down while you’re 8,000 miles away.

I left for London on Saturday (to teach a Web Services .NET class, for the record), and by the time I landed at Heathrow on Sunday, my site was down, more or less permanently, until Thursday. We’d been having some flakiness with it already before I left, but it was a sporadic thing; we had SBC come and look at the line, but the line looked fine. The DSL modem might have been suspect, or the router, but major power spikes aside, it’s hard to imagine what might have killed a piece of electronic equipment that has zero moving parts. My router was doing some strange things on Thursday (at one point it went into its best imitation of a blue screen, going into a continuous reboot cycle until I powered it off and on again), but it was again pretty sporadic and I just chalked it up to some random static.

By Monday, it was pretty apparent it wasn’t static.

Here’s where things get fun: as an exercise for the reader (which is the ubiquitous catch-phrase of authors meaning "I don’t want to figure it out myself, so YOU do the work and I’ll claim the credit for having posed such a cool problem to you"), debug, from an airport 8,000 miles and eight time zones away, a flaky Internet connection. You can’t Telnet into the box, you can’t VNC into the box, you can’t even ping the box (because you shut ping down at the firewall for security reasons).

Monday, Tuesday and Wednesday were spent more or less round-robining between the local ISP and Southwestern Bell (SBC), each trying to determine why it was the other guy’s fault. Finally, the local ISP, Omsoft, took it upon themselves to come inside the house and have a look at the DSL modem (which they replaced), then come back a second time (to replace the DSL modem that burned out after one day), then finally a third time (to discover it was my router, at which point they just plugged the server into the DSL modem directly). By Friday, the server was alive, even though it wasn’t accepting HTTP requests. (Somewhere in all the wackiness the Web service got stopped.)

Needless to say, I’ve learned my lesson–www.tedneward.com, my new "professional" domain and future home of the technical weblog (so this can remain a more personal one, though it may still talk about technology at some levels, since that’s so much a part of my life), will be professionally hosted, so that outages become THEIR problem and not mine. Hopefully they won’t go on vacation to London anytime soon….

Oh, and class was great, thanks. 😉

Next technology, I want it to be about something OTHER than reuse

I was reading an article by Aaron Skonnard (of Pluralsight) in MSDN Magazine while on a break from teaching last week, this one a sort of general introduction to what SOA was supposed to be. No offense to Aaron, but I was struck by how the opening paragraphs of his article sounded suspiciously familiar–no, he wasn’t plagiarizing anybody, but it was those catch-phrases, the ones we’re all so painfully familiar with, that really leapt out at me. SOA was supposed to foster better modularization and encapsulation, enable re-use, and so on.

I’m curious: just how many technologies are created that DON’T foster re-use at some level?

A quick Google search reveals that "Enable reuse" turns up 2M hits, and the top two links?

    Catalogs. Catalogs enable reuse and create efficiency. Automation modules are composed of objects, which are derived from catalogs. (From www.kw-software.com, whatever that is.)

Wow. Catalogs enable reuse. AND create efficiency. Gotta get me some of that!

The next link I found even better:

    … "The key to value-added custodial servicing in this market is to enable re-use of existing market standards and practices – particularly by those buy-side …

was from a site labeled as SWIFT. Custodial servicing. Enabling re-use of existing market standards and practices. Custodial servicing.

Is there anything in this world that doesn’t enable reuse? The remaining top ten links cite "Web services", "Tests and Testing Methodologies", "Web Parts" (from MSDN), "Want to service-enable your enterprise? Model first!" (from TechTarget), "— provides effective solutions that enable people reuse structured content across networks", "Enable code reuse" (through .NET Reflection), and of course, lest we leave THEM out of the picture, "Migrating CORBA 2.x applications to CCM", which tells us that "… CCM based applications will enable "Reuse" and "Assembly" that will be the key to productivity enhancement, software longevity and cost reductions." Across ten hits, we spanned five or six different technology platforms. And the only reason we didn’t fall as far back as C or COBOL links is because those are "dead" languages, despite having more lines of code written in them than Java and .NET combined.

Are we starting to get a clue yet? If the problems with our industry were all about enabling reuse, then either we’re all collectively pretty stupid, or else the problem isn’t about enabling reuse. Or modularization, or strong typing or weak typing or strong coupling or weak coupling or….

Do I know what the problems that plague our industry are? Nope. Or rather, I guess I can list the symptoms off as well as the next guy, but that doesn’t mean I have any deep insights as to what solves the problem. But just to try something new, the very next application I write, I’m going to forget about "enabling reuse" and see how far I get.

Who knows? Somebody else might find the approach useful.

Speaking schedule for the next month or two

More than a few people have asked me already what shows I’m speaking at this coming year, and the short answer is that there is no short answer. 🙂

First stop on the 2005 speaking circuit for me is TheServerSide Java Symposium 2005, March 3-5, in Las Vegas, where I’m doing two talks ("Effective Enterprise Java", of course, and "WS-Peril: The Perils of Web Services"). I’m looking forward to hanging out with my former gang from TheServerSide, and seeing how TechTarget is treating them these days. Of course, I’m also looking forward to seeing Rod, Gregor, Jason and Adrian again, as one of the great benefits of the TSSJS show (at least, based on my impressions from last year) is that they do a great job bringing together some of the best thinkers in the Java space into one place. Some of the hallway conversations at last year’s TSSJS were just phenomenal; I in particular remember one with Rod Johnson over Spring’s role in the Grand Scheme of Things, and Rod’s frank admission that they weren’t trying to completely replace EJB, just provide something other than bazookas to kill roaches (my words, not his). 🙂

Almost as soon as I get to Vegas, I’m off to Milwaukee for the first of this year’s No Fluff Just Stuff shows, which promises to be another whirlwind ride. I’ve tentatively committed to doing every(!) NFJS show on the schedule this year, which should be an interesting year. Of more interest to most readers of this weblog is that this year marks a significant change to the NFJS format: we’ve added a .NET track, shepherded by yours truly, which will include talks from myself, some of the existing NFJS speakers (Justin Gehtland and Venkat Subramaniam, to name two), and two new NFJS players: Cathi Gero and Rocky Lhotka. It should make the NFJS speaker panels that much more interesting, since .NET brings a different perspective on architecture than the traditional Java viewpoint, and Rocky in particular has been on a tear to talk about architectural models for a while now. As for me, I’m talking about Effective Enterprise Java again, but also some new .NET talks on Indigo and C-Omega, among others. I’m looking forward to both a lot, particularly C-Omega, since I think XML and relational integration into the language is the Next Big Thing for programming languages; should draw some interesting comparisons to Groovy.

Then, life gets doubled up for a bit as both the patterns and practices Summit West and SD West 2005 are held in the same week in roughly the same location, in Mountain View. I’m speaking at both: I’m doing a half-day Effective Enterprise Java tutorial and a 90-minute "Web Services without the bleeding edge" talk at SD West, then a "Communication Design Patterns" talk at the Summit along with a "Patterns Workshop" with Gregor Hohpe and Ward Cunningham, which has got me all tingly–we’re going to workshop a pattern and add it to PatternShare, the online patterns repository that Ward’s championed at Microsoft. (How much do I love this job? How often do you get a chance to workshop a pattern with two of the Big Patterns Guys? Wow.) Coupled with a Birds-of-a-Feather session on Web services with Michele Bustamante and Elliotte Rusty Harold at SD West, the week should be chock-full of all kinds of opportunities for me to get brought up short on all kinds of things. 🙂

And that’s just March. 🙂

Christian starts chatting up Indigo

OK, so Indigo is officially out of the closet…. somewhat. And Christian Weyer, who’s been a part of the Indigo SDR team for about as long as I have (perhaps longer, dating back to TechEd of last year), has started commenting on the Indigo programming approach, starting with the explicitness of Indigo’s contract model. In it, he says:

    What really annoys me is that nearly everybody seems to ignore the interoperability aspect of services (and Web Services…) nowadays. If you (try to) do serious interop in your daily work, you won’t even bother starting with C# or VB or Java… and yes: I know that you might want to see C# interfaces and some square-bracketed attributes as one valid way to write down your service’s interface description.

As he points out (quoting Steve Vinoski as he does), going at an interoperability problem from the perspective of a programming language inevitably leads to failure, because few programmers have the discipline to avoid the trap that Steve mentions:

    When you start with the code rather than the contract, you are almost certainly going to slip up and allow style or notions or idioms particular to that programming language into your service contract. You might not notice it, or you might notice it but not care. However, the guy on the other side trying to consume your service from a different implementation language for which your style or notions or idioms don’t work so well will care.
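Just to make the trap concrete, here’s a minimal code-first sketch (mine, not Christian’s or Steve’s, and the service is entirely made up) of the kind of thing that looks perfectly innocent in Visual Studio and turns ugly on the wire:

    using System.Data;
    using System.Web.Services;

    // A hypothetical code-first ASMX service, written the "natural" .NET way.
    public class OrderService : WebService
    {
        // DataSet is a perfectly reasonable return type... in .NET. The WSDL
        // generated from it describes the DataSet's own serialization format,
        // which a Java (or Perl, or Python) consumer gets to reverse-engineer.
        [WebMethod]
        public DataSet GetOrders(string customerId)
        {
            DataSet orders = new DataSet("Orders");
            // ... fill from the database ...
            return orders;
        }
    }

Nothing about that code screams "interop problem", which is precisely the point: the platform idiom slid into the contract without anybody noticing.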

But unfortunately, it gets worse.

I’ve come to recognize that we’re still looking to XSD and WSDL to be a sort of "IDL for interoperability", and the problem is that both Schema and WSDL allow for things to be defined that don’t fit our existing programming languages. Facets on schema types, in particular, create a special brand of Hell for the schema-to-class automatic translators, and the fact is, most tools (xsd.exe and all of the Java-based toolkits I’ve tried thus far) basically punt on them. Which leads us to a nasty discipline problem all over again: not only do you have to be disciplined enough to keep the styles or notions or idioms of a particular programming language out of your service, but you also have to be disciplined enough to avoid the styles or notions or idioms of the wire-level representation that don’t map back into code, or the consumers of the message on both sides will find it awkward to work with.
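To see what "punt" means in practice, here’s a made-up fragment (not from any real contract) and, roughly, the class the schema-to-class generators hand back for it:

    // Suppose the contract says (schema shown as a comment, since the
    // generated code is the interesting part):
    //
    //   <xs:element name="accountNumber">
    //     <xs:simpleType>
    //       <xs:restriction base="xs:string">
    //         <xs:maxLength value="10"/>
    //         <xs:pattern value="[A-Z]{2}[0-9]{8}"/>
    //       </xs:restriction>
    //     </xs:simpleType>
    //   </xs:element>
    //
    // What tools like xsd.exe typically give you back is, in essence:

    public class Account
    {
        // The maxLength and pattern facets are nowhere to be found; as far
        // as the generated type is concerned, any old string will do, and
        // enforcing the contract becomes the programmer's problem again.
        public string accountNumber;
    }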

How do we avoid this? At the moment, there’s less concern for the latter than the former, but as people start to go with a more "contract-first" approach, we either have to bury the angle brackets behind a subset of what schema can offer (and thereby piss off the angle-brackety crowd), or else we have to just publish mounds of text and patterns [1] describing to people what to avoid and when.

Neither one sounds truly appealing, which leaves a third option, one far more radical but infinitely more satisfying: just ratify and stamp with approval the current programming practices, and introduce XML and schema types as first-class citizens into our programming languages.

[1] I categorically refuse to use the term ‘best practices’ because THERE ARE NO SUCH THINGS! Anyone who talks about ‘best practices’ is basically saying they’ve never considered the alternatives or why a particular approach is good or bad. A given solution never has all-good or all-bad consequences, but a mix of both, and how can you judge which consequences are good or bad when you take the solution out of the context of a problem that needs to be solved? DEATH TO BEST PRACTICES!

 

Dare points out the object-hierarchical impedance mismatch

No sooner do I finish ranting about the discipline required in going with a contract-first approach than I start going through my RSS feeds and find that Dare has already made the same point. He also points out something I think is *truly* significant:

    This isn’t theoretical, more than once while I was the program manager for XML Schema technologies in the .NET Framework I had to take conference calls with customers who’d been converted to the ‘contract first’ religion only to find out that toolkits simply couldn’t handle a lot of the constructs they were putting in their schemas. Those conversations were never easy.

Having done a few of those contract-first conversions of customers myself, I can attest to the uneasy conversation. You know that this is by far the best approach to Real Interop between platforms, but you still feel somewhat slimy: "Doing this yields the best interoperability, but you gotta be careful not to do this… or this… or this… and for God’s sake, don’t do that… or that… or this, either… Oh, and when we codegen all this, all the schema rules we put into place will probably be stripped off by the code-generating utilities we run the schema through, so we’ll probably need to do some validation by hand…." I’m always worried that a client is going to look me in the eye and ask if I’m just trying to create more work for them so they have to keep me around longer as a consultant. Ouch.
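For the curious, here’s roughly what that "validation by hand" ends up looking like in .NET 1.x terms–a sketch, with made-up file and namespace names:

    using System;
    using System.Xml;
    using System.Xml.Schema;

    public class IncomingMessageValidator
    {
        public static void Main(string[] args)
        {
            // The generated types won't enforce the facets, so we re-check
            // the raw message against the schema before handing it off.
            XmlTextReader raw = new XmlTextReader("purchaseOrder.xml");
            XmlValidatingReader validator = new XmlValidatingReader(raw);
            validator.ValidationType = ValidationType.Schema;
            validator.Schemas.Add("urn:example:orders", "purchaseOrder.xsd");

            // Report violations instead of letting the first one throw.
            validator.ValidationEventHandler += new ValidationEventHandler(OnValidationError);

            // Validation happens as a side effect of reading the document.
            while (validator.Read()) { }
            validator.Close();
        }

        private static void OnValidationError(object sender, ValidationEventArgs e)
        {
            Console.WriteLine("Schema violation: " + e.Message);
        }
    }

Anyway, Dare continues: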

    The main thing people fail to realize when they go down the ‘contract first’ route is that it is quite likely that they have also gone down the ‘XML first’ route which most of them don’t actually want to take. Folks like Tim Ewald don’t mind the fact that sometimes going ‘contract first’ may mean they can’t use traditional XML Web Service toolkits but instead have to resort to SAX, DOM and XSLT. However for many XML Web Service developers this is actually a problem instead of a solution.

Now, I’m not sure I agree 100% with Dare that this is a problem instead of a solution–I think this is a challenge, to be met with the right tools for the job, and again, I think that tool is the integration of XML into our programming languages. If XML becomes a first-class construct in the language(s), then it becomes MUCH more approachable to take an XML-in/XML-out approach to doing this whole Web service (or whatever we’re going to call it in five years, because I’m pretty sure "Web services" as a term is done) thing.
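Even with today’s plumbing you can get a taste of the XML-in/XML-out style, if you’re willing to give up the comfort of generated types–a sketch, with invented names, in ASMX terms:

    using System.Web.Services;
    using System.Xml;

    public class MessageEndpoint : WebService
    {
        // Take the raw XML in, hand raw XML back: no generated proxy types,
        // no mapping layer, just the message the sender actually put on the
        // wire and whatever we decide to do with it.
        [WebMethod]
        public XmlElement Process(XmlElement request)
        {
            XmlDocument response = new XmlDocument();
            response.LoadXml("<ack xmlns='urn:example:messages'>received</ack>");
            return response.DocumentElement;
        }
    }

It’s not pretty, and it’s exactly the SAX/DOM/XSLT grunt work Dare is talking about, which is why pushing XML down into the language itself is so appealing.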

Sleeper hit of the year

No, it’s not the new John Grisham book (there’s always a new John Grisham book), nor the dare-I-hope-this-one-doesn’t-suck Star Wars III film. No, this one is the book I just picked up at the local bookstore after watching it get raffled away at VSLive! a few weeks ago: Customizing the Microsoft .NET Framework Common Language Runtime, by Steven Pratschner. Whether you are a .NET developer or Java developer, you should read this.

One of the sleeper features of .NET 2.0 has always been the incredible number of "hook points" baked into the 2.0 CLR, some of which were needed by (for example) the Yukon team in order to bake the CLR into the database, much as Oracle and DB/2 did with the JVM a few years ago. (In fact, IBM has a version of DB/2, code-named "Stinger", that’s going to embed the CLR in much the same way as Yukon and the JVM-embedded databases do, as well.) SQL Server has always kept a tight grip on memory management, for example, and needed to exert tight control over how assemblies get loaded into the database, in order to support traditional database semantics. The baseline feature set of the CLR just wasn’t gonna cut it.

So the CLR team paused, sat down, and wrote a smashing set of COM interfaces (I know, I know, COM is dead, yadda yadda yadda, but the reality is that it will remain the underpinning of the CLR forever, just as the underpinning of the JVM is C++) that allow a CLR host–that is, any unmanaged code that wants to call CorBindToRuntimeEx and obtain an ICLRRuntimeHost*–to effectively take over parts of how the runtime functions, including (but not limited to):

    assembly loading: Don’t like the way the current assembly loading scheme works? Write your own
    failure policy: How the CLR handles failures, such as exceptions
    memory: ‘Nuff said
    threading: Ditto
    thread pool manager: Just about everybody I’ve taught asynchronous delegates to immediately asks, "Is there any way I can control the size of the system thread pool?"
    garbage collection: force GCs, collect statistics about GCs, and so on. No, this is not a replacement for finalizers (but it might help)
    debugging
    CLR events: "information about various events happening in the CLR, such as the unloading of an application domain"

As a self-proclaimed "plumbing guy", I’m already smacking my lips.

But why do you care?

    You may have a system that turns out to want a wee bit more control over how thread-switches take place. (Games, for example, want some more precise control over quanta, I’m told.) Create a threading manager that uses cooperative fibers instead of raw Windows threads to do that.
    You may want to have something that just "listens" to the application and reports failures and/or other events to a management interface without requiring intrusive coding inside the app. Write a "host" process that kicks off the code in the usual way after establishing an "events host" that listens to the events in question, and potentially discards the old app domain and starts up a new one every 24 hours (if necessary). (A rough sketch of the app-domain-recycling part appears just below this list.)
    The classic one: you want to replicate the "-ms" option of the JVM, which tells the JVM to start with "x" amount of memory to begin with, rather than have to go through a series of "I’m-out-time-to-ask-the-OS-for-memory" cycles that the CLR normally goes through when spinning up apps that consume, say, 512M or 1GB of heap. (I know a financial firm in San Francisco that wanted this two years ago.)

And so on, and so on, and so on.
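As a matter of fact, the app-domain-recycling part of that middle bullet doesn’t even need the new unmanaged hosting interfaces–plain old System.AppDomain will do. A rough sketch, with an invented assembly name (the "events host" half is where the 2.0 hosting hooks come in, and isn’t shown):

    using System;

    public class RecyclingHost
    {
        public static void Main(string[] args)
        {
            while (true)
            {
                AppDomain worker = AppDomain.CreateDomain("worker");

                // Run the real application inside the child domain. In this
                // sketch the app is assumed to exit (or be told to shut down)
                // roughly once a day; when it does, we junk the whole domain--
                // leaked statics, stale caches and all--and start fresh.
                worker.ExecuteAssembly("TheRealApp.exe");

                AppDomain.Unload(worker);
            }
        }
    }

The new hosting interfaces are what let you do this same kind of thing to the parts of the runtime that managed code can’t reach.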

Why should you care if you’re a Java guy? Because these kinds of hooks are necessary if Java is to keep up, number one, but also because these kinds of hooks would allow for commoditization of the JVM itself, creating a market for graduate students and entrepreneurs to create customized memory management algorithms, for example, that right now require those same entrepreneurs to create an entire JVM. The open-source world would have a field day, creating all sorts of vertical plug-ins at the JVM level that we could pick-and-choose, selecting whichever ones happen to fit our needs best–including a very-real, very-credible memory allocator that just keeps allocating until we run out of room in the heap (for short one-shot gotta-run-as-fast-as-frickin-possible batch jobs, for example).

You want to read this book. And in the meantime, I can’t wait ’til Rotor Whidbey ships, because this is the kind of stuff I want to rip apart. 🙂

BTW: One very reasonable possibility for a custom host is a Java-like launcher (clr.exe?) that allows you to pass some customization options on the command-line, just as java.exe does. We would use it solely in cases like -ms/-mx, to enable an otherwise normal .NET app to have some of its environment customized via the command-line rather than having to write our own managed host. Hmm…. On the surface, this wouldn’t represent too much work to toss off in a weekend….

One noticeable thing missing from the list above, though: JIT compilation, and I think I know why. (Because in the CLR, JIT compilation and bytecode verification are tightly interwoven, and would be almost impossible to tease apart into anything pluggable. Still, I’d love to see it, guys….)

 

Rocky thinks I’ve gone over the edge

In his inimitable style, Rocky weighs in on my recent web services rants:

    I think this is all wrong-headed. The idea of being loosely coupled and the idea of being bound by a contract are in direct conflict with each other. If your service only accepts messages conforming to a strict contract (XSD, C#, VB or whatever) then it is impossible to be loosely coupled. Clients that can’t conform to the contract can’t play, and the service can never ever change the contract so it becomes locked in time.

    Contract-based thinking was all the rage with COM, and look where it got us. Cool things like DoWork(), DoWork2(), DoWorkEx(), DoWorkEx2() and more.
    Is this really the future of services? You gotta be kidding!

Loose coupling isn’t about loosely-bound calls, but there is some common ground, and Rocky, I hate to say it, but if we want to avoid the tightly-coupled nature of RPCs, then yes, this is the future of services.

Look, in the beginning, there was a maxim that drove a good deal of how the Internet itself evolved: "Be conservative in what you send, be liberal in what you accept." It’s attributed (as best I can ascertain) to the late Jon Postel, who was saying that if we really want this world-wide network thing to work, really work, then we need to get off the lawyering-prone nit-picking your-stuff-put-an-extra-semicolon-where-there’s-not-supposed-to-be-one soapbox and start thinking about ways we can accept data even if it’s not quite perfect.

Look at it this way: dspite th obvius errs n ths sentnc, you stll gt th idea, ya? This is because the human brain is wired in exactly that way: liberal in what it accepts, logically trying to piece together parts into a coherent whole that fits. Is it easier to simply force a strict validation on incoming data? Sure it is, and that’s the default mode of most Web services toolkits–"You conform to this schema or the call fails." But what happens when you want to evolve that format later, because one of your clients wants to add some new data that your other clients/partners aren’t prepared to handle? "Um, guys….? Would you mind recompiling against a new WSDL contract, please?" Or maybe, "OK, so we’ve got one endpoint for each of our clients, and now two of them want 7 features, 5 of which are identical but the last two are different. Do we cross-wire URL endpoints back and forth between them, or duplicate the code across two different endpoints, or…?"

Just send me an XML message that (more or less) conforms to a format–whether described by Schema, RelaxNG, or just plain-old-whiteboard-chicken-scratch–and I’ll figure out what to do with it from there. It’s worked thus far for the Internet, and I’m betting that it’s going to take us a lot farther than if we try to create One True Schema that everybody who cares about this service must obey. (Because, as anybody who’s ever tried to develop said schema amongst a collection of partners and vendors knows, it’s almost impossible to get that right the first time, and even more impossible to revise it later.)
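For what it’s worth, here’s the kind of "I’ll figure out what to do with it" code I have in mind–a sketch with invented element names, pulling out just the parts this particular service cares about and shrugging at everything else:

    using System;
    using System.Xml;

    public class OrderIntake
    {
        // Accept anything that smells like an order: dig out the two things
        // we actually need and ignore whatever else the sender tacked on.
        public static void Accept(XmlDocument message)
        {
            // local-name() keeps us liberal about namespaces, too.
            XmlNode customer = message.SelectSingleNode("//*[local-name()='customerId']");
            XmlNode total = message.SelectSingleNode("//*[local-name()='orderTotal']");

            if (customer == null || total == null)
            {
                // Now we genuinely can't figure out what to do with it.
                throw new ArgumentException("No customer or total in the message.");
            }

            Console.WriteLine("Order from " + customer.InnerText + " for " + total.InnerText);
            // Extra elements a newer client added? They sail right past us,
            // at least until the day we decide we care about them too.
        }
    }

New clients can add data, old clients can keep sending what they’ve always sent, and nobody has to recompile against a new contract just because somebody else’s requirements changed.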