Wednesday, May 30, 2007

Google Mapplet of the Carbon Counter...

Okay, now that was ridiculously easy... about 30 minutes of work while doing my expenses (in other words, the 30 minutes that kept me sane through the process) and I'd turned the Maps piece into a Mapplet, which is nice. For those that don't know, "Mapplets" are one of the two big releases from Google this week, the other being the truly amazing "Street View".

Anyway, a quick bit of hacking around and, thanks to having the backend reasonably well done, up popped the Mapplet.
All this takes is a simple Carbon Counter Mapplet XML file. Feel free to give it a go.


Monday, May 28, 2007

SOA and the Swiss Railways - or another reason why processes aren't everywhere

I've just come back from a very pleasant little trip to Switzerland with the family (family friendly doesn't even begin to sum up the country... even the TRAINS have play areas!). The Swiss train system is rightly world renowned for its punctuality and co-ordination; we had 4 trains to catch to cross the country (Google Earth map) and the changes between trains were 10-15 minutes at each of the stations.

Why is this relevant for SOA? Well here is an example of an incredibly efficient batch system, based around time synchronisation, which is therefore able to move large numbers of people around successfully. A "real time", process based solution would be the car: start at A, do lots of steps, arrive at B. But the railway system, which is much more event based, is able to achieve the same end game (indeed it reaches parts of Switzerland that cars cannot) and to do so in a way that reduces stress and increases efficiency.

So when you are looking at replacing those old Batch processes which look so inefficient, ask yourself whether they are batch processes because that is all that could be done, or whether you have a Swiss Railway implementation that you are about to replace with a butt ugly SUV.


Tuesday, May 22, 2007

Google Maps and caching RSS feeds

I just decided to update the simple trip to include all the train stops between Geneva and Wengen, and while the RSS feed now shows the new entries, the Google Map is still showing the entries from prior to the change. This might explain the slightly slower initial load time and the subsequent speed: it appears that Google is caching the document rather than re-fetching it every time; indeed they aren't even checking whether the document has been updated.

Which raises an interesting question about using Atom and RSS, and indeed HTTP, because while you can indeed set a cache request (or no-cache request) on the document, there is nothing to stop the other end making an arbitrary decision. This certainly makes Google quicker to respond to subsequent requests, but it doesn't make the information more reliable. Now clearly the folks at Google have made a smart trade-off between performance and information latency, but by making that trade-off they've made my service less responsive.
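
Just to make the point concrete, here is a minimal sketch of setting cache hints on the feed response; this assumes the feed comes out of a servlet and uses a hypothetical feed helper, so it is illustrative rather than the actual code behind this blog. The other end is still free to ignore every one of these headers.

// ask nicely not to cache, and support conditional GETs - the consumer can still ignore both
response.setContentType("application/rss+xml");
response.setHeader("Cache-Control", "no-cache, must-revalidate");
response.setDateHeader("Last-Modified", feed.getLastUpdated().getTime());
response.getWriter().write(feed.toXML());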

What a cracking example of the problems of the networked world... easier my arse.



Creating a KMZ from KML in Java using Zip

This is a very quick post just because I didn't find something when searching that did exactly what I wanted.

Scenario: I have an XML doc (that I have converted to a String) which represents the KML file, and I want to return a zipped version of that file to save on bandwidth and the like. First of all you need to have somewhere to get the KML file from (in this case counter.getKML()).
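
For context, something like the following is the rough shape of what such a method might return: a minimal KML document built up as a String. This is purely illustrative rather than the actual counter code, and the KML namespace/version may differ.

public String getKML() {
    // build a minimal KML document with a single placemark (coordinates are lon,lat,alt)
    StringBuilder kml = new StringBuilder();
    kml.append("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n");
    kml.append("<kml xmlns=\"http://earth.google.com/kml/2.1\">\n");
    kml.append("  <Placemark>\n");
    kml.append("    <name>Wengen</name>\n");
    kml.append("    <Point><coordinates>7.921,46.605,0</coordinates></Point>\n");
    kml.append("  </Placemark>\n");
    kml.append("</kml>\n");
    return kml.toString();
}

With the KML String in hand, zipping it up into a KMZ is then just: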


// ZipEntry and ZipOutputStream come from java.util.zip in the standard library
response.setContentType("application/vnd.google-earth.kmz");
// a KMZ is just a zip archive containing the KML document
ZipEntry entry = new ZipEntry("geoblog.kml");
// wrap the servlet response stream so the zipped bytes go straight back to the client
ZipOutputStream zipOutputStream = new ZipOutputStream(response.getOutputStream());
zipOutputStream.setLevel(9); // maximum compression
zipOutputStream.putNextEntry(entry);
zipOutputStream.write(counter.getKML().getBytes("UTF-8"));
zipOutputStream.closeEntry();
zipOutputStream.close(); // finishes the archive and flushes it to the response


And that is it. Mind-bogglingly simple.


Mashup my blog - from blog to maps

A while back I was playing with Yahoo Pipes after Geo-ripping Wikipedia. This was all part of my ongoing fight against just doing Word, PowerPoint and whiteboard consulting, and generally making sure that I still understand what I am talking about.

So the next challenge was obvious: take the RSS feed and put it up on a Google Map. Hey, it's a mashup, and we all know that they MUST use Google Maps :) For this challenge I decided to abandon Yahoo Pipes for a while (mainly because it absolutely NAILS the server if you are looping through a big blog feed) and do it in the old "traditional" way of processing the data in the single lump it comes in (the RSS feed).

My task was as follows:
  • Put the geo-tagged feed onto a map with the Google Maps API
  • Generate a KML file for Google Earth
  • Use Ajax in some way
I then added another one (why not), which was to use the Google Maps API to actually do a search on those places I hadn't managed to rip from Wikipedia. Interestingly, Google Maps doesn't enable you to search on the UK, so having the Wikipedia information actually helped there.

So anyway, after a few days of "downtime" working and lots of cursing about Javascript, I managed to get it all up and running. The Map your blog page needs an RSS or Atom feed consisting of posts that have (at this stage) just the placename in the title. It will then (after a shortish wait) display the map, a summary of the trip information (the distances between blog items) and the carbon footprint of that travel based on the mode of transport (again a single tag for the transport type at this stage). The final challenge I set myself was that all of the display elements would be done in a single page, with no round-trips to the server... no real reason, I just wanted to try it.
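
For the curious, the trip maths is roughly along these lines. This is a simplified sketch rather than the actual code behind the page, and the per-kilometre emission factors are illustrative guesses, not the figures the page uses.

// great-circle (haversine) distance in km between two geo-tagged blog posts
static double distanceKm(double lat1, double lon1, double lat2, double lon2) {
    double dLat = Math.toRadians(lat2 - lat1);
    double dLon = Math.toRadians(lon2 - lon1);
    double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
             + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
               * Math.sin(dLon / 2) * Math.sin(dLon / 2);
    return 2 * 6371 * Math.asin(Math.sqrt(a)); // 6371 km = mean Earth radius
}

// very rough kg of CO2 per passenger-km by transport tag - illustrative numbers only
static double carbonKg(double km, String mode) {
    if ("plane".equals(mode)) return km * 0.15;
    if ("car".equals(mode))   return km * 0.12;
    if ("train".equals(mode)) return km * 0.05;
    return 0; // walking, cycling or unknown
}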

To get you going, try this simple UK to Wengen trip, or there is a much messier one that goes all over the globe; that latter one takes quite a while for Google to render. If you don't see it after 30 seconds just hit the button (I haven't done a "waiting for Google" bit yet), and if you get bored just hit the button again... Over on the right are a bunch of links, including one to the KMZ file, which should pop up directly in Google Earth (if you haven't got it, download it).

I have to say it was really rather easy, apart from the Javascript issues. I used the Yahoo Javascript libraries to do the initial minor bit of Ajax in there, and Google Maps is fine. The particular piece I liked was turning a KML file into a KMZ file in Java, which was worth a quick post on its own because of how easy it was.

Now I feel I can safely attack PowerPoint for another couple of weeks.


Sunday, May 20, 2007

Javascript, like C but without the rigour

I've been trying out some of the AJAX pieces and adding in some Google Maps functionality (of course) to the geo-ripping code base which has required a smallish bit of Javascript coding.

It really is like stepping back in time. I've had my comments already about static v dynamic languages, and these last few days of writing Javascript have brought back to me all of the reasons why I disliked C. I loathed C as a programming language (and note this is not through lack of use, I was an XWindows/Motif/PEX programmer for five+ years). It was bad enough hunting down obscure errors in my code at the best of times, but what I really hated was the "tick tick BOOM" problem of C. Code wouldn't fail with a nice exception like in Ada, it would just die with a great big core dump. This meant lots of "printf" statements or using a textual debugger (and I'm not talking about gcc and its tools here), which tended to introduce different behaviour because of the changes in memory allocation.

Then there was the code from the muppets: the people who cut and paste code from one place to another, and you got slammed because now you were calling THEIR function instead of your own when it got deployed into system test.

Well Javascript is all that and so much more. I've been playing about with various different scripting libraries and finding oh-so-interesting cases where the names of functions clash, leading to the age-old shouting at the monitor of "It is exactly the same as in the F&@KING example", but of course how do you know WHICH script you are calling?

Then we get the problem of a slight syntax error breaking everything... nice, reminds me of the old "make" scripts.

I know there are libraries out there to take this stuff away, and I sure as hell will be using them next time, because quite frankly the idea of coding in Javascript makes XSLT coding seem almost benign in comparison.

I was against scripting being added by default into Java SE 6, and this brief experience with Javascript has just reaffirmed my reasons. Sure scripting can help occasionally, but it should be a long way from that to considering it the default. The move towards treating Javascript as a compiler target is a good one IMO, and long may those tools continue to develop, otherwise Web 2.0 will be DOA as soon as it hits support.

Coding in Javascript is worse than coding in C... there are not many languages that can make that claim.


Friday, May 18, 2007

I want a magic mouse

Nothing to do with SOA, but for about the millionth time in my career I've just been annoyed by the inability of mice to transfer "cut and paste" between two machines.

Basically I have my big monitor on one computer which does the searches and my Gmail etc and I have the dev station where I'm doing some coding to stop my brain going to mush while I am a bit ill (Shingles). The scenario is simple. I search on one machine, I find a good link and a good page and I want to cut and paste the link from one computer to another.

Yes I know that the mouse is a "physical" device, but my mind thinks of it as the thing that controls the pointer on the screen I am currently looking at, so why should it matter for cut and paste? Now back in the old XWindows days I would often have 2 machines or more on the go, but I would have Emacs running on both boxes at the same time so there was never a problem.

I want a magic mouse that can link to two computers, respond based on where I am looking and just work like I want.



Tuesday, May 15, 2007

CRUD is CRAP

  • Create
  • Read
  • Update
  • Delete

The single most depressing thing for me in IT is how many applications are really just Mainframe data processing solutions with better screens. Applications for which the word "intelligence" is limited to data validation and there is no actual algorithmic or interactional element to the system beyond just data being lobbed into storage.

Looking at the latest raft of CRUD "tools" for .NET, Ruby, Java and the like really is pretty depressing, not so much because they are bad (they aren't) but because people seem to still be insisting on hand-coding this dull and uninteresting crap and looking for yet more ways to "optimise" their code for a task that should be tooled.

Sure there are occasions where you can't tool the CRUD bit because it won't fit into the rest of the application, but are you really sure? Or is it just that it would look a bit "ugly"? Worst of all is the fact that many tools still can't handle "complex" elements like foreign keys and only want to deal with single tables.

CRUD is dull, boring and uninteresting. Can we please just get this stuff tooled and move on to the interesting stuff?


Google and semantics without meaning

Okay, when I wrote the post the other day questioning whether the current Semantic Web approach was the way to go, I didn't know about this. But thanks to El Reg I've now got a nice reference point for what I mean. The rather smart chaps at Google claim to have the best human language translation around, even for languages that they don't understand. Now, in comparison with matching an address line to another address line, the task of translating one human language to another seems to me (with my one double unit at university on semantics, linguistics and computer representation of knowledge) to be a much bigger challenge.

So are formal ontologies the way to go, or smart contextual searches? Me, I'm backing the searches.


Monday, May 14, 2007

Is there a problem for the Semantic Web to solve?

Now I've blogged before about the "semantic" web not really being semantic for web services, but I've been thinking even more about the problem that the semantic web, with its descriptions of information, tries to solve, and I'm really not convinced that from a business perspective this is something for anyone to be worrying about today. Sure, the concept of "automatic" consumption and transformation sounds beguiling at first, but isn't this pretty much the same vision that was promoted around UDDI at the start of Web Services?

What I mean here is that the thing it tries to solve is people's understanding of information, and to automate that process. From some reviews I've done recently, and a conference or two I've attended, the accuracy of these transformations is still pretty ropey, and they are more about helping people at design time than being something you would rely on in a production runtime.

So really here we have a way of giving people "hints" about what a given field means, to help them understand what it maps to and maybe make a suggestion that might, or might not, be accepted. But are RDF/OWL and the like really the way to go about this? Or should we think more in terms of the sort of free-form association that Google gives us? What I mean here is, think about the way Google Maps works with "Hotels near London", where it looks for the term "hotel" and a geo-location that is around London (another inference); in effect they create a semantic tree for those terms based on the probability that this is what you meant.

Now I've never used an RDF file to help me describe "Hotel" or "London" to Google; I'm just relying on it having built up a contextual reference that means it takes a good guess at the answer.

So are RDF and OWL really required? Or is the solution to have a Google contextual search?


Thursday, May 10, 2007

Engineering v Art - the challenge of the masses v the talent

There has been more discussion recently around the contracts v late validation argument, and Stefan Tilkov has a brief position on how the REST v WS debate links in, along with the thought that maybe these really are two different worlds that will never agree. I agree with Stefan, who is on the late validation side, that this is a big divide and indeed something that has gone on in IT for as long as I can remember.

Where I do disagree, though, is on whether having these camps is a good or a bad thing. Now I'm clearly biased as I'm on the contract side, but I thought I'd put the case as to why contracts, enforcement and static languages represent the engineering approach to IT delivery, while dynamic languages and late validation are the approach taken by those who consider IT to be an art. This doesn't make the latter inherently wrong, but it does mean that it is hugely predicated on the talent of the person doing the job.

This for me is the problem. When I used to do a lot of "proper" User Interface design there was often talk about the challenges of WILI v KISS. Everyone knows KISS, everyone recites it as an empty mantra, but most people end up doing WILI. WILI is "Well I Like It" and is used as the rationalisation of lots of bad decisions; with UI design it was compounded by the "well Microsoft do that" school of justification (which is fine if you are doing a Word extension, but not if you are doing radar displays).

The same WILI and "appeal to authority" approach is part of the great Technology Delusion that runs straight through IT, and part of the problem is simply one of talent. If I could have Stefan and Mark Baker as the support team I'd have no trouble whatsoever doing REST; if I could have Larry Wall in support I'd be reasonably happy to have Perl code in there; and so the list of the truly talented goes on. Give me Dan Creswell in support and that Jini idea looks fantastic; hell, I'd even say go for a massive project using Spring 1.0 if Rod Johnson had to maintain the XML files.

Dynamic languages clearly appeal to people who genuinely understand how computer systems should work, as does pushing the bounds of technology and creating new and exciting approaches that are just a little bit better than what came before. These are the people for whom IT is an art form, something that requires imagination and craft, and to whom that improvement is important as they are trying to push the boundaries.

I've used dynamic languages, I mess around with them on my personal projects, but I never tend to recommend them on projects and indeed actively ban them when I am the architect.

So why do I choose to have strict contracts, static languages, early validation of everything and extremely rigorous standards applied to code, build and test? The answer was nicely highlighted by Bill Roth in his post around JavaOne Day 1: there are SIX MILLION Java developers out there. This is clearly a staggering number and means a few things.
  1. There are lots of jobs out there in Java
  2. Lots of those people are going to be rubbish or at least below average
But I think this Java group is a good cross-section of IT; it ranges from the truly talented in the stratosphere of IT down to the muppets who can't even read the APIs and so write their own functions to do something like "isEmpty" on a string.

The quality of dynamic language people out there has a similar profile (as a quick trawl of the groups will show), and herein lies the problem with IT as art. If you have Da Vinci, Monet or even someone halfway decent who trained at the Royal College of Art, then the art is pretty damned fine. But would you put a paintbrush in the hand of someone you have to tell which way round it goes and still expect the same results?

IT as art works for the talented, which means it works as long as the talented are involved. As soon as the average developer comes in it quickly degrades and turns into a hype cycle with no progress and a huge number of bugs. The top talented people are extremely rare in application support; that is where the average people live, plus, if you are lucky, a couple of the "alright" ones.

This is why the engineering, "I don't trust you" approach to IT works well when you consider the full lifecycle of a solution or service. I want enforced contracts because I expect people to do something dumb, maybe not on my project but certainly when it goes into support. I want static languages with extra enforcement because I want to catch the stupidity as quickly as possible.

But most of all I restrict choices because the number of people talented enough to make them is vanishingly small. I count myself lucky in my career because I've worked with some pretty spectacular people, but even in the best of cases they were around 10% of the project, and in none of those cases did they move into support. People will no doubt bleat that dynamic interfaces give some sort of increased flexibility; my experience, however, is that they just lead to a right pain in the arse which is a bitch to debug.

Unfortunately in IT the level of self-perception is not strong, so far too many people think they are at the top table when in reality they should be left in the sandpit. This leads to people taking on dynamic languages, late validation, multi-threading, async and the like with a single-minded belief that either
  1. they will do this because Expert X said it was the right thing to do, and they worship at the altar of Expert X, so even when it sucks they will say "but X says it's right so you are wrong", or
  2. they will do something because it looks easy, without understanding the consequences
This is the reason I am against dynamic languages and all of the fan-boy parts of IT. IT is no longer a hackers' paradise populated only by people with a background in Computer Science; it is a discipline for the masses, and as such the IT technologies and standards that we adopt should recognise that stopping the majority from doing something stupid is the goal, because the smart guys can always cope.

IT as art can be beautiful, but IT as engineering will last.


Friday, May 04, 2007

SOA isn't about technology

I've been having various discussions recently around SOA where a certain mindset is coming over loud and clear. That mindset is that SOA is just a technology thing, that it's about the specific technical end-points, and that above all these end-points are relatively fine-grained, from a business perspective, and are there to enable "BPM" to shine through as the one true way to work with the business.

I've said it before and I'll say it again... business process isn't everything. Right now this focus on BPM is driven by one thing and one thing alone: the fact that every single vendor's stack tops out at business process. Now most of these don't even have a decent way of handling services (i.e. one interface = one process = what a load of crap interfaces you have), but that is beside the point. What they are proposing is that the IT/business model looks like this:


It's a simple stack-based view of the world: business at the top, techy IT at the bottom and BPM as the medium for communication "at the business level". People who talk about this view tend to talk "bottom up", with "services exposing legacy and BPM orchestrating services". It's pretty amazing how this view just happens to match the product vendors' stacks, which means that either
  1. this is the end of IT product development, we have fixed it all, we are done, or
  2. it's bollocks
Now I'm backing option 2 on this list, because I've worked with lots of businesses in various sectors and the most successful ones were those that used processes as required but were primarily focused on goals and objectives, with the actual implementation being flexible.

This is where SOA really earns its keep, not as the bit that delivers the solution but as the contextual framework within which that delivery can sit. SOA, and in particular a business service architecture (BSA), is all about understanding the various different "blobs" of the enterprise, how and why they interact, and then choosing the right delivery approach for each service.

My view of the world has the BSA being important, but as a contextual framework. When you get down to implementation you are still going to think about the specifics of the requirements or demands on an area, and this means you will still have to speak to the business. The difference is that the BSA means you are talking within a business context where it has been decided that BPM/Technical SOA/GDA/EDA/People/Flying Monkeys/etc is the best way to solve that problem.

One size doesn't fit all, and BPM is not the culmination of all IT. The challenge in IT and business remains the same, namely getting a contextual framework within which the problem domain can be understood and then choosing the right way to solve that problem. BSA isn't a hammer; it's the plan that helps you decide when to use the hammer and when to use the saw.

BPM as the language of business is, IMO, snake oil. I've heard many CEOs report on how their business is doing, I've heard sales directors report on sales... and I've never heard any of them step through a process of their business to describe where they are at.

Think first, plan first, then decide the way to go. Starting with BPM is as silly as starting with WSDL.
