Thursday, July 27, 2006

SOA, BPEL, EDA, Java SE 6, Web 2.0 and the rise of async

There is one aspect of all the current buzzwords that seems to be overlooked, such as when people talk about Ruby being so productive for Web 2.0 style projects. It was also part of my objection to the introduction of JAX-WS and a web server into Java SE 6. It's the rise of async coding models.


Callbacks in JAX-WS are a great example of this, but so are lots of the dynamic elements being done in Web 2.0 and the eventing models of EDA. They require people to code in a non-linear model and require an environment that can handle threads sensibly and efficiently. Async isn't just a matter of "waiting for a request", because then you are blocking; it's a matter of actually being able to trigger off a related piece of code when you get the request and then tie it back to what was originally happening and may still be happening.
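For those who haven't seen it, below is a minimal sketch of what the callback style looks like with JAX-WS 2.0. The port type and response bean are hypothetical stand-ins for what wsimport would generate from some imaginary WSDL, not a real API; the point is simply that handleResponse fires later on a runtime-managed thread, which is exactly where the thread-safety questions start.

    import java.util.concurrent.Future;
    import javax.xml.ws.AsyncHandler;
    import javax.xml.ws.Response;

    // Hypothetical stand-ins for generated client artefacts; not a real API.
    interface GetQuoteResponse {
        double getPrice();
    }

    interface StockQuotePort {
        Future<?> getQuoteAsync(String symbol, AsyncHandler<GetQuoteResponse> handler);
    }

    public class QuoteClient {

        public void requestQuote(StockQuotePort port) {
            // The handler fires later, on a runtime-managed thread, so anything
            // it touches must be safe to share across threads.
            Future<?> pending = port.getQuoteAsync("ACME",
                    new AsyncHandler<GetQuoteResponse>() {
                        public void handleResponse(Response<GetQuoteResponse> response) {
                            try {
                                System.out.println("Price: " + response.get().getPrice());
                            } catch (Exception e) {
                                e.printStackTrace();
                            }
                        }
                    });
            // The calling thread carries on immediately; tying the reply back to
            // whatever it is doing now (and may still be doing) is the hard part.
        }
    }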

Async is hard, and yet all I've read so far on the web is lots of people going on about how great this new async model is, without a mention of thread safety anywhere. The rise of async (and of technologies like BPEL that have async baked into them) also raises an interesting question about the quality of developers and platforms required.

Ruby et al are out of the running for professional use until they are thread safe; if you can't even get to that level then how on earth are you going to build on top of it? So yes, you can build something really quickly, then lob in a nice bit of async and you "sometimes" get random behaviour... nice. Groovy on Rails could be a migration path for those folks of course, as you get the benefits of a thread-safe platform. Saying "it was okay when I tested it" isn't enough.

Async testing frameworks are going to be needed, along with greater use of languages (like BPEL) which include a way to clearly express async behaviour. Currently there isn't really much out there that can test async applications, particularly around the Web 2.0 JavaScript style model (oooo, multi-threaded JavaScript... nice!). As SOA and Web 2.0 kick off in a big way and people start using eventing and messaging models for their service interaction, and then lob an async interface on top of it, these will cease to be seen as "cool" things and start becoming the biggest code mess IT has ever had the displeasure to create.
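To give a flavour of what even a hand-rolled async test involves today, here is a minimal sketch in Java (JUnit 3 style, with an executor standing in for the real async service, all names invented): the test thread has to wait for a callback arriving on another thread and fail sensibly if it never does.

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import junit.framework.TestCase;

    public class AsyncOrderTest extends TestCase {

        // Stand-in for whatever really does the asynchronous work.
        private final ExecutorService worker = Executors.newSingleThreadExecutor();

        public void testCallbackArrives() throws Exception {
            final CountDownLatch done = new CountDownLatch(1);
            final String[] result = new String[1];

            // Kick off the "service call"; the reply lands on another thread.
            worker.execute(new Runnable() {
                public void run() {
                    result[0] = "order-42-confirmed";
                    done.countDown();
                }
            });

            // Wait for the callback and fail loudly if it never comes, rather than
            // racing past the assertion because the test thread got there first.
            assertTrue("no callback within 5 seconds", done.await(5, TimeUnit.SECONDS));
            assertEquals("order-42-confirmed", result[0]);
        }
    }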

Remember Meyfroidt's Law, and start planning now for how you are going to make async easy for the masses.




Tuesday, July 25, 2006

Please let IT continue playing with the toys...

Not picking on Sam, it's just where I picked up a follow-up to something that the folks at MWD had blogged about around IT/business alignment. All three make good points about not having IT enslaved to business cycles and about providing innovation to the organisation. The problem is that none of them admits the reality that most IT departments have not delivered any such benefits to their companies.

The plaintive plea of IT not to be "wedded" to these slow business cycles isn't actually about delivering innovation, it's about being left alone to play with its toys. Richard Veryard spectacularly misses this point and appears to be blind to successive generations of IT: the big EAI waste of money, the ERP customisation that failed spectacularly, the 80% of the money spent on just keeping the lights on with the current systems. His plea for a "layered" and "separated" architecture is in many cases exactly what has caused the current mess.

IT shouldn't be wedded to business cycles, but it must be focused on delivering what its customer (the business) wants, and on delivering value to that customer in the manner that customer wants. This means delivering IT that looks like the business and not like a series of feudal technology serfdoms.

I'm with Sam in that considering IT as a service organisation is a good idea: it needs to think on its feet and it needs to be responsive to its customer. Like any good service company it also needs competition, otherwise it just becomes a monopoly supplier and that just isn't healthy. But I do disagree that every organisation should be looking at the "value add" and the full cycle. Many IT organisations should look to deliver what the business actually wants first, then start considering what else they can do.

Before organisations start prattling on about IT strategy and "value add" they have to pass the entrance exam. This is simple... did you deliver what they wanted, when they wanted it and in the way they wanted? Is your IT estate in a condition where it changes in the way the business wants? Are you spending a decent proportion of the IT budget on new things rather than old things? And most importantly, does the IT estate actually make sense to the business?

When you can answer those questions correctly then it's time to consider IT as an equal partner able to deliver innovation and change. Until then any claim to be kept separate just sounds like my kids asking for ice-cream when they haven't finished dinner.

Pretending that IT isn't broken doesn't help move us forwards, and I'll stick by my statement that there is no such thing as IT strategy.


Using Services to give Process agility


There are an awful lot of pieces out there equating processes with agility, so I thought I'd come up with a simple example that shows how Contract First helps ensure a much greater degree of agility than trying to do it with process alone, and how process can get complex in a hurry.

So to demonstrate that I'm going to use the Burger Flippers guide to SOA. First off the Burger Flippers decide to model the process using BPEL, which I've lobbed on the left hand side here. Like all "good" BPEL techies, the person has sat down, worked through the process front to back and kept adding in steps until the job was done (I've at least let them call a sub-process for Fries, but believe me when I say I've seen that done inline as well).

So here we have a nice linear process that runs through and delivers the wonder of high fat foods back to the customer.

The trouble is that the person doing the selling isn't the person doing the burger flipping, so they begin to get confused and decide that they need to model this using BPMN, because swim lanes will make it much easier.
So now they've made it much easier to see what everyone should be doing and the world of the Burger Flippers is a happy place.

Then a change in business strategy means that instead of making every burger on demand they are going to have a "bin" which will maintain an active stock level in order to get customers in and out as quickly as possible. The bin is therefore responsible for ensuring the stock level and for placing the order for the burger. So as they've already got the BPMN, they elect to do what any good process person does... namely extend it.
So now we've got a nice complex BPMN diagram and we haven't even got to ordering drinks or having more than one type of burger!

Now if we'd taken a contract-first approach and split the business up into its different domains, namely customer service, fries and burgers, then we'd have had the following set of contracts.

Inside the first version of the Burger Service we'd have had only one service to worry about, the flipper, so any requests to the Burger Service would have been passed directly through.
With the new version however we'd have added in a new service to manage the demand and the bin, but the external interface wouldn't have changed at all.

In this model each of the services would have its own, independent process models, and the changes and optimisations in one, as long as they don't impact the interface, wouldn't reflect on the others. You could still generate a single BPMN diagram if you wanted, but a great deal of the flexibility comes from being able to view each of the different areas separately.
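To make that concrete, here is a purely illustrative sketch in Java of the burger contract; the names are invented, and the real thing would be a WSDL, but the point is that the consumer-facing operation stays identical while the implementation behind it swaps from made-to-order to bin-backed.

    // The external contract the customer service domain codes against.
    public interface BurgerService {
        String orderBurger(String burgerType, int quantity);
    }

    // Version 1: every order goes straight to the flipper.
    class MadeToOrderBurgerService implements BurgerService {
        public String orderBurger(String burgerType, int quantity) {
            return "flipped " + quantity + " x " + burgerType + " to order";
        }
    }

    // Version 2: orders are served from the bin, which manages the stock level
    // itself. The contract above has not changed at all.
    class BinBackedBurgerService implements BurgerService {
        public String orderBurger(String burgerType, int quantity) {
            return "served " + quantity + " x " + burgerType + " from the bin";
        }
    }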

This was a very trivial example, but the BPMN diagram is already getting complex with all of the different sub-processes and links. Having to modify in this way means it quickly becomes cluttered, and we start relying on process-to-process links rather than service-to-service links.


Wednesday, July 19, 2006

When Open Source meets procurement

One of the challenges that I've seen growing over the last few years is that of Open Source technology and the traditional company procurement cycle. I'm not talking here about Open Source companies such as Red Hat featuring JBoss, but about those Open Source projects like Hibernate, DROOLS, Tomcat or even things like Log4J, small-scale projects like GroovyRules, and even those smaller open source companies out there. These are all technologies that would meet a large number of business requirements, but they don't get used as often as they should, and more "feature rich" commercial products are used as the industrial steam-powered hammers to crack every nut.

So why is this? The main reason is, IMO, the procurement department, which in most companies has failed to react to the changing software market. Companies tend to have "approved" lists of suppliers for software, and these lists are arrived at in one of two ways:

  1. Projects had a need for a specific type of software that required them to buy something
  2. The company wanted to rationalise its suppliers for a certain type of software

When a project wants to use Open Source it will tend to just use it; the key here is that this information is not fed back into procurement so the product can go onto the approved software list. This happens for two reasons:

  1. It's too much effort
  2. If procurement knows, they might try and block it as it isn't already on the list

If however there might be a product purchase (money) involved, then procurement will go through a standard process to select the approved software. This will involve putting out a Request for Information (RFI) to a long list of suppliers with a standard set of questions. These responses will then be collated and graded, with the long list being reduced down to a short list of a few vendors who will then be sent a Request for Proposal (RFP); vendors will respond again, a winner will be selected and contract negotiations will begin. Key factors will be:

  1. How well the product meets the requirements
  2. Financial stability of the company
  3. How cheap it is and what discount was secured off list price
  4. Fit with current suppliers (fewer suppliers are easier to manage)

The problem with this approach is that it is fundamentally aimed at traditional software vendors who can invest the time, and money, into the response process and who have a recognised brand backing them up, with the accounts to match. This approach just doesn't work when Open Source technology should be considered. The first problem is: who do you send the RFI to? Most of these Open Source projects are federated and almost all don't have a sales team, so there is just nowhere for the procurement department to send the RFI, which means Open Source has already "qualified out" of the process.

Let's assume that they do find out who to send the RFI to; the next problem is getting someone to respond to it. Open Source developers aren't going to get paid any money if the company selects their product, so why on earth would they spend time responding to the RFI? Again this means the Open Source project is unlikely to respond and will again be qualified out. Maybe they do respond and the procurement department goes through the checklist; it doesn't look good for Open Source, as you couldn't produce any accounts for the last 3 years proving you are a stable company. Failed again!

So you think that is bad, don't you? Procurement should make more effort. But how can they make that effort? You see, here is the problem: the objective of the procurement cycle is to do it as cheaply as possible (what is the point in spending $10k to determine the right product for a maximum spend of $100k?), and this means pushing work out onto the supplier as much as possible. To evaluate Open Source means the company would have to respond to the RFI/RFP itself, which brings in lots of compliance and competition issues and also means that the company would have to approve budget towards funding the response process for that Open Source product. In other words, instead of pushing cost downwards, Open Source actually means an increase in cost for the company during the evaluation process. Sure, you can argue it might save money in the long run, but today that is $10k the company doesn't have to spend if it doesn't want to.

How do I get to $10k? Well, let's assume that the "internal" billable rate is $500 a day; this is the number the accountants say that a person costs the company. You can't just say "we'll do the RFI response", you have to assume you are going to do the full response (after all, if you don't get through to the RFP then as internal people you should be shot, as you know what is needed). So let's say 2 people for 3 days on the RFI response, and, with the RFP including a bit of a demo, 2 people for 10 days on the RFP: 26 person-days in all, which at $500 a day is $13,000. Hence calling it $10k, because you'd have to keep it to that mark to get it signed off.

Now let's assume a miracle occurs, an Open Source product gets through to the RFP process and even gets all the effort required to respond. The procurement department is now faced with questions of how it gets support for the product, how it contracts for the product, what the upgrade cycle should be, how it matches feature for feature against the competition, etc. The trouble is that this turns into a "my bucket is bigger than your bucket" competition. The products that win aren't always the ones that meet the requirements best; they are often the ones that offer the most "value add", aka "bells and whistles". Here again Open Source comes a cropper, as the products are normally (GUIs in Linux and emacs aside) built for purpose rather than for a sales campaign where these extras can be spun as being "free" or "included" in the price. Very few of these assessments actually involve building demos, beyond showing what the vendor already has, to prove that the technology works for your requirements; they are often just a run-off of analyst assessments and PowerPoints of features.

The financial assessment is a key one. Most Open Source projects don't have a company behind them, and if they do the company is pretty small and won't have the clout of a big player (imagine someone in procurement comparing Spring with .NET and looking at Interface21 v Microsoft in the assessment). If you are doing this internally you have to convince people that you will be able to do the support; claiming that the resources on the internet and the community of developers will help doesn't wash with procurement. "Why would anyone do anything for free?" they will ask; this is the finance department, remember. You need to be clear about the cost of support, with examples of where you've used Log4J for instance.

The procurement process hasn't changed in well over 10 years. Back then Open Source technologies were few and far between and rarely competed with their commercial equivalents; today, however, we need a new procurement process, and it needs to be one that aims not to buy the "ultimate" solution but to deliver products that do what the business wants. This means thinking about how products work in small, medium and large projects, or at different levels of complexity. It also means a different set of criteria for evaluation.

What procurement needs to do is start looking at fitness for purpose and actively having that proven; to include Open Source in this means that the company will actually have to invest its people's time in that process. Procurement will also need to actively look at free alternatives to the commercial products and work out how to assess the solution. This leads to a simple conclusion.

For Open Source to be included in a procurement process, the IT organisation must be responsible for the assessment and recommendations. If an Open Source product is selected then there is no reason for procurement to ever be involved. The IT organisation should assess based on fitness for purpose and on its ability to develop and work with that technology. If support is going to be needed then again the options need to be assessed, with commercial software becoming stronger in this area. Procurement should put down the basic criteria, but it should not be governing or leading the process. If it comes down to contractual negotiations then procurement should become actively involved in securing the best deal.

The key question here is how you secure that $10k to fund your assessment of the Open Source equivalents, and how you keep that assessment honest (most vendors would qualify out if they feel they are just an external stalking horse). The only real way to do this is to have people on all of the product assessments working actively with the vendors; this way you can compare your internal team with a properly supported person from a commercial vendor, which gives both sides something and makes it a fairer fight. Trouble is, this costs even more money and time from people who could be doing more valuable business work elsewhere.

Building the business case for this sort of approach isn't simple; linking it to a project and having each team (no more than 3) build the same project functionality can help, but still gives redundancy of effort. It really requires an IT department to understand the value of getting the right tool in, and making sure that the tool works. The first thing you are going to need is data for your case: do you have examples in your company where a commercial product (spent money) was replaced by an Open Source equivalent? Do you have examples where the procurement process selected and purchased a product which didn't perform on the project and caused issues? Do you have examples where picking a specific product made it difficult to get people with the right skills, which meant getting in expensive contractors?

Gather this information and work out the cost to your company of the current procurement approach over the last 12-24 months. This then gives you the basis to argue for a more interactive procurement cycle, one that includes more hands-on work. The key is that you are not proposing to allow Open Source in, but to be more rigorous with all vendors to make sure the company doesn't make the same mistakes again. The Open Source angle just becomes a happy side effect.

Open Source doesn't currently get used as much as it should, and a major reason for that is that it's not available to businesses via their normal procurement process. So either procurement has to change or Open Source needs to think about how it responds. Changing procurement means you need to think in their language, money, and be able to demonstrate that the current process has cost the company money, thus justifying the investment to fix it. Changing Open Source would be like boiling the ocean.




Tuesday, July 18, 2006

How SOA helps you hit the agile sweet spot

Over at InfoQ there is an article on what the "ideal team size" should be. Now, this isn't exactly news to anyone who has read the world's best book on software development (if you haven't read it you shouldn't be allowed into work tomorrow) or indeed who has tried to work in a large team, but it is one of the key ways that SOA can really help a development project.


Firstly, let's face the reality that there are certain things that 4 to 6 people can't build on their own: an Air Traffic Control system, the entire tax processing system of a large country, Rome, that sort of thing. And let's also face the reality that lots of large business system deliveries could be done with fewer people than are currently on them, but again probably not 4 to 6.

This is where SOA really can help as you kick off the project. I had a quick burst on this a while back, but I thought I'd go over the basics again:
  1. When you are doing a project, work out what the services are upfront.
  2. Clearly define the interactions between these services
  3. Assign separate teams to each service to formalise the interfaces
  4. Within each service look to break down further to create a set of implementation services that
    1. Can be developed behind clearly defined interfaces
    2. Can be developed in around a month by a few people
    3. Has a clearly defined unit & system test suite
  5. Formalise the interfaces and map the dependencies between services
  6. Plan a series of small projects within a larger programme of work
Here the Service Architecture's structure gives you a clear communication structure, and by having clear mock, unit and system test suites for the services you don't have to rely on honesty to track how things are going. At each level the teams are small, but overall the number of people able to work effectively is very high.
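As a rough illustration of what "formalise the interfaces" means in practice, here is the kind of thing each team would agree on day one, sketched in Java with invented names: the contract plus a throwaway mock, so the consuming teams can build and test against the interface while the real implementation is still being written.

    // The formalised interface the teams agree before any implementation starts.
    public interface CustomerService {
        String lookupCustomerName(String customerId);
    }

    // A throwaway mock the consuming teams develop and test against until the
    // real service lands; progress is measured by tests passing, not by honesty.
    class MockCustomerService implements CustomerService {
        public String lookupCustomerName(String customerId) {
            return "Test Customer " + customerId;
        }
    }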

This fights one of the accepted wisdoms of IT, namely that more communication is a good thing. It isn't: what is required is effective and directed communication, which is what a decent Service Architecture can help provide to a project.

Again this isn't new stuff; it's been around for many, many years in research on IT projects, and has been implemented by people who certainly wouldn't have used the trendy term "agile" to describe how they ran their projects. But they would have thought on day one about how to break up the project. Some of them used processes (CICS people for instance) and now we should be using services; the approach and the aim, however, are the same: it's easier to manage 4 people, or even 4 teams of 4 people, than it is to manage 16 people directly.


Friday, July 14, 2006

A consumer's view of service description

Something happened today that made me think about how we normally approach the task of describing a service. For about 12 or 13 years I had "non-conformist" hair; it went from short to phenomenally long before settling down about 8 years ago into what I liked to think of as the "cool surfer" look. So when people described me, one of the first phrases used was normally "ginger bloke with long hair".

Time takes its toll on all of us and as it began to look a bit "Bobby Charlton" I elected to get it cut.

Now here comes the shock. What was the objective of the look (discoverable interface) I elected to put forward to the world? It's to attract women; there is a bit of a confidence-in-thyself thing (but I've never been accused of being the introverted, shy, retiring type), but let's face it, the main objective is so women notice you and want to go out with you.

It turns out I was engaging in the worst sort of interface design (WILI - Well I Like It); the interface description I'd chosen was far from being the best option from the perspective of the consumers I wanted to use the interface. As soon as I got the haircut the verdict was unanimously positive. To be honest though, that washed over me as just one of those things.

Then today, as I came through Schiphol Airport, the woman behind the passport counter takes my passport and goes "you've changed a lot", followed, as she hands the passport back, with "much better now, much better". When people in passport control comment, then you know you got it spectacularly wrong.

Now I was clearly spectacularly lucky that when I met my wife in Paris she was able to look past this second-rate interface description to see the capabilities beneath. This, however, made me think about how we go about defining interfaces for our services.

One of the elements the Reference Model talks about is that a service must be discoverable, otherwise it's pointless and not really a service. Today's events made me realise that this simple concept actually has a profound impact if you think about it logically: it means that the primary goal of a service description is not to describe the capabilities it represents, but to make itself easy to discover so it can then describe its capabilities to potential consumers. So when you design your interfaces, make them look attractive to potential consumers, because that gives you a greater chance of being able to describe what the service actually provides.

Now there would be a specialist consultancy gig: Service Interface Marketing :)


Thursday, July 13, 2006

WS-RM, WS-RX, Reliable Messaging: which is what?

I've just been in yet another meeting where the wonder that is the confusion around Web Services Reliable Messaging caused a nice "deer in the headlights" moment with a vendor.


The conversation goes like this:
Vendor : "we are looking at using WS-RM in our next release"
Customer : "do you mean the work of the WS-RM OASIS group specification?"
Vendor : "Yes"
Customer : "So you mean you do WS-Reliability?"
Vendor : "Is that another name? I didn't know that, I just thought it was WS-RM"

Much fun can then be had. The problem is simple to describe (sort of). The WS-RM (Web Services Reliable Messaging) expert group, led by Coastin and Fujitsu, produced a specification in 2004 called WS-Reliability. There was then another group called WS-RX (Web Services Reliable Exchange), led by IBM (now WSO2, after Paul Fremantle left), SAP and... Coastin. This second group has produced two specifications called (wait for it...) WS-ReliableMessaging and WS-ReliableMessaging Policy Assertion, which are abbreviated to WS-RM and WS-RM Policy respectively.

The WS-RX/WS-RM specification actually only mentions that it really is the WS-RM specification on line 121, with the wonderful line:

[WS-Addressing], then the action IRI MUST consist of the WS-RM namespace URI concatenated with a
'/', followed by the message element name. For example:
http://docs.oasis-open.org/ws-rx/wsrm/200602/SequenceAcknowledgement

Which is pretty cunning.

So to be clear, WS-RX is the group that is defining what Reliable Messaging will be in the WS-* universe, so when people say WS-RM they should be talking about the specification released by this group. But you never know, there might be some vendor out there who has completely muppetted it, so it's always best to check.
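As a trivial illustration of "check", a sketch like the one below does the job: scan whatever sample message or WSDL the vendor hands over for the WS-RX namespace quoted above, and if it isn't there, start asking harder questions. Only the namespace string comes from the specification text quoted earlier; the rest is invented for illustration.

    public class WsRmSniff {

        // Namespace from the WS-RX group's WS-ReliableMessaging specification.
        private static final String WS_RX_NAMESPACE =
                "http://docs.oasis-open.org/ws-rx/wsrm/200602";

        public static boolean looksLikeWsRx(String vendorSample) {
            return vendorSample != null && vendorSample.contains(WS_RX_NAMESPACE);
        }

        public static void main(String[] args) {
            String sample = "<wsrm:Sequence xmlns:wsrm=\"" + WS_RX_NAMESPACE + "\"/>";
            System.out.println("WS-RX style reliable messaging? " + looksLikeWsRx(sample));
        }
    }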

So WS-RX = WS-RM, except where it doesn't, and where it doesn't the person doing it is wrong.

So next time you have a vendor in and you want a bit of sport, just remember the WS-RM mantra. You never know, you might find out that they have done the wrong one, but you'll certainly find out how much the guy saying "we are 100% behind the standards" knows what he is talking about.


Monday, July 10, 2006

Why Service != operation

There still appears to be some debate about whether a service is an operation or a collection of operations.

I have no idea why, because it really is pretty straightforward: a service has to be thought of as a collection, or maybe more accurately as a container.

A service is a mechanism to enable access to one or more capabilities
So says the OASIS SOA Reference Model, and there is a very good reason for it. If capabilities were matched 1-to-1 with services then there would be no point having services; we'd all just go straight to the capabilities. "Ah ha!" I hear the people wearing the pointed hats say, "so we don't need services."

No, that isn't what it says. What it says is that there are lots of capabilities and these need to be presented in a sensible way that enables them to be consumed, which is not likely if they are just sand on the beach and you've got to find the right grain. Enabling a service to provide access to a set of capabilities gives those elements some degree of cohesion; this makes understanding them much simpler and enables you to put in place consistent policies for defined sets of capabilities.
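In code terms (purely illustrative, with invented names) the difference is between one discoverable service grouping a cohesive set of capabilities and a separate "service" per operation:

    // One service giving access to several related capabilities, so a single
    // policy (security, SLA, versioning) can be applied to the whole set.
    public interface TicketBookingService {
        String searchFlights(String from, String to, String date);
        String bookFlight(String flightNumber, String passengerName);
        void cancelBooking(String bookingReference);
    }

    // The operation = service anti-pattern: three "services", no cohesion,
    // three sets of policies, descriptions and versions to manage.
    interface SearchFlightsService { String searchFlights(String from, String to, String date); }
    interface BookFlightService { String bookFlight(String flightNumber, String passengerName); }
    interface CancelBookingService { void cancelBooking(String bookingReference); }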

The other power that this approach gives you is that a capability could be accessed via more than one service. This means that if it's sensible for a given capability to be exposed in two places... that is okay, as long as you are aware of the versioning problems etc. As an example, booking a British Airways airline ticket is a capability provided by BA.com, their call centre, and via Expedia, Travelocity and the rest. One capability has been surfaced via multiple services (thanks in no small part to standardisation in the services that live behind them). This gives us another bonus: the service you access the capability via might not own the capability. Again that is okay; in fact it can be one of the ways of getting a flexible and powerful system.

All of this goes out the window if operation = service; then you have millions of services, all the same, with no aggregation of perspective, just aggregation of function.

Operation = Service is 100% the same as CICS, but without the power and sophistication.

A service enables access to one or more capabilities. End of discussion.


Friday, July 07, 2006

Well Accenture don't get SOA

Via the folks over at InfoQ and their article on defining the ESB, I came across a link to an interview with Accenture's CTO. Now, if there was a company I'd expect to understand that SOA as a technology approach won't work, it would be the people who used to be accountants. But it turns out that if you turn accountants into IT professionals then they become technology geeks.

InfoQ's summary of Accenture's position is:

  1. Use of eXtensible Markup Language (XML) to use application interfaces in a more standard way.
  2. Taking some business processes and turning them into web services.
  3. Introduction and full use of the enterprise service bus.
  4. The generation of Business Process Execution Language (BPEL) -- the ability through business process modelling tools and BPEL to create different application behaviour without changing the software

Good god, did this man STUDY to become this wrong? Yes folks, SOA isn't about thinking differently, or architecture, or anything actually different; it's just about using XML, whacking on some Web Services, lobbing in a bus and then, if you are REALLY brave, using BPEL.... oh, and guess what folks, according to the CTO at Accenture changing BPEL doesn't mean you change the software, because BPEL is of course some sort of magic language.

Now from one side this of course makes me amazingly happy, but from another it's just depressing. Some people have already done all of these steps and found out that the technology doesn't matter half as much as the practice.

If SOA is just about adopting XML, lobbing web services on processes, using an ESB and then trying to use BPEL, then it's not going to deliver on the promises. If you deliver in the same way, thinking about the problem in the same way, and only change the technology, then all you are doing is putting lipstick on the pig, and not even at the front where it will have some impact.

It does stun me how people, particularly people at an SI, can talk about SOA and fail to mention three things

  1. Services
  2. Architecture
  3. The Business View

Instead they concentrate on the technologies that exist today and how people can use them in the future. Even for a technology view this is brain dead. FIVE YEARS before people are using BPEL? Don't you think, just for a second, that something else will have been introduced above BPEL that will become the next hot thing? Taking a technology view of today's products and assuming it will remain pretty static for five years is 100% against what history has told us.

SOA has to be about the business and changing the way we think. IT for the sake of technology is what got us in this mess in the first place.


Thursday, July 06, 2006

Contract first BPEL to be in Eclipse...

Ask and ye shall receive....
Yes, this is definitely in the plan for a 1.0 release. I'll leave this defect
open to track it. I'll move it to BPEL UI instead of BPEL Model, as all of the
changes that need to take place are model-related. - James Moody
Amazing reaction time from the Eclipse BPEL team, and great news on the functionality that is due to be in there.


Should I use the Logo?

I was playing with POVRay (my favourite SOA renderer) the other day trying to create a picture that represented my view on SOA, and the logo at the top of the page was the outcome. The crystal ball is about the important bits, and all the technology pieces are behind it, along with the other bits that SOA impacts.

Now I'm aware that a 40k image might be irritating and I'm a big fan myself of clean interfaces. So if anyone has any objections to the logo let me know and I'll take it down.


Tuesday, July 04, 2006

Pharmaceutical Mashups

A few days ago Nigel Green commented that much of this "new" thinking actually goes back to general systems theory and the concept of value systems (not value chains, which are a completely different thing). Which got me thinking more about how people keep ignoring the lessons of the past, as if ignorance and optimism were a design pattern.


Drinking a bit the other night with Nigel and Sam Lowe (if Sam is the Cliff Richard of SOA then Nigel is Keith Richards: been there, done it, but sometimes needs to be told not to climb trees :) we got talking about all these new trendy terms like "Mashup" that are being bandied about. It really is amazing how people who haven't read (or experienced) the different challenges and solutions that have been applied over the years can suddenly leap upon something as being the silver bullet without understanding the challenges. You know the type: it's all about Mashups and "Web 2.0" and guess what folks... yup, the old ways are dead, long live the new. Same as when, in 2000, all those bricks and mortar businesses were going to be destroyed by WebVan, Boo, Clickmango and the like. It's all become simple now.


So the question went up: why isn't everyone just moving towards Mashups in their enterprise, dynamically assembling and deploying new applications and then seeing what survives? It's the next great hope of IT... Mashups can solve everything, it's about how businesses work, isn't it?


Err, no it isn't. Take a pharmaceutical company and apply the model. Don't worry about the FDA or all that regulation and process, it's just making you less agile; what you need to do is start rapidly assembling new drugs from the pieces you've got lying around, put them out on the market, and then what works will be successful. Hell, you never know, they might start finding new drugs, or at least popular ones, much quicker this way. But yes, there is the slight overhead of accidentally killing people on a regular basis; that doesn't matter though, it's all part of the process.


Stupid idea, eh? But why should parts of IT be any different? Do you want your medical equipment to be lobbed together in a Mashup? Or an Air Traffic Control system, a flight control system, or the boring old HR bit that pays your salary?


People proposing Mashups and such techniques as another "ultimate" solution for IT are just simpletons. They are viewing the world through their own blinkered existence and pretending that all of the complexity that exists is just a result of people not following this great new approach and instead doing it in some old and fuddy-duddy way. One size doesn't fit all; that is one of the great powers of SOA, it aims to help you use different approaches where appropriate. So hell, use a Mashup to get that new marketing campaign onto the website, but don't use it to process payments from customers or create new drugs.


There is no Silver Bullet, and anyone who hasn't read that book should stop having an opinion until they have.




Contract first for BPEL

Okay, so I've started having a play with the new BPEL 2.0 tools from Eclipse and NetBeans, and the phrase "early alpha" certainly comes to mind! But it is good to see that the two places where we are seeing these modellers are both in the Open Source arena, and they are being developed by people who really know what they are talking about: Edwin over at Oracle on Eclipse, and Charles (and Todd) over at Sun doing NetBeans... it does make me wonder what they could produce together though.


Anyhow, one of the things that was annoying me is that I've become a big fan of doing contracts early on in projects. This is an incredibly new and trendy approach, designing and defining the contracts before you actually start further development; it's so new and trendy that there is a paper called "Design by Contract" that was.... errr, written in the early 90s. Don't you love it when people think they've discovered the holy grail that everyone has been missing all these years? So anyway, it's a good practice: design the interface, in other words the WSDL, which should have the operations and the datatypes. Unfortunately, unlike Eiffel, the programming language from the guy who wrote Design by Contract, we can't define the formal contract very easily.


So anyway, let's suppose that we have a new service doing Campaign Management. It's a simple service that aims to automate the links between existing systems and facilitate the approval, release and withdrawal processes. So first off we define the WSDL.... which I won't cut and paste, but here is the picture.



Now this is where we have the first set of problems with the current BPEL tools... I can't start with a WSDL. This is plainly rubbish and there really isn't a very big reason for it. So that is the first request: I want to be able to start from a WSDL and then implement with BPEL.


Now this leads straight into problem number two. One BPEL = one WSDL, but this isn't what makes sense from a services perspective: my CampaignService needs to deliver five capabilities, so I need five BPEL = one WSDL. This makes partner links easier to handle, and by grouping these elements together you reduce the complexity and the clutter and provide consistency between your BPEL and non-BPEL implementations (let's face it, if you had to deliver this WSDL in C# or Java you really wouldn't be looking to do it in five different classes). This separation of BPEL-implemented things from OO-implemented things forces more information onto the consumer that they just don't want to know about.
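Purely as an illustration of the shape of that single contract, here it is sketched as a Java interface (JAX-WS style) rather than the actual WSDL; the create, approve, release and withdraw capabilities come from the description above, while the fifth operation is an invented placeholder.

    import javax.jws.WebService;

    // A hedged sketch of "one WSDL for the CampaignService", expressed as a
    // single annotated interface; the status operation is a guess, not part
    // of the real service.
    @WebService(name = "CampaignService")
    public interface CampaignService {
        String createCampaign(String campaignName);
        void approveCampaign(String campaignId);
        void releaseCampaign(String campaignId);
        void withdrawCampaign(String campaignId);
        String getCampaignStatus(String campaignId);
    }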



This also helps you when you are building the BPEL, as you can clearly identify calls to related elements (CampaignManagement) and calls to external things (IdentityManagement); and in the actual BPEL itself it makes no difference whatsoever, as partner links are just calls to a WSDL anyway.



So here we have the two processes, one of which (create) calls the other (approve) to deliver on the capabilities of the service.


From a design, development, management, publication and versioning perspective it makes it much easier if you can have a single WSDL to describe what the service should be. Implementation languages like BPEL shouldn't be forcing us to break our service architectures because they can't cope with the idea of a service providing access to multiple capabilities, which is the reason I raised the enhancement request with the Eclipse BPEL project. It would be fantastic if good practices like Design by Contract could be applied to all types of Web Service implementations, and not just the ones being built in OO languages. If BPEL really wants to play well in the SOA world then its implementors need to start thinking about the service question rather than the process one.



Monday, July 03, 2006

Selling SOA Again - be clear

Okay, I talked a while ago about selling SOA just being about selling; it's not magic and all that.


Well I'd like to add another element to the approach...

Understand what the hell you are trying to sell first. Don't start with a vague idea about what the business wants (they probably know) and then talk in vague terms about SOA as being important.

Understand the problem, link SOA directly to the problem and, most importantly, don't talk about any crap that the business just doesn't care about. So what if you use Web Services, REST, pink pixies or a bunch of smack-addict monkeys to do it... if you meet their goals then they just won't care. And don't talk about all of the possibilities... just talk about the one that you think they should do.

Be clear, be concise, be definite.
