Monday, January 31, 2011

JavaSE 7 review - a missed opportunity

JavaSE 7 fails to address the problems with JavaSE 6 and the challenge of creating an open platform for enterprise developers and organisations to innovate around. The single biggest challenge originally facing JavaSE 7, modularisation, has been de-scoped by the Sun/Oracle Java leadership despite their having had 5 years to deliver it. Feature complete? No, very much incomplete for what it needs to be, and a missed opportunity for Oracle and the Java community. It's sad to see such a poor release having such a detrimental impact on what was once the most vibrant, friendly and huge development community on earth.

So JavaSE 7 has been "approved" by the JCP even though it hasn't actually been through a proper JCP process; the specification has effectively been defined by the OpenJDK development team. So is JavaSE 7 any better from an enterprise or agile/lightweight development perspective than the Joe Sixpack-focused JavaSE 6? Which wasn't, IMO, focused on these core Java markets at all.

So what is in JavaSE 7? Well, from the OpenJDK website:

vm
  • JSR 292: Support for dynamically-typed languages (InvokeDynamic)
  • Strict class-file checking [NEW]
lang
  • JSR 334: Small language enhancements (Project Coin)
core
  • Upgrade class-loader architecture
  • Method to close a URLClassLoader
  • Concurrency and collections updates (jsr166y)
i18n
  • Unicode 6.0
  • Locale enhancement
  • Separate user locale and user-interface locale
io/net
  • JSR 203: More new I/O APIs for the Java platform (NIO.2)
  • NIO.2 filesystem provider for zip/jar archives
  • SCTP (Stream Control Transmission Protocol)
  • SDP (Sockets Direct Protocol)
  • Use the Windows Vista IPv6 stack
  • TLS 1.2
sec
  • Elliptic-curve cryptography (ECC)
jdbc
  • JDBC 4.1
client
  • XRender pipeline for Java 2D
  • Create new platform APIs for 6u10 graphics features
  • Nimbus look-and-feel for Swing
  • Swing JLayer component
web
  • Update the XML stack
mgmt
  • Enhanced JMX Agent and MBeans [NEW]

So let's go through these from the perspective of what most people do with Java, namely the delivery of enterprise quality applications, primarily delivered on servers and more and more in a virtualised environment. Yes, JEE is more for those folks than SE, but lots of smart folks deliver server-side apps, or desktop agents, which aren't about delivering a Swing GUI; they might use the Eclipse approach if they are doing desktop, for instance. The point is that these folks, who are the vast majority of Java's developer base, choose their frameworks and libraries for their specific problem. So how does this new feature set help us?
  1. Dynamic language support (InvokeDynamic) - irrelevant for Java folks, but it makes sense in the VM and might even support some clever language changes in the future; it's not for the Java folks though, so let's say - Negligible improvement
  2. Strict class-file checking - okay, needed and part of the specs - Minor improvement
  3. Small language enhancements - does what it says on the tin, minor but nice stuff (see the sketch after this list) - Minor improvement
  4. Class loader stuff - needed, but again it's minor - Minor improvement
  5. Concurrency and collections stuff - okay, I'm biased here, but if Doug says we need it, we need it. Things like the fork/join framework are great additions, and with the rise of multi-core this is important stuff - Improvement but not a full "point" release
  6. Internationalisation - required, but again - Minor improvement
  7. IO improvements - good stuff, and for certain applications great to have; impact wise it's the same as concurrency - Improvement but not a full "point" release
  8. Crypto - now for me we are into the world of "it's a library", so yes it's good that we can all use ECC crypto... umm, but how many people actually do? Would a dynamic class loader via URL or a build compliance piece not be better? - Developer decision, bloat risk
  9. Client - all of this is a waste of time for enterprise developers. Great, I've got a new 2D render pipeline; that really helps the performance of my server-side app and just kicks ass with the extra weight this adds to provisioning a VM - Desktop
  10. Web - how misnamed is this one? It's actually XML, and again this is the sort of thing that would be much better left to developers and deployments. We've seen before that having JAX-WS hard-coded in JavaSE was a dumb idea and this applies generally to the XML pieces. Let's be clear here: people choose their own XML encoders and approaches on a regular basis. Having a standard Java library for these things, and enforcing them as standards at the JavaEE level, makes sense. It does NOT make sense for these to be part of the JavaSE platform, as people who don't want to use JavaEE are liable either to not want to do WS at all or to be using a framework that bundles a later version or different XML libraries - Pointless upgrade to bloat
  11. Management upgrade - I like MBeans, I've written lots of fun stuff with MBeans that made Tivoli users smile, but did I care whether it was in JavaSE or not? No I did not. If I'm deploying JMX to manage my Java app I'd like to bundle it in the deployment. It even says that if you were a JRockit user you already had this stuff - Library upgrade
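As a footnote to item 3, here's a quick sketch of the sort of Project Coin changes we're talking about; these are the headline Coin features, and the config.txt file is just an invented example:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class CoinDemo {
    public static void main(String[] args) {
        // Diamond operator: no need to repeat the type parameters
        List<String> names = new ArrayList<>();
        names.add("coin");

        // Underscores in numeric literals, and binary literals
        int million = 1_000_000;
        int mask = 0b1010_1010;

        // Strings in switch
        String env = "prod";
        switch (env) {
            case "dev":  System.out.println("dev mode"); break;
            case "prod": System.out.println("production " + million + "/" + mask); break;
            default:     System.out.println("unknown");
        }

        // try-with-resources: the reader is closed automatically
        try (BufferedReader in = new BufferedReader(new FileReader("config.txt"))) {
            System.out.println(in.readLine());
        } catch (IOException | IllegalStateException e) { // multi-catch
            System.err.println("failed: " + e.getMessage());
        }
    }
}
```

Nice to have, but you can see why I file it under "minor".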
So what have the Sun, now Oracle, team given us for these 5 years of work by their whole team, in a world where they got to set the agenda? Well, I'm going to drop out the Doug Lea stuff, as that is Doug Lea FFS, and he has quit the JCP. So what are we left with that we, as enterprise developers of Java, can look forward to? A few patches to class loaders, the VM, internationalisation and the language. A bit of improvement for those doing file and socket handling at the lowest levels. The rest? Basically libraries that we are either already using (e.g. crypto) or really wish weren't in there at all (JAX-WS).

And what about the sort of things I thought would be good in JavaSE 7? Well, a quick look at the deferred stuff speaks volumes...
  • JSR 294: Language and VM support for modular programming
  • JSR 308: Annotations on Java types
  • JSR TBD: Project Lambda
  • Modularization (Project Jigsaw)
These were the things that were talked about for JavaSE 7 over 5 years ago. A fully funded, fully fledged team has been working on these elements for over 5 years and has deferred them until the next release. Modularisation in particular was discussed during JavaSE 6 but was felt to be too aggressive for that release. It's rather hard to believe, when we look at the rise of languages like Erlang, support pieces like Hudson and the creation of entire new business models, that a well understood task like modularisation (something people have been doing beyond the SE platform for years with Ivy and Maven, and inside it with Java Web Start) is such a mammoth task. To put it in context: J2SE 1.4 (assert, regular expressions, exception chaining, IPv6) to Java SE 5 (generics, annotations, auto-boxing, varargs, concurrency utils) was 2.5 years; 5 to 6 was 2 years.

So is JavaSE 7 a great leap forwards that addresses the enterprise challenges of Java and leaves developers able to innovate and drive the industry forwards?

Well, we all know the answer to that. No it isn't. The posting of this spec on the "Open"JDK site is also rather telling; the message to Apache and others appears clear: "innovation" is whatever is defined and constrained within a very small and very slow moving mindset, one that is less driven by industry and enterprise demands and more by local Silicon Valley views on what might be cool.

If JavaSE 7 were tagged as JavaSE 6.5 it might be a bit more accurate, but it looks like we will have to wait until JavaSE 8 (late 2012) before the things that should have been done back in 2006 are done. The original JSRs behind this kicked off in 2004/5.

I can't think of anything that better highlights the problems with the current approach to the evolution of Java than modularisation. In 2005 everyone knew it was a problem; in 2006 it was pushed from JavaSE as it wasn't quite formalised; and thanks to the mentality, management and approach of the Java platform we'll be lucky if it's ready 7 years later, and very lucky if the solution is widely applauded. To be clear: this was the ONE big thing that the Java team had to do, and it was originally scheduled for JavaSE 7.

Conclusion: Java has lost 5 years of innovation and needs a significant kick up the arse to change its direction and pace of innovation.

A key part of this comes down to how Apache are treated and how much control is released in order to grow the market, which will benefit Oracle, who can commercialise way better than Sun. As Bill Joy used to say, "the smartest people don't all work at Sun", and the shift of smart people away from Java as it has stagnated is the biggest single issue facing the platform today. Something needs to change.


Data Services are bogus, Information services are real

One of the questions that used to be asked, or proposed as fact, in old school SOA was the idea of "Data Services". These were effectively CRUD wrappers on database tables, and the idea was that they were reusable across the enterprise. I've said many times this is a dumb idea, because what actually matters is the information, which means context and governance.

Now the other day when I was talking about MDM, a bright spark pointed out that I hate data services... but isn't MDM just about data services?

It's a good challenge, mainly because that is exactly how many people view MDM. MDM, when done well, is about the M and the M, not the D, i.e. it's more about Mastery and Management than it is simply about Data. What does that mean?

Well, let's take everybody's favourite MDM example, "Customer". A data driven approach would give us a service of:
Service Customer
  • Capability: Create
  • Capability: Update
  • Capability: Delete
  • Capability: Read

Now this is the "D" approach to MDM and SOA, also known as the Dunce's approach, its about Data Services and viewing the world as a set of data objects.

The smart approach is to view MDM as an information challenge and deliver information services, so instead of the data centric approach we get:
Service Customer
  • Capability: Establish Prospect
  • Capability: Establish Customer
  • Capability: Modify
  • Capability: Change Status
  • Capability: Find
  • Capability: Archive Customer
  • Capability: Validate Customer
  • Capability: Merge Customers
  • Capability: Split Customer
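To make the contrast concrete, here's a rough sketch of the two contracts in Java; every type and method name here is illustrative, not taken from any real MDM product:

```java
import java.util.List;

// Stub types just to make the sketch self-contained; all names are invented
class CustomerRecord {}
class ProspectDetails {}
class ProspectId {}
class CustomerId {}
class CreditCheckResult {}
class CustomerQuery {}
class CustomerRejectedException extends Exception {}
enum CustomerStatus { PROSPECT, ACTIVE, SUSPENDED, ARCHIVED }

// The "D" approach: a thin CRUD wrapper on a table - no context, no governance
interface CustomerDataService {
    CustomerRecord create(CustomerRecord record);
    CustomerRecord read(String id);
    void update(CustomerRecord record);
    void delete(String id);
}

// The information approach: business capabilities in context, where the
// service reserves the right to say "no"
interface CustomerService {
    ProspectId establishProspect(ProspectDetails details);
    CustomerId establishCustomer(ProspectId prospect, CreditCheckResult check)
            throws CustomerRejectedException; // the service can refuse
    void changeStatus(CustomerId id, CustomerStatus newStatus)
            throws CustomerRejectedException;
    List<CustomerId> find(CustomerQuery query);
    CustomerId merge(CustomerId survivor, CustomerId duplicate); // lifecycle management
    void archive(CustomerId id);
}
```

The second contract can refuse a request, which is exactly where the governance lives.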

Here we start exposing the customer service and make two things clear:
  1. We are talking about the customer in context
  2. We reserve the right to say "no" to a request

So this is where customer genuinely can be used from a single service across the enterprise. This service takes on the responsibility to authorise the customer, or at least ensure that authorisation is done; it sets the quality and governance standards and doesn't allow people to do whatever they want with it. It includes the MDM process elements around customer management and provides a standardised way of managing a customer through its lifecycle.

This is fundamentally the difference between a business SOA driven approach, which concentrates on services in their business context and with their business governance, and a technically driven approach, which looks to expose technical elements via a technical wrapper.


Saturday, January 29, 2011

Rightscale - cloud provision as a commodity

I made a comment about cloud providers not being good long term investments. Well, having a drink with Simon Plant of Rightscale, it became clear that it's already pretty much a cooked goose in the cloud space. Rightscale do the VM creation and provisioning stuff across most of the public cloud providers, as well as folks like VMware in the "private" cloud space. What does this mean?

Well, simply put, it means that you can have Rightscale create the VM image for you in a way that means you can deploy it to pretty much any cloud you want. This means you can start doing SLA/price arbitrage across providers and reduce any potential lock-in to the cloud provider. I like to think of this as an "iPhone strategy": before the iPhone it was the carrier who would specify what the phones did and would put network specific cruft on them. Apple came along with the iPhone and said "nope, our phone, exactly the same, every network, managed by us". Rightscale is effectively the iPhone and iTunes for your cloud provisioning. By using an intermediary approach you get to control not just the standard stuff like number of VMs, CPUs, storage etc but the more important stuff like which actual cloud you are deploying to. If you want to shift it in-house from an external provider then you can, if you want to shift between providers then you can, and if you want to start off internally and shift it externally when demand spikes, or when it makes financial or security sense, then you can.

So Rightscale are doing to clouds what clouds have done for tin... commoditising them. This means cloud providers are in a volume business with retail style metrics and margins. Effectively Rightscale are achieving commercially what the open cloud standards efforts have so far failed to do publicly.

So, in the same way you wouldn't buy an Intel/AMD box where your software could only ever run on Dell (for example), why choose an approach to clouds that means you can only ever use one provider?


Oh and I bought the drinks BTW.


Wednesday, January 26, 2011

What Oracle need to do with Java

Okay, so after the "Java is dead" post there has been a reasoned follow-up by Forrester. Now I've made the point that it's not Oracle's fault and suggested that the first thing that needs to change is the current Java leadership from Sun who have been retained at Oracle.

But what should be done with Java? Well, a few of the things really are rather obvious and are based on the shift around mobility. J2ME was hugely successful in low powered devices, but Android and iOS have really shown a new way forwards. Java needs to recognise that JavaSE is pretty much irrelevant these days and start planning accordingly.
Stop viewing JavaSE as the base platform; the desktop is irrelevant

So what does this mean? Well, it means that we need to break Java free from the cruft of JavaSE. Yes, for compatibility reasons there should still be a JavaSE (let's call it Java Desktop Edition); this is something I talked about in 2006 on what should be done for JavaSE 7, but I'd say it's now even more relevant. We need a base Java platform that people can innovate from. "Write once, run anywhere" was a good slogan but its time has passed, or rather it's now more specific: "write once, run on any compatible platform". So if you build for Java Desktop it works on Java Desktop, if you build for Android it works on Android, and if you build for Java Enterprise Edition then it works on... yup, Java Enterprise Edition.

What all of these things need, however, is a much smaller and tighter base platform and a standard way of specifying the libraries that are, and can be, loaded to support an application. The base platform needs to be small and tight to enable it to be a base for smartphones, standard phones and embedded devices. Moore's Law sort of indicates that even in the embedded space we are getting to the level where we don't need the KVM or more restricted VMs. That is the second piece:
The base platform is VM + the minimum libraries + a dynamic loading approach
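As a sketch of what a "dynamic loading approach" could mean even with today's plumbing, here's a hypothetical example using URLClassLoader; the jar path and class name are invented:

```java
import java.net.URL;
import java.net.URLClassLoader;

public class LibraryLoader {
    public static void main(String[] args) throws Exception {
        // The library lives outside the base platform and is pulled in on demand
        URL[] jars = { new URL("file:lib/xml-stack-2.1.jar") }; // invented jar

        // URLClassLoader is closeable in JavaSE 7, so the library can be
        // loaded, used and discarded without ever being part of the platform
        try (URLClassLoader loader = new URLClassLoader(jars)) {
            Class<?> parser = loader.loadClass("com.example.xml.Parser"); // invented class
            Object instance = parser.getDeclaredConstructor().newInstance();
            System.out.println("Loaded " + instance.getClass().getName());
        }
    }
}
```

A real mechanism would add versioning and dependency metadata on top, which is exactly the sort of thing Jigsaw was supposed to standardise.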

There is quite a radical shift in those two pieces, but it's about really developing Java as a language and allowing innovation around its platforms. Having a core where people can innovate on the basic language elements, and have all areas leverage that, is much better than having a world where putting something like JAX-WS into a desktop profile is considered a good idea. This brings us to the third piece, which is about control.

There is a need to "bless" certain platforms and enable commercial and open source vendors to have their solutions certified against those platforms, they also need to be able to control the core VM (not the libraries) and the syntax of the language, using the term "Java" should be restricted to those who use the standard VM and the standard syntax. Other companies could also choose to create their own platforms (e.g. Android) and have certification elements but these would have to have certification that they are "Java" in terms of VM and syntax.
Stop viewing other platforms (Apache, Android) as competition and see them for what they are - a massive increase in the available market for Java and Java developers

So how to "bless" these elements? Well clearly the JCP has been subverted by, IMO, the myopic Sun (now Oracle) leadership who railroaded through their own opinions based on completely flawed views of the market. So should the JCP be ditched? No it shouldn't but what it should be is revolutionised. I suggest that there be a core Java board which is focused purely on the platform and has a remit that covers just the JVM and language syntax.

Next up there is a "libraries" board which covers standards that are considered to be library, or library loader, elements which should be standardised. Pieces like JAX-WS, MIDI support et al would go here. This board and its standards teams focus purely on getting the standard right.

Next we have 4 "profile" boards: Mobile, Desktop, Web and Enterprise. These are the equivalent of the JavaSE and JavaEE groups, but their only role is in agreeing what goes into a profile, not in driving those profiles. This allows people to create the biggest, most bloated Desktop profile that they want without impacting EE (which does not have to be based on Desktop). Web is a new profile aimed at addressing the challenges of browser deployments; it might be too late for Java to get involved in this battle but I think it's worth considering.
JCP to be reorganised around Core, Libraries and Platforms

The main thing Oracle need, though, is a leader to take on Java in its new commercial age; I presented on this in 2008 and it really hasn't changed. Java needs to grow up but still allow itself to be a playground for creativity. Sun were a great creative company but they didn't have the brains to make the commercial shift required. Oracle do have the commercial brains, but they haven't yet applied them to Java the way they have in other areas, most notably middleware.

So that is what I think Oracle should do
1) Change the leadership
2) Recognise that JavaSE is an anachronism
3) Split between Core, Libraries and Platforms

It's this that I think will help Java support the huge legacy base that it has and enable the innovation that is badly needed. But as with all good companies this change in direction needs to come from the top, which means Java needs a "suited James Gosling" who can take a pragmatic commercial view on how to create the biggest possible Java market, because ultimately that is how Oracle, one of the world's largest Java vendors, will make the most money.


Tuesday, January 25, 2011

Cloud providers and software vendors aren't a great long term bet

I'm noticing a bunch of cloud providers attracting massive funding rounds, and people are talking about mega-billion industries and everyone getting hugely rich.

I'd like to sound a note of caution, not on the idea that cloud is important or not going to happen, but on the idea that there are loads of companies that are going to make loads of money from it. Let me tell you a quick story about a company that believed in telecoms in the late 20th century. The company was called GEC and was one of the giants of UK industry, a GE of the UK with a very strong defence arm. The company had billions in the bank and was one of the most solid stocks in the FTSE 100. Then it got some new leaders who loved the idea of telecoms and its "better multiples" and wanted to get out of that boring, profitable, defence industry and go heavy into telecoms. In 5 years, from 1997 to 2001, these new leaders invested all of the cash pile, sold off the defence arm and turned a once towering industrial into a bankrupt shell.

How about another? Let's take Vodafone and their stock chart across the telecom bubble.


Want another? Alcatel-Lucent. Note that I'm talking here about two companies that survived the bubble, as well as one huge company that bit the bullet as a result of it. One that never recovered would be Nortel, a company that during the bubble was at one stage worth a third of the total value of Canadian listed companies! Startups like Winstar were allegedly worth over $4bn but went pop within a year. Throw in AOL's merger with Time Warner and the picture is pretty complete: massive over-investment in infrastructure providers and technologies with a view that the market was basically infinite.

This isn't the first time that an infrastructure play has fundamentally failed to make long term money. Roads, rail and even canals had their own booms and busts as it became clear that it was too expensive to build all that infrastructure which people fundamentally didn't want. This is especially true in something like telco, and the cloud, where the cost of provision is being driven relentlessly downwards. If price/performance keeps doubling roughly every two years, then investing $10bn in IT infrastructure today buys you what $2.5bn will buy in 4 years' time; in other words your investment is worth a quarter of its retail value in 4 years. Even today, with the boom in mobile internet, you could argue that the large providers aren't massive growth stocks but are instead acting as traditional infrastructure providers, and many aren't back to their peak of ten years ago.

So what does this mean for cloud? Well, this is another infrastructure play. SaaS and end user pieces like Facebook are different types of companies, but cloud companies are fundamentally about infrastructure, so there are a couple of things to note:

1) It's probably too late to get in at the ground floor with startups, although a few will still grow spectacularly and pop
2) It's still worth getting into a cloud startup
3) Start looking for the exits when you compare your company with a "dull" company and think "hell, we could be worth as much as Walmart soon"... that is the time to jump

Stock and investment wise it's fine to ride the wave that these companies represent, as we should never avoid making money from the up-curve of a bubble.

In the long term it's a telco model, à la Vodafone or AT&T, so expect the big investments from Microsoft, IBM and Amazon to yield minor returns initially but provide a long term steady income, at the sort of levels that would make people just hold onto the cash if they sat back and thought about it.


Integrating the cloud with an incremental backup solution

Okay, it's time for another "can someone please just build this" type of request. So let's get a few things straight:

1) I know that there are cloud backup solutions out there
2) Yes I know that theoretically you could set up rsync to do this.

Now here is the problem

On my computers I have three basic sets of data

1) Work related information - needs to be backed up securely, and if I lose it then it's a pain, but anything decent will be in my email archive
2) Personally highlighted information - e.g. my flagged photos, stuff that is irreplaceable
3) Stuff I'd prefer not to lose - e.g. the rest of the photos and videos

Now effectively this could mean 3 different backups, but I'd say it's actually only two sets, which need to work together:

1) a Local disk backup
2) an occasional cloud backup

Now, to people who run data centres this is all old hat, basically the "ship the tapes off-site" part, but I think we can make it a little smoother. So what do we need?

1) A way to specify what is in the backup
2) A way to specify what is backed-up further into the cloud
3) A way to specify the security applied to the backup files

I'm going to deal with 3 first. If you have an encrypted hard-drive or password protected elements then clearly the default on the backup needs to be at least that strong. This presents a bit of an issue, as it means you have to be able to decrypt to determine the deltas; in other words an approach which is linked to the security profile of the user makes sense. Where it's the hard-drive that's encrypted it's easier, as once you are in, you are in.

So now to the other pieces. I won't cover item 1 as that is pretty standard in most backup solutions, so I'll go onto the second instead.

What we need is some way of specifying which elements get sent to the cloud for offsite backup. As an example, in iPhoto you might decide that just the flagged photos need to go to the cloud, or that all photos go. These elements are automatically added to the local disk backup but would then also be added, for instance daily, to a cloud backup.
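As a minimal sketch of what specifying that second tier could look like, assuming a simple rule-based classifier (all the paths and rules here are invented):

```java
import java.io.File;

public class TieredBackup {
    enum Tier { LOCAL_ONLY, LOCAL_AND_CLOUD }

    // Personally highlighted, irreplaceable content also goes offsite;
    // everything else only ends up in the local disk backup
    static Tier classify(File f) {
        if (f.getPath().contains("/Photos/Flagged/")) {
            return Tier.LOCAL_AND_CLOUD;
        }
        return Tier.LOCAL_ONLY;
    }

    public static void main(String[] args) {
        File[] samples = {
            new File("/Users/me/Photos/Flagged/wedding.jpg"),
            new File("/Users/me/Movies/holiday.mov")
        };
        for (File f : samples) {
            System.out.println(f + " -> " + classify(f));
        }
    }
}
```

The daily cloud job would then just sweep everything tagged LOCAL_AND_CLOUD.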

Now the surprise here is why Time Machine doesn't have this sort of facility in conjunction with MobileMe or another Apple branded cloud provision, or why Microsoft's massive spend on the cloud hasn't produced something similar. It really is an obvious idea: key sections (or all of it, if you have a fast enough connection and the cash) should float from your local storage onto the cloud.


Tuesday, January 18, 2011

Public Cloud is temporary, virtual cloud will move compute to the information

This is another of my "prior art" patent ideas. It's something I've talked about before, but reading pieces about increasing data volumes has made me think about it more and more.

The big problem with public cloud is that the amount of data that needs to move around is growing exponentially. This doesn't mean that public cloud is wrong, it just means that we will need to look more and more at what needs to be moved. At the moment a public cloud solution consists of storage + processing, and it's the storage that we move around; that is, we ship data to the cloud and back down again. Amazon have recognised the challenge, so you can actually physically ship storage to them for large volume pieces. There is, however, with the continuing rise of Moore's Law and virtualisation, another option.

Your organisation has lots of desktops, servers, mobiles and other pieces. The information is created and stored fairly close to these things. The data centre will also contain lots of unused capacity (it always does), so why don't we view it differently? Rather than shipping storage, we ship processing: you virtually provision a grid/Hadoop/etc infrastructure across your desktop/server/mobile estate, as close as possible to the bulk data.
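A toy sketch of the scheduling rule this implies: pick the node that already holds most of the job's input and ship the compute there. The node names and numbers are invented:

```java
import java.util.HashMap;
import java.util.Map;

public class LocalityScheduler {
    // Given how many bytes of the job's input each node already holds,
    // pick the node where the least data has to move
    static String pickNode(Map<String, Long> localBytesByNode) {
        String best = null;
        long bestBytes = -1;
        for (Map.Entry<String, Long> e : localBytesByNode.entrySet()) {
            if (e.getValue() > bestBytes) {
                best = e.getKey();
                bestBytes = e.getValue();
            }
        }
        return best; // ship the VM/process here rather than the data elsewhere
    }

    public static void main(String[] args) {
        Map<String, Long> nodes = new HashMap<String, Long>();
        nodes.put("desktop-estate", 120_000_000_000L); // most of the raw data is here
        nodes.put("public-cloud", 0L);                 // would need a huge upload first
        System.out.println("run compute on: " + pickNode(nodes));
    }
}
```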

This is when it really gets cloudy, as you now move compute to where it can most efficiently process information (Jini folks can now say "told you so") rather than shifting storage to the cloud.

The principle here is that the amount of spare capacity in a corporate desktop estate will outstrip that in a public cloud (on a cost/power ratio) and, thanks to its faster network connections to the raw data, will be able to process the information more efficiently.

So I predict that in future people will develop technologies that deploy VMs and specific process pieces (I've talked about this with BPEL for years) to the point where they can most efficiently process the information.

Public clouds are just new data centre solutions, they don't solve the data movement problem. A truly cloud based processing solution would shift the lightest thing (the processing units) to the data rather than moving the data to the processing units. The spare capacity in desktop and mobile estates could well be the target environment for these virtual clouds.


Sunday, January 16, 2011

Using REST and the cloud to meet unexpected and unusual demand

I'm writing this because it's something that I recommended to a client about 3 years ago, and I know they haven't adopted it because they've suffered a number of outages since then. The scenario is simple: you've designed your website to cope with a certain level of demand and you've given yourself about 50% leeway to cope with any spikes. Arguably this means that you are spending a third more on hardware and licenses than you need to, but realistically it's probably a decent way of capacity planning without getting too complex.

Now comes the problem though. Every so often this business gets unexpected spikes. These spikes aren't a result of increased volume through the standard transactions but of a peak on specific parts of the site, often new parts related to (for instance) sales or problem resolution. The challenge is that these spikes are anything from 300% to 1000% over the expected peak, and the site just can't handle it.

So what is the solution? The answer is to use the power of HTTP, and in particular the power of the redirect. I'm saying that this is REST, but it's something I'd done before I knew about REST; I'm not one to let a bit of reality get in the way of marketing ;) When I'd done it previously it was prior to cloud, but the architecture was basically the same.

First you split your infrastructure architecture into two parts

  1. The redirecting part (hosted in the cloud, or at least on a separately scalable part of your infrastructure)
  2. The bit that does the work


The redirect part just sends an HTTP redirect (code 307, so it isn't cached) to the new site; so let's say http://example.com goes to http://example.com/home. It's important to note here that this is the only page we are redirecting. It's not the case that every page has this, just the main page, because when there is a mega-spike it tends to come via the homepage.
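For illustration, a minimal sketch of the redirecting part as a Java servlet; the target URL is invented, and is the thing you flip to the microsite when a spike hits:

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// The tiny "redirecting part": all it does is send a 307 for the home page,
// pointing at whichever site should take the load right now
public class HomeRedirectServlet extends HttpServlet {
    // Normally the real home page; flipped to the microsite during a spike
    private volatile String target = "http://example.com/home";

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setStatus(HttpServletResponse.SC_TEMPORARY_REDIRECT); // 307, not cached
        resp.setHeader("Location", target);
    }
}
```

Because it holds almost no state and does almost no work, this part is trivially cheap to host in the cloud and scale.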

Now I'm always one to warn about chattiness, but the wonder of a redirect is that the user just sees a URL flicker in their browser and then the normal page loads. This is certainly the overhead of a single extra call, but from experience this isn't a big deal in modern sites where a page is made up of multiple fragments; the additional redirect doesn't add a significant amount, and it's only on the initial page load, which now takes two network hits rather than one... an increase in latency for that homepage but not much of an increase in terms of load time.

Now let's wander into the world of cloud. What does this get us, and why is it worth adding this overhead?

Well, when you have an extraordinary event you should really think about creating new pages for it rather than just tacking pages onto your normal site. If you are in a scenario where 70-98% of your visitors are looking at a specific piece of content then you are much better off thinking in terms of a microsite rather than adding it to your normal site.

All of the old URIs beyond the main page should still go to their old places, but the home page needs to be redirected to your new microsite. Now some people will be screaming "just use a load balancer", and they have a bit of a point, but I've always been a fan of offloading processing onto the client, and that is exactly what the redirect does.

So now the microsite uses the same template as the home site in terms of CSS and key navigation, but it doesn't include all of the dynamic bits and fragments that were on the old front page. It includes two things:

  1. The information directly related to the extraordinary event
  2. Links off to the normal site

So now our original redirection goes from http://example.com to http://example.com/event and we scale the event part to our new demand. If it's truly extraordinary then you are better off doing it as static pages and having people make the modifications (even if updates are every 5 minutes, that is cost-wise a lot less than call centre staff). The point is simple: you are scaling the extraordinary using the cloud.

So, spotted the big point here? It's something that you can do with a traditional infrastructure and then make the shift to cloud for what cloud is really good at - handling spikes. You don't have to redesign your current site to scale dynamically; you just have to use a very simple policy and have a cloud solution that you can rapidly stand up in the event of a massive spike.

A couple of hints:

  1. Have the images for the spike ready to go, and monitor at the redirect level to automatically kick in the spike protector
  2. Have an automatic process to dump a holding page onto the spike protector telling people that more information is coming soon; they'll tend to refresh rather than go to the rest of the site

You don’t need the normal commercial licenses as you can do it via static uploads (the normal site can do its dynamic magic on your old infrastructure) or a temporary OSS solution.

I'm often confused as to why people try to scale to meet extraordinary demand on a normal architecture. People seem not to realise that most spikes aren't a result of your core business getting 500% more popular overnight; they're normally the result of a specific promotion or problem, and it's that specific area which needs scaling. If it's a promotion you need to scale the people hitting that promotion and then look at either scaling the payment piece, putting in place a temporary process or throttling the requests through that part of the process. If it's an issue then treat the site like a news site and statically publish updates.

So there you go: by using the power of a simple command - "redirect" - you can take advantage of cloud quickly and effectively, and if you never get the extraordinary event it doesn't cost you much, if anything.

So get on with the power of the redirect and link it to the power of the cloud, because that is when technical things are actually interesting: when they can simply be used to solve a problem cheaply that previously was too expensive to solve.


Friday, January 14, 2011

The business don't care about technology that works

I had a great conversation with someone today talking about a project where the business really doesn't care about the technology. Why don't they care?
Because they are getting the business outcome they want

That is the key point on why they don't care: the technology is providing them with what they want. They couldn't care less whether it's a new shiny implementation using REST or some clunky old solution using string, spit and a bloke called Dave with hammers.

All too often IT folks make the mistake of trying to develop an "operationally perfect" solution from a technical perspective, but one which takes longer to get there from a business perspective. The result is that it's perceived by the business as a failure, and all they hear from IT is a list of wonderful technologies which are the reason why they aren't getting their business outcome.

The same goes when the programme goes over budget due to the customisations of the package or the unexpected complexities of the cloud solution. The excuses are almost always technical, but all the business cares about is the outcome.

So next time you are looking at the architecture and thinking REST, WS, dynamically scalable et al, just ask yourself this simple question:
If I made it simpler and didn't aim for technical perfection would I deliver the business outcome quicker?

Because the business doesn't care how you do it... just that it is done.


Monday, January 03, 2011

I don't care about cloud because I don't care about tin

I'll start with an admission: I've worked with cloud providers for quite a few years now, and the reason is not because I'm excited about elastic scalability of compute and storage... it's exactly because I don't care about elastic scalability of compute and storage. I've said before that Tin Huggers are a major blocker to cloud adoption... but now I wonder if cloud itself is actually part of a broader problem...

Cloud is just tin, virtual tin. It doesn't actually have a point; it doesn't actually do anything...

Great cloud services such as Amazon AWS are great not because they provide a bunch of tin, but because they provide a set of services which enable the virtualisation of tin. You aren't buying tin, you're buying the service, but all the service provides is virtual tin.

SaaS, however, is different, because SaaS provides you with business capabilities; you are buying a set of business services and you are buying them "as is". Cloud, in one sense, is just a revision of current IT models where you are building your stuff on virtualised infrastructure. Sure, it's a bit more dynamic, but at the end of the day you are still building your own stuff; the only difference is that rather than being hosted in a data centre you never visit, on tin you've bought, it's hosted in a data centre you don't even know the location of, on tin you are renting.

So my point is that cloud is, in one sense, dull. It's the same reason I don't care about the telephone infrastructure: sure, I make phone calls and I'm glad it's all there, but it's the phone and the services I care about; the infrastructure can go hang. This doesn't mean cloud or the phone infrastructure aren't important building blocks for lots of things, and clearly SaaS builds on cloud, but equally clearly SaaS is the thing that (in the words of the OASIS SOA RM) delivers the real world effect, while cloud just forms part of the execution context.

Lots of the cloud marketing out there is really just old style IT with a bit of lip gloss applied by saying "cloud", but does that actually deliver a better service to the business? Sure, sometimes the dynamism is good, but sometimes you'd be better off just buying a SaaS service. People are using cloud interchangeably with SaaS to deliberately muddy the waters and pretend that by doing cloud they are in fact really doing SaaS and being more business centric.

Cloud is IT centric, SaaS is business centric.

And that is why I care about SaaS and don't care about cloud. I want to know what services the business can run, not how "dynamic" or "scalable" the tin is; I've heard those conversations all my career and they've always bored me. Software is scalable, tin just gets bigger (horizontally or vertically). Cloud is a diversion, sometimes a successful one, but in 80%+ of cases SaaS is the true revolution, and confusing it with virtual tin isn't helping move us forwards.

Clouds are boring because clouds do nothing; it's what you run on clouds that counts, and most of the time SaaS is better than old style custom build on a shinier set of tin.


Saturday, January 01, 2011

Cloud media backup store - not a patent

I'm writing this because it's an idea I've written about before, and I just want to make sure that if there is some idiot, or idiot company, out there who tries to patent it then I've already written it down in a way that is internationally accessible, so it could arguably be considered prior art.

Amazon's Kindle store is a bit like this already, in that the purchases you own are held by Amazon for syncing to multiple devices from their cloud, but this doesn't go far enough. Hence the idea... bloody obvious in my mind, but that hasn't stopped people applying for patents before.

When you buy something through either a physical or virtual channel it comes with a form of identifier (e.g. DVDs and CDs carry identifying information which can be used to uniquely identify their content independently of that content... the title of the CD, in other words). This identifier can then be looked up in a database to prove that you own the content, whether it came via a physical channel purchase or a virtual store.

Now for the cloud bit....

The cloud service holds the identifiers to which you have content access rights and provides you with future access to that information and the associated content from any device which you own or have access rights to. The cloud service provides a library of approved content from the original distributor and copyright owner. The access identifiers that are purchased (by physical or virtual channel) then act as access tokens to this library and enable the information to be copied to devices that you own and streamed to those that you use.

This service therefore provides a single copy of each file rather than a file per user and maintains the access to that information and content.
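The data model behind this is trivial; here's a minimal sketch, with all names invented and the content reduced to bytes in a map:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// One stored copy per piece of content; ownership is just a token mapping
public class MediaLibrary {
    private final Map<String, byte[]> contentById = new HashMap<String, byte[]>();
    private final Map<String, Set<String>> tokensByUser = new HashMap<String, Set<String>>();

    // Register a purchase, whether from a physical channel (the CD/DVD
    // identifier) or a virtual store (the receipt)
    public void registerPurchase(String user, String contentId) {
        Set<String> tokens = tokensByUser.get(user);
        if (tokens == null) {
            tokens = new HashSet<String>();
            tokensByUser.put(user, tokens);
        }
        tokens.add(contentId);
    }

    // Download or stream only if the user holds the access token
    public byte[] fetch(String user, String contentId) {
        Set<String> tokens = tokensByUser.get(user);
        if (tokens == null || !tokens.contains(contentId)) {
            throw new SecurityException("no access right for " + contentId);
        }
        return contentById.get(contentId); // the single shared copy
    }
}
```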


Or, to put it simply, it's like having iTunes remember what you have bought and allowing you to download it all again, with the added advantage of it recognising that you have already bought a specific CD in the real world and allowing you to download the digital version because you have registered your ownership of the CD via iTunes.

This is for me an obvious addition to MobileMe for Apple and the sort of thing that Amazon, Google or Microsoft could also do. It is also the sort of thing that the record industry should do as it would provide them with a degree of control while providing obvious benefits to consumers.

I don't want to patent this and I don't think software and architectural elements should be patentable... so this is prior art.
