Alex Bligh's blog

Nominet’s ill-thought-out proposals for .uk (since apparently expunged from their web site) appear to have been scrapped. The highlights:

Following our Board meeting yesterday, we are not proceeding with our original proposal on ‘direct.uk’ … It was clear from the feedback that there was not a consensus of support for the direct.uk proposals as presented, with some concerns cutting across different stakeholder groups.

Many of the defects they list were ones that I pointed out here and here – no particular credit claimed for that, as everyone to whom I have spoken made much the same points.

A little ominously, Nominet continue:

As a result, we are going to explore whether it is possible to present a revised proposal that meets the principles of increasing trust and security and maintaining the relevance of the .uk proposition in a changing landscape. The Board plans to review progress at their June meeting, where they would decide whether there is an alternative option that addresses the concerns raised in the consultation.  This would be subject to further consultation prior to any final decision being made.

Let’s hope that if a new proposal is presented, it is truly new rather than a warmed-up version of the last one. In particular, let’s hope that increasing trust in .uk is dealt with separately from opening direct registrations under .uk, as the two issues are entirely orthogonal. There were some small nuggets of goodness buried deep within the original proposal (support for DNSSEC, for instance), so I’ve no objection to a well-thought-out replacement proposal (or, better, a pair of proposals) being presented.

A couple of days ago I wrote about Nominet’s plans to allow registrations in .uk. Apparently it’s not just me that thinks the ideas are misguided. Every person I’ve spoken to who has any significant dealings with Nominet doesn’t think much of them. Even the UK chapter of the Internet Society (ISOC), hardly known for rocking the boat, agrees.

I was asked on a mailing list to make public some of the highlights of Nominet’s proposals, so here goes:

  • Permit direct registrations in .uk, but with no priority for those with existing .co.uk domain names;
  • Maintain (almost certainly wrongly) that registrations directly in .uk are more secure, more trustworthy and safer to deal with, so everyone with a .co.uk feels they need a .uk as their misguided customer base thinks they are somehow shifty;
  • Send out a paper form with every .uk registration to verify the registration address – remember certificates and reply forms, anyone?
  • Give trademark holders (even non-UK trademark holders) priority over those without trademarks who legitimately use names;
  • Auction domains for which there is competition;
  • Auction domain names that expire;
  • Only permit registrations through a subset of registrars; and
  • Virus-scan ‘domains’ (I think they mean web sites) and award those that pass a ‘trustmark’, apparently proving the site is OK.

In my view this is not just a very silly idea, but an attempt to adopt a ‘registry knows best’ model, with Nominet as policeman of what should and should not be trusted, and a huge boost for the international trademark lobby. If implemented, it will forever change the nature of .uk and Nominet (mostly in the latter case by making them look stupid, I suspect).

I wrote a longer blog article on this, and submitted an even longer full consultation response. But if you’ve got this far, and agree this is a stupid proposal, please fill in Nominet’s survey here, even if you just go to the end and click ‘disagree with everything’. You’ve got until close of business on Monday, I think.

Nominet announced a little while ago a consultation on allowing domains to be registered directly within .uk rather than in .co.uk. So, for instance, you could register example.uk rather than example.co.uk. In itself this is an interesting proposal worthy of consideration; I think the arguments for and against are pretty balanced. But Nominet has mixed it up with so much other stuff, in a rather misguided attempt to improve internet security, that this probably counts as one of their sillier ideas. In their current form, my view is that the proposals are seriously flawed.

My full consultation response can be found here. You can reply to the consultation online here (hats off to Nominet for making it easy to do). But hurry hurry hurry! It closes on Monday.

I’ve put a version of the executive summary of my response and my counterproposals below.


Summary

This consultation document is one of the least well-thought-out proposals I have yet read from Nominet. Whilst there are a number of problems with the detail of the proposal, there are two significant and overarching problems: conflation of purpose, and naivety as to the proposed mechanisms.

Conflation of purpose

The first problem is that it conflates two entirely separate issues:

  • The question of whether direct registrations should be capable of being made at the .uk level; and
  • The question of whether Nominet should encourage more ‘secure’ registrations (validated contact address details, virus checks, DNSSEC and so on), and if so under what circumstances and under what commercial terms.

Nowhere in the consultation document does Nominet adequately explain why registrants within the existing subdomains should not be able to avail themselves of the ‘high security’ registrations, and why it is thus in the interests of Nominet’s stakeholders to require such registrants to re-register another domain within .uk, at considerable cost to them. As such costs involve not just payments to Nominet and/or the registrar concerned, but also the far larger costs of re-branding, it seems perverse not to provide such ‘high security’ registrations wherever possible within .uk, including the existing subdomains. A cynic might suggest this was simply a revenue or empire-building exercise.

Equally, nowhere in the consultation document does Nominet adequately explain why the first-come, first-served, lightweight registration model which has served Nominet well since inception should not be available for direct registrations in .uk (assuming opening up .uk for third-party registrations is a good idea). Nominet proposes that .uk be a domain with enhanced checking of registration details (including the rather quaint idea of sending letters by post). Nominet has already tried this model with (e.g.) ltd.uk and plc.uk. Whilst I cannot find current information on Nominet’s web site, I believe these subdomains are less than 1% of the size of co.uk and considerably smaller than (say) org.uk.

The only purported link is the one set out at the head of the next section, which is in my opinion laughably naïve.

Naivety of mechanism

The only arguable link between the two issues set out in the section above is that consumers will somehow draw a link between the fact that the web site they visit or email they receive has the domain name ‘example.co.uk’ or ‘example.plc.uk’ and conclude that it is insecure (being registered as a third-level domain within an existing SLD), but will somehow also know that ‘example.bt.uk’, ‘example.pcl.uk’ (sic) or ‘exampleplc.uk’ are secure (being registered as second-level domains directly within .uk). This seems fantastically unlikely unless Nominet embarks on a worldwide education programme about its own domain registration structure.

Nominet appears to be around 15 years out of date in this area. Consumers increasingly do not recognise domain names at all, but rather use search engines. The domain name is becoming less and less relevant (despite Nominet’s research) as consumers are educated to ‘look for the green bar’ or ‘padlock’. Whilst SSL certification has many weaknesses as a proof of security, it is by no means as poor a solution as the one Nominet proposes to replace it with.

Recommendations

I make the following recommendations:

  1. Nominet should abandon its current proposals in their entirety.
  2. Nominet should disaggregate the issue of registrations within .uk and the issue of how to help build trust in .uk in general. Nominet should run a separate consultation for opening up .uk, as a simple open domain with the same rules as co.uk. There are plenty of arguments for and against this, but the current consultation confuses them with issues around consumer trust. Whilst consumer trust and so forth are important, they are orthogonal to this issue.
  3. Nominet should remember that a core constituency of its stakeholders is those who have registered domain names. If new registrations are introduced (permitting registration directly in .uk, for instance), Nominet should be sensitive to the fact that these registrants will feel compelled to re-register, if only to protect their intellectual property. Putting such pressure and expense on businesses to re-register is one thing (and a matter on which ICANN received much criticism in the new gTLD debate); pressurising them to re-register and re-brand by marketing their existing co.uk registration as somehow inferior (for instance as ‘less secure’, as proposed here) is beyond the pale. Any revised proposal for opening up .uk should avoid this.
  4. Nominet should recognise that there is no silver bullet (save perhaps one used for shooting oneself in the foot) for the consumer trust problem, and hence it will have to be approached incrementally.
  5. Nominet should be more imaginative and reacquaint itself with developments in technology and the domain marketplace. Nominet’s attempt to associate a particular aspect of consumer trust with a domain name is akin to reinventing the wheel, but this time with three sides. Rather, Nominet should be looking at how to work with existing technologies. For instance, if Nominet were really interested in providing enhanced security, it could issue a wildcard domain-validated SSL certificate with every registration; given Nominet already has the technology to comprehensively validate who has a domain name, such certificates could be issued cheaply or for free (and automatically). This might instantly make Nominet the largest certificate issuer in the world. If Nominet wanted to further validate users, it could issue EV certificates. And it could work with emerging technologies such as DANE to free users from the grip of the current overpriced SSL market (see the sketch below this list).
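
To make the DANE point concrete, here is a minimal sketch of what this could look like in practice. To be clear, this is purely illustrative and not anything Nominet has proposed: it assumes Python with the third-party ‘cryptography’ package, and ‘example.uk’ is a hypothetical name. DANE works by publishing a TLSA record, protected by DNSSEC, that pins a certificate or public key for a service.

# Illustrative sketch only: compute the data for a DANE TLSA '3 1 1' record,
# i.e. the SHA-256 digest of a certificate's SubjectPublicKeyInfo.
# Assumes the third-party 'cryptography' package; example.uk is hypothetical.
import hashlib
import ssl

from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# Fetch the site's current certificate (PEM) and extract its public key info.
pem = ssl.get_server_certificate(("example.uk", 443))
cert = x509.load_pem_x509_certificate(pem.encode())
spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)

# With DNSSEC in place, a record like this pins the key without a commercial CA:
print("_443._tcp.example.uk. IN TLSA 3 1 1", hashlib.sha256(spki).hexdigest())

The point is simply that the machinery for binding a validated registrant to a key already exists; none of it requires a new registry-run ‘trustmark’.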

I plan to make this the first of an occasional (for which read ‘when I can be bothered’) series on poor science journalism. Let’s look at this slow-news-day article from the BBC News website, entitled “Alcohol calories ‘too often ignored’”. All quotes are from that article. I’m going to presume there was some scientific research behind the story, though in reality I suspect it is a regurgitated press release for a hung-over second-of-January deadline.

It starts:

People watching their weight should pay closer attention to how much alcohol they drink

No argument yet, but then:

since it is second only to fat in terms of calorie content, say experts.

Hang on a second. Can it really be true that a drinker’s main source of calories is fat, with alcohol second? The recommended daily intake of fat is about 65g per day for an adult, as opposed to around 300g of carbohydrate and 50g of protein. As fat yields about 9 kcal/g, ethanol 7 kcal/g, and carbohydrates around 4 kcal/g, it’s pretty obvious a normal adult will in fact derive most of their calorific intake from carbohydrates. Perhaps they mean ‘second most calorific content per gram’? Well, that too would be misleading, as people neither eat raw fat nor drink raw alcohol. Weight for weight, most foods are going to contribute more calories per hundred grams than a drink, simply because most of your drink is water, which has no calorific content. 100g of doughnut is going to be between 250kcal and 350kcal. 100g of apple is going to be about 50kcal. 100g (i.e. approx 100ml) of red wine is going to be about 68kcal. And 100g of beer is going to be about 32kcal; clearly on this basis we should all be drinking beer rather than eating apples. Of course they could have written “Ethanol delivers more kilocalories per gram than any macronutrient other than fat”, but that would have revealed why the statement is largely specious.
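
For the arithmetic-minded, here is a quick sanity check of those drink figures (a rough sketch in Python; the compositions are illustrative approximations, not measured values):

# Rough sanity check of the kcal-per-100g figures above.
# The compositions below are illustrative approximations.
KCAL_PER_G = {"fat": 9, "ethanol": 7, "carbohydrate": 4, "protein": 4}

def kcal_per_100g(composition):
    # composition: grams of each macronutrient in 100g of the item;
    # the remainder (mostly water) contributes nothing.
    return sum(KCAL_PER_G[m] * grams for m, grams in composition.items())

# 100g (roughly 100ml) of 12% ABV wine: about 9.5g of ethanol
# (12ml x 0.789 g/ml), plus a little residual sugar.
print(kcal_per_100g({"ethanol": 9.5, "carbohydrate": 0.6}))  # ~69 kcal

# 100g of a typical 4% ABV beer: ~3.2g ethanol, ~2.5g carbohydrate.
print(kcal_per_100g({"ethanol": 3.2, "carbohydrate": 2.5}))  # ~32 kcal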

According to the World Cancer Research Fund, alcohol makes up nearly 10% of total calorie intake among drinkers.

This certainly can’t be true for every person who drinks, and I find it surprising even as an average. Without a definition of what a ‘drinker’ is, or knowing whether this is a mean or a median, the statement is meaningless.

Having a large glass of wine will cost you the same 178 calories as eating two chocolate digestive biscuits.

Firstly, neither the wine nor the biscuits cost you calories. Going for a run costs you calories, as you expend calories running. Eating and drinking gain you calories.

Secondly, if I believed this article, these chocolate biscuits are going to be very tiny: as “Protein and carbohydrates contain 4kcal/g and fibre 2kcal/g” (as it says later), the chocolate biscuits must weigh less than a gram – impressive dieting! Of course what they’ve done here is confuse Calories and kcal. A Calorie (normally written with a capital ‘C’, and abbreviated ‘Cal’) means a kilocalorie, i.e. 1,000 calories (small ‘c’, gram calories). Yes, it’s confusing (which isn’t the journalist’s fault), but using ‘kcal’ and ‘calorie’ within the same article is an artificial distinction that is going to lead the reader to think the author must mean something different by the two terms, with one a thousand times the other.
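
To spell out the ‘less than a gram’ arithmetic (a throwaway check, using the article’s own 4 kcal/g figure for carbohydrate):

# If '178 calories' really meant gram-calories, then at the article's own
# 4 kcal/g (i.e. 4,000 cal/g), the two biscuits together would weigh:
print(178 / 4000)  # ~0.045 g -- hence 'less than a gram'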

Eating or drinking too many calories on a regular basis can lead to weight gain. But unlike food, alcoholic drinks have very little or no nutritional value.

Now I’m very confused. How can we gain energy (and thus weight) from alcoholic drinks (which often contain sugar as well as alcohol), yet the drinks have ‘no nutritional value’? The nutritional value of food is the quantity and type of macronutrients (water, carbohydrate, fat and protein) and micronutrients (vitamins, minerals, and possibly some other bits and bobs) found within it. Clearly alcoholic drinks have nutritional value, as they contain water, sugar and ethanol, which (as the author has already pointed out) has a high calorific content. Let’s also ignore all the previous articles about antioxidants in wine (those would fall into the ‘bits and bobs’ category). Certainly it would be a bad plan to obtain the majority of your nutrition from alcoholic drinks, but I would suggest the main danger would be to your liver. To my knowledge, not many alcoholics die of malnutrition.

The ‘empty calories’ in drinks are often forgotten or ignored by dieters, says the WCRF.

I bet they didn’t say that, at least not without defining ‘empty calories’. Without the word ‘empty’, this sentence would make sense.

Kate Mendoza, head of health information at WCRF, said: “Recent reports have shown that people are unaware of calories in drinks and don’t include them when calculating their daily consumption.”

Ah, some sense at last. Did you notice that this, and another quote from Ms Mendoza which also makes sense, were the only parts of the article written by WCRF rather than the journalist? Unfortunately, access to the remainder of Ms Mendoza’s wisdom is prevented by the lack of a link to the original research publication (we wouldn’t want the public to be educated, after all), and the fact that WCRF’s web site is appallingly slow.

The scientific component of the story consists, in its entirety, of ‘people who drink alcoholic drinks regularly often forget that the drinks have a significant calorific value’ (drink enough and you’ll forget other things too). As this is no more counterintuitive than the defecatory habits of the Ursidae, the text must then end with some standard padding about safe drinking, with no relevance to this at all.

You would have thought the BBC could do better than this example of sloppy journalism, but all too often science journalism is pretty awful.

People wanting Apple Mail to support subscriptions might like to look at:

https://github.com/abligh/imapmboxfilter

(warning: beta quality).

In a future version, I hope to make the ‘omit’ list optionally and automatically contain the list of folders not subscribed to (i.e. you can use it to track subscriptions automatically rather than do it manually).

I’ve put my current work on the Apache vnc/tcp proxy (explanation here) on github, as people were (quite rightly) complaining that a tarball was not particularly helpful. I’ve also added initial support for the guacamole protocol, though this is in need of optimisation. What’s on github is actually a clone of self.disconnect’s repo of apache-websocket, with my stuff in the vncproxy directory. It’s rough around the edges at the moment (don’t expect makefiles, substantial documentation, etc.)

There is an occasional SEGV with guacamole (after 30 minutes of watching YouTube over it) which is, I think, due to something I am doing that is not thread-safe, though I have yet to work out what.

Comments welcome.

This blog post tells you how to add an emacs style for programming Apache httpd and its modules. This is one of those things that is difficult to google for, as you’ll end up finding emacs modes for editing httpd.conf, or various Java-based Apache projects.

Nice and simple. Add this to your ~/.emacs file:

(c-add-style "apache"
             '((inclass . ++)            ; '++' means twice c-basic-offset
               (indent-tabs-mode . nil)  ; indent with spaces, not tabs
               (defun-block-intro . ++)
               (statement-block-intro . ++)
               (substatement . ++)
               (brace-list-intro . ++)
               (statement-case-intro . ++)
               (inextern-lang . 0)))     ; no extra indent inside extern "C"

Then when you want to use it, in emacs do:

C-c . apache RET

Or if you like typing:

M-x c-set-style RET apache RET

I don’t claim to be an emacs expert, but I can confirm this works.

A while ago, following a line fault, I wrote about why BT TotalCare is a total waste of money. I’m sad to say that, following another line fault, the situation has not improved. This time they fixed the fault in what I suppose is a reasonable period of time – that, you would have thought, was the hard part. But their customer communication (which you would have thought was the easy bit) was dreadful.

Once again, here’s the promise from BT’s own web site on TotalCare (with my emphasis):

Have you considered how much it would cost your business if your Business Phone Lines went down even for a short time? In lost customers, lost revenue, customer dis-satisfaction? … BT will respond within 4 hours of receiving your fault report and if the fault is not cleared during this period, we will advise you of progress.

And from another BT site (again my emphasis):

We guarantee to resolve a “Service Failure” in line with the care/service level you have chosen. For Total Care, this means 24 Hours after you report the fault, unless you have requested specific appointment date. The Total Care working week is 7 Days a week, 52 weeks per year.

What actually happened is:

  • I reported the fault at 17:55 yesterday (and, to be clear, the woman who took the call was helpful).
  • BT’s web-based fault tracker never indicated any change until the fault was fixed, but suggested I ring BT for an update.
  • When I rang BT, I was effectively told off, and told (wrongly) that this was a 24 hour response service, not a 4 hour response service.
  • I received only one update from BT, and that was 16 hours and 50 minutes after the fault was reported, and was out of date.

A little digging suggests that whilst the 24 hour ‘guarantee’ is encoded in the contractual documentation (albeit with more holes in it than a colander), the four hour response ‘promise’ is nowhere to be found. So, potential buyers of BT TotalCare, beware:

  • The marketing material might say you get a 4 hour response time, but contractually this appears to mean nothing (corrections from BT welcome).
  • As far as I can tell on any recent fault I’ve had, nothing substantive has ever happened out of business hours, even when the fault is at the exchange.
  • Don’t think you will actually get any useful fault updates from BT at all. That’s not to say they won’t fix the line (they did, and within the 24 hours, unlike last time when they claimed they couldn’t work in the dark), just in their own good time. I mean it’s not as if you’re paying extra for this, is it? Oh. Yes you are.

If it wasn’t for the fact BT owns very nearly all the copper infrastructure in the UK, I’d change providers. As it is, I’m stuck with them.

I’ve started keeping diaries of BT faults (obsessional? me?), so anyone really interested in the gory details can click on ‘Page 2’ below.

Here is a really interesting article from GigaOM.

I’m going to quote two paragraphs:

I was a loyal (and repeat) Dell customer. Like clockwork, I would buy a new Dell desktop or laptop, mostly to keep up with Microsoft’s Windows OS. And I never really had a problem with Dell machines — they were solid and lasted forever. Except when Apple launched the Titanium Powerbook, I switched and never looked back. I think it was frustration with Windows more than Dell.

They are tied at the hip with Microsoft and its operating systems and as a result they cannot look beyond Microsoft. The fact is that both Dell and HP have offered consumers pretty much nothing in terms of innovation when it comes to PCs. Compare that with Apple and Samsung and you start to see that these two PC giants have been essentially twiddling their thumbs.

The lesson here is that tying your company’s future to a megalith in a way that makes it impossible to produce a truly differentiated product is a recipe for allowing competitors that don’t adopt this strategy to overtake you. PC hardware is commoditised and undifferentiated. Is a Dell laptop significantly different from a Lenovo or an Acer? No. It’s the software on it that makes the difference. And if your strategy is to produce a laptop with slightly different moulding that runs the same software as everyone else’s, then you are not going to be able to differentiate (at least not on product). That’s not to say producing computers that run Windows or are compatible with Windows software is dumb (after all, the Powerbook will run Windows too, under VMware Fusion or under Boot Camp) – in fact compatibility is probably a sensible strategy if there is an ecosystem to exploit. But making your product do no more than that means in practical terms it does no more than any of your competitors’.

Agree? If so, cloud folks, substitute ‘Windows’ with ‘EC2’, and ‘Windows compatible’ with ‘EC2 compatible’.

Last week, Flexiant posted a whitepaper I had written, proclaiming that private cloud was ultimately dead. Jeremiah Dooley (who works in the office of the CTO at VCE) posted a thoughtful riposte yesterday. Firstly, I’d like to thank Jeremiah for taking the time to read the paper before commenting (not everyone did that), and most of all for taking the trouble to reply. I said on Twitter I would answer, so here it is.

Apologies for the length, but some of the criticisms raised are down to the fact the original paper was condensed to make it readable by those with short attention spans, and that necessarily means some points got lost.

Purpose of Paper

Let’s put this one to rest first. Whilst (hard as it might be to believe given the consequent twitter storm) this wasn’t originally written by me as a marketing piece but as a ranty blog post, I should be clear that it was subsequently adapted by our marketing department to form one. Adapting in this case means making it less long-winded (see the length of this blog post for why this is necessary), easier to digest (which means taking some material out), and prettier. It is marketing collateral. Its audience is service providers, and specifically the people in service providers who are selling to their potential customers. That’s not the same thing as an academic paper or a paper written for a cloud conference.

So, is this just a sales pitch for Flexiant? Well, no – not least as it doesn’t once suggest anyone buys anything. We’ve found that whilst all service providers have got the message ‘cloud is hot’, many are ill-equipped to sell it, as they often don’t understand the issues and the objections that their customers raise. Their customers’ objections to cloud technology are similar in many cases to the objections we saw to virtualisation and to outsourcing; whilst some are valid, more often than not at least some are based more on fear, self-protection and specious argument than solid facts. We’re a European company and our customers are predominantly European, so perhaps that applies more here than in the US (though I’ve had plenty of US people agree with me on this point). So what the white paper is designed to do is to provide ammunition to service providers who need to make their case for service-provider-run clouds, rather than solutions that keep IT in the enterprise. I’d say that’s because we believe the long-term future of IT is in the service provider, not the organisation’s own datacenter; I explain why in the article, and not one of the reasons I give is that ‘Flexiant’s software is great’. I can understand why, given Flexiant sells service-provider-focused software, that might come over as self-serving, but when I load up VCE’s home page, the ‘title’ element says ‘Private Cloud Computing’, so perhaps we all have our biases.

Am I doing anything radically new here? Are my terms particularly controversial? I don’t think so. Here’s an article by Simon Wardley using yet another definition of private and public cloud. And here’s one saying private clouds are a transitional phase whereas public clouds are the long-term economic model. Perhaps ‘private clouds are ultimately dead’ is just a bit of plain speaking too far, but it means the same thing. I could look for other sources, but having sat through Simon’s (by his own admission often rather similar) presentations sufficient times without a single person arguing, I’m guessing his views are hardly heretical.

Pet Peeves

Jeremiah doesn’t like having to register to read papers on the web. You know what? Neither do I. In fact I’ll go further: I hate it. I hate it enough that 99% of the time it will stop me reading them. I also hate emails not written in plain text and wrapped at 76 characters, top-posted replies, and my iPhone for making it hard for me to avoid either. But I’m not the audience, and this is marketing collateral. Our marketing guys like measuring the effectiveness and distribution of these materials, and asking for registration is hardly unknown – see, for instance, one of VCE’s shareholders here. Given the number of cartoon characters we now appear to know have both memorable phone numbers and a surprising interest in cloud, and given we emailed it to anyone who wanted it, there would appear not to be that high a bar for those who don’t want to give personal details.

And as for PDFs opening full screen, I’m almost sure that’s a product of the web browser. On Safari and Chrome the default (for all PDFs) is to load them in the browser. This infuriates me, so I turned it off. All PDF downloads (including Flexiant’s) now appear as downloads.

Anyway, this reply is in badly formatted HTML on my personal website, unadulterated by marketing, and with no registration required, so I am thus hoping it is peeve free.

Definitions

Jeremiah notes that I didn’t use the NIST definitions. I have a few nits to pick with the NIST definitions (in particular the use of the phrase ‘general public’ as opposed to ‘customers in general’), but none that are particularly serious. So let’s have a look at the differences between NIST and what I used, so we can ascertain whether this makes any practical difference:

NIST says:

Private cloud. The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.

Community cloud. The cloud infrastructure is provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be owned, managed, and operated by one or more of the organizations in the community, a third party, or some combination of them, and it may exist on or off premises.

Public cloud. The cloud infrastructure is provisioned for open use by the general public. It may be owned, managed, and operated by a business, academic, or government organization, or some combination of them. It exists on the premises of the cloud provider.

Hybrid cloud. The cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).

I wrote:

Public cloud – a bank of cloud resources managed by a service provider, located in its datacenter and shared between customers.

Private cloud – a bank of cloud resources managed by an organisation, and used only by that organisation. It is irrelevant whether it is ‘on premises’ or not. This isn’t a distinction of who owns what building, but of who manages what. Just as the organisation might not own its head office building, the physical datacenter might belong to someone else, but the management and operation is done by the organisation concerned.

Hybrid cloud – a particularly ill-defined term that has many definitions. In general it means something on the spectrum between public and private clouds.

So, there are three main differences:

  1. I don’t mention community clouds. I think they are special cases of public clouds, just with a limited ‘public’. Given how little attention these get these days, I think this is a forgivable omission, and Jeremiah doesn’t pick me up on this one.
  2. Our definitions of private cloud are different. I say a private cloud is managed by the organisation to whom the resources are dedicated. NIST says it can be managed by anyone. More on this one immediately below.
  3. I say ‘hybrid cloud’ is an ill-defined term. I stand by that, and more on this under ‘Hybrid Clouds’ below.

Does who manages a private cloud matter?

So let’s look at point 2. My core argument re private clouds is that, save in the case of the largest of clouds, the economics of public clouds are likely to be better. Those economics are likely in most cases to outweigh other factors, particularly as many of those other factors are less relevant than they might first appear, or are spurious. The reason I believe the economics of public clouds are superior to those of private clouds is scale, and scale comes in the main from multi-tenancy. I could perhaps have said ‘big clouds are better than small clouds’.

So, if by ‘manage’, we are looking solely at the economics of the hardware, I agree it doesn’t much matter who manages it. I agree that a private cloud doesn’t gain some sort of magic technical efficiency because the people with console access are employed by someone other than the organisation using it. However, management also includes the organisational resources to set the cloud up and to provide ongoing support. Those aren’t skills every organisation has, and there are economies of scale to be found here too. But I’d argue in the absence of any form of multi-tenancy, those economies are in general small.

The distinction I was making in my definitions was that it doesn’t matter whether the private cloud is on-premise or not. Put simply, putting your stuff in someone else’s datacenter does not make it either a cloud, or necessarily more efficient. Or (with the intended audience in mind) ‘don’t go selling datacenter space to people and pretend that you’re selling them cloud’. Something more than the location of the equipment needs to change, whether that is scale (through multi-tenancy) or something else (perhaps management).

I agree that my definition has a ‘blackspot’ between my definitions of public and private cloud, which is that a cloud dedicated to one organisation where no resources are shared (i.e. which is entirely single-tenant) but which is managed by another organisation doesn’t fit into any category; such a cloud falls between two stools. NIST would call this a private cloud. Jeremiah says my definition is ‘self-serving’. I agree that there are some limited economies to be gleaned from having a single-tenant cloud managed by a third party, and I agree that I ignored this case, but I don’t see how this makes the argument self-serving. As Jeremiah says, ‘How an enterprise pays for or maintains ownership of those solutions doesn’t fundamentally change what is going on’. To the extent that’s correct, a private cloud managed in-house and a private cloud managed by a service provider should be similar, so not a lot would appear to turn upon the fact I didn’t deal with the latter case.

That said, given not a lot turns on it, and given as an industry we spend too much time talking about definitions, it would have been preferable to use the NIST definition of private cloud and avoid the debate. If I revise the paper I shall use that instead.

Hybrid clouds

Jeremiah says:

The dismissal of the Hybrid classification is expected, but disappointing.  Being able to manage resources that have been purchased from public clouds alongside infrastructure that is dedicated is important.  Most of the hypervisor providers have been, or are moving towards tooling that can handle this kind of use case.  If you are a round peg, it’s easy to see the Hybrid use case as a square hole and move on, but customers and service providers are increasingly seeing a need for this.

So here we disagree. Yes, I do slate the classification of hybrid cloud, in the sense that I point out it is an ill-defined term. I don’t, however, slate hybrid clouds in general. In fact I say there are three models, two of which have their uses.

Re the definition, NIST says (ignoring community clouds) that hybrid cloud is a composition of private and public cloud infrastructures that remain unique entities but are bound together. I say the term is used to mean something on the spectrum between public and private cloud, and then later (starting on page 7) decompose this into three models, the first two of which fit directly into the NIST definition (a composition of public and private clouds, with the composition being done in different ways), and the third of which describes a single cloud where some resources are dedicated and some are shared. Whatever the NIST definition says, the term is used in this looser sense (meaning a hybrid between a public and private cloud). I agree it’s confusing, and I make that point myself.

So, why do I draw a distinction between the first two hybrid cloud models? Simply because one makes sense, and the other is (in my opinion) likely to be a nonsense.

I have a suspicion that Jeremiah might actually agree with me here, as his statement ‘In the next paragraph the phrase “cloud-bursting” was used, and so I skipped the entire rest of the Hybrid Cloud section in protest’ rather suggests he thinks as little of it as I do. However, like it or not, we still hear people talking about it, and it thus needs to be addressed. I suspect skipping the rest of the section where I talk about the more useful types of hybrid cloud might explain why Jeremiah thinks I’m slating the whole thing.

So, what are these two models? In the first model, a given service component uses both private and public cloud simultaneously (at least under some circumstances) – for instance the service might ‘burst’ to use the public cloud when running over a baseload capacity. In the second model, some services or service components are on a private cloud, and some on a public cloud.

Let’s look at that diagrammatically, considering two services A and B, and colour representing whether the workload is performed on a public or a private cloud.

[Diagram: Hybrid Cloud Models]

The first model’s problems are in essence that the very things that (by assumption) prevent a service going entirely on public cloud make it unsuitable to go on both public and private cloud. You also have data synchronisation issues if (like most compute tasks) each compute node cannot carry out its tasks completely independently. Or, to put it another way, introducing tens or hundreds of milliseconds of latency between some parts of an infrastructure and others may cause it to perform badly. As far as I’m concerned, apart from a few limited applications, this model – and cloud bursting in particular – is smoke and mirrors. Perhaps someone will pop up and prove me wrong on this, with an example of a significant enterprise application where cloud bursting using a hybrid cloud works better than putting the lot on public cloud. For now, I think it’s all hand-waving.

The second model doesn’t suffer from these problems. Some services or service components go on public cloud, some on private cloud. There’s no dynamic placement lunacy. I agree that here common management is useful. But the model itself isn’t generating additional returns beyond those you’re getting from putting one set of services on public cloud and another on private cloud. And you’ve got a few extra challenges. For instance, you may have increased latency between the public and private elements. Assuming the two halves are closely connected, you’ve imported the security concerns of public cloud, and the consequent need to take additional precautions, into the private cloud bit too. So whilst to an extent you have the best of both worlds, you can also have the worst. It’s not a panacea. What I don’t see is that it produces substantial advantages beyond those of its constituent components. I do, however, in the concluding paragraph to this section (the title of which has sadly gone awry), suggest this is a useful approach for enterprises to take, along with the third model (virtual private clouds).

On Commodity Clouds

Jeremiah says:

On the public cloud side, I would wager that the majority of the customers referenced in the GiagOM research regarding pricing being a driver were talking about AWS, not cloud services hosted by traditional service providers.  Using them as the bar, especially on cost, isn’t very useful.

I agree with this. However, that was by way of an interesting introductory statistic rather than the basis for my entire argument. I am (and Flexiant is) a strong believer in the ability of service providers to differentiate on things other than price. Service and service level is one such area ripe for differentiation, as I mention on page 10 – have you ever tried speaking to AWS on the phone? If I thought price was the only important factor, I’d conclude that only service providers approximating the size of AWS would have any significant market share (i.e. that there would be a very high concentration ratio, and Flexiant would have very few customers). However, that is not what has happened in other service provider markets, and not what I believe will happen here.

Currently, as an industry we suffer from a problem where cloud products are poorly defined, not portable, and not fungible. Put simply, comparing them is an apples-and-oranges exercise. Lack of substitutability means we have poor price competition, which means excess profit for certain suppliers. We have one licensee with a cloud much smaller than AWS that prices its product significantly more cheaply, sells a commodity product, and still makes money, and I know of at least one other provider (not using our software) who does similarly. Thus I’m pretty sure AWS operates at a very healthy margin.

So, given that I think price isn’t everything, why am I banging on about economic drivers? Two reasons: firstly, the law of diminishing returns, and secondly, because I am talking about the long term, not the market situation now. On the first point, most enterprises are sub-scale. They will get very significant economic advantages by deploying their technology on a larger platform than the one they themselves need. There are exceptions – for instance enterprises with IT requirements the size of a medium-sized service provider are likely to be able to attain a similar cost profile to that sort of service provider. But the larger the service provider gets, the lower the marginal economies of scale. I am betting that a service provider half the size of AWS, going to the same hardware vendor, can purchase hardware at a cost within a fraction of a percentage point of what AWS pays. And on the second point, the current lack of comparability between cloud products will disappear as the market and the products within it are commoditised. That doesn’t mean every product will become the same, rather that each will become well defined and the buyer will be better able to select products that are similar or substitute effectively. And what that means is that where two products have similar characteristics, price will become an increasingly important factor.

Let’s pick a real-world example from another industry segment: internet access pricing. When I started in that business in the early nineties, prices and products varied hugely. You could buy purportedly the same service for one price, or elsewhere for one tenth of that price. Knowing which to buy was difficult. Pricing was incomprehensible and illogical. Margins varied hugely (though I seem to remember most of them were negative). Now, there are still different types of internet access product, and those have different prices. But the price of a particular type is reasonably consistent across all suppliers.

We’re still in that first stage for cloud. I’ve heard several organisations say that right now they are building private clouds because ‘they can do it cheaper than Amazon’, even for comparatively small clouds. What they mean by this is that they can build it more cheaply than Amazon’s current output pricing. I expect that for commodity cloud (i.e. the market AWS is aimed at), margin compression from competition will in time put an end to that for clouds of such small scale. It’s at this stage that the economic drivers will kick in. And the point about price comparability is equally applicable to the choice between similar products or solutions provided on public or private cloud infrastructures.

Let’s consider for a moment the alternative hypothesis: cloud will grow hugely, but despite this there will never be effective competition. This sounds desperately unlikely.

Public Cloud Drivers

Jeremiah says:

As an aside, I don’t agree with including “multi-tenancy” and “commodity” in a list of reasons why customers should choose a public cloud.  Both are things that customers will have to swallow hard and deal with in order to take advantage of the larger value, but I don’t think either is a selling point to the customer.

I agree. That’s why I didn’t list them as demand-side drivers. They are supply-side drivers (see the top of page 5), in that the supplier wants them because they reduce the cost of providing the services and thereby give statistical gain. The customer may (or may not) benefit from consequently cheaper pricing.

Private Cloud Drivers

I’m less clear on the basis of Jeremiah’s objections here. Let’s take two things in particular.

Jeremiah writes:

There are TONS of valid reasons why the vast majority of enterprises of all sizes continue to maintain their own infrastructure, or pay a provider to maintain it on their behalf, and almost none of them include elasticity or utility billing.  I’d argue that very, very, few organizations, no matter what the size, do actual utility billing internally.

I’m not sure I understand the point here. I would have thought the point that the enterprise for many applications wants both elasticity (ability to flex resources) and utility billing (i.e. billing on a usage related OpEx basis rather than a CapEx basis) is relatively uncontroversial. Yes, I agree that they do not in general get this if they maintain their own infrastructure or pay someone to manage it on their behalf; that’s rather my point. The economic drivers that push people toward public cloud do not apply to the same extent to private cloud (whoever maintains it). Frankly I would have thought that was pretty uncontroversial. This does not mean there are not other factors that might militate in favour of private cloud solutions. But what it does mean is that the magic ‘cloud’ word does not deliver the same benefits regardless of deployment model.

And he continues:

Here’s a hint: it’s a completely different product, so there is different value realized and different costs involved.  Comparing a company that wants a dedicated infrastructure managed by CSC to a customer who wants to pull VMs from a public cloud provider and migrate their enterprise apps to that model is silly.  They don’t have the same drivers, they don’t have the same expectation of cost, they don’t want the same value.

I’m beginning to think part of the problem here is definitional. My comparison is not between AWS and CSC. It’s between the customer that wants a dedicated infrastructure (whether managed by CSC or the customer, that’s a private cloud) and the customer that wants a managed infrastructure provided by CSC which is not dedicated to one customer but rather to all of CSC’s customers, i.e. is multi-tenant. In my book the latter is public cloud. Under NIST’s terminology I think it is too, as it certainly doesn’t fit within the NIST private, hybrid or community cloud definitions.

Addressing Buyer Objections

Again, I think most of these are misunderstandings of what I wrote (or perhaps my failure to express them adequately).

He writes:

Objection 1 – Public cloud has inadequate SLAs – (paraphrased) “Sure they do, and even if they don’t you don’t really need an SLA anyway, but you need a terms of services contract.”  … VCE customers have, on average, 0.5 infrastructure incidents a year, leading to 83X better availability according to IDC, reducing productivity losses by more than $9,000/yr per 100 users.  There’s an implicit level of accountability with the internal IT teams responsible for those metrics (um, continued employment), but if I want that same level of accountability from AWS (2.0 incidents/year, 72X better availability) I’m asking the wrong questions?  Red herring at best.

Well, I’m not in a position to argue with IDC’s figures, so let’s take them at face value. If so, using VCE gives 83 times better availability than AWS (which is built on commodity hardware). No doubt it costs rather a lot more too. What we learn is that expensive hardware with built in redundancy has higher availability than cheap hardware that doesn’t. No surprises there.

But that’s a difference in technology, not a difference in deployment model. If VCE’s hardware were deployed in a public cloud, would its availability suddenly drop to the same as AWS? I’m guessing not.

What I argue is not that public clouds all have fantastic SLAs; rather, that you get what you pay for. If you want an enterprise-grade SLA (for some value of ‘enterprise-grade’), you aren’t going to get it from AWS. Perhaps you need to go to a cloud built using VCE’s hardware, or one with a different configuration. Undoubtedly, it will have a different price point. However, the fact that commodity public clouds often do not have such SLAs does not mean that being public prevents a cloud from having a meaningful SLA. And an internally managed private cloud often has no SLA at all.

Jeremiah writes:

Objection 2 – Public cloud has inadequate security – (paraphrased) “Most problems here are actually your fault, and anything that IS an issue with public clouds is also an issue with private clouds, except public cloud providers are smarter than you.” Well. OK then. Once again, the shitty definition of “private cloud” used plays into the failure of this statement to reflect reality, but this isn’t an actual rebuttal of the objection, it’s just pointing the finger somewhere else.

Let’s quote in full what I actually said, as it’s shorter than Jeremiah’s comment.

Most security problems are not down to technical failings, but are instead due to poor organisational practice. For example, compare the number of security breaches originating in software bugs to those originating from configuration errors. Whilst cloud does present security challenges, many of these are common to private clouds. Service providers have in-house cloud-focused security expertise, whereas enterprises in general do not.

I’ll leave it to the reader to determine whether Jeremiah’s paraphrasing is a fair summary. As I say, cloud does present security challenges, and some (but not all) are common to all deployment models. Approaching private cloud as if it presented no security challenges would be foolhardy.

He writes:

Objection 3 – For regulatory reasons we cannot use public cloud – (paraphrased) “You don’t really know what those regulations mean.  Hybrid cloud may help here, just ignore that we ragged on it in the previous objection.  You should go get a deeper understanding of the fundamental regulatory oversight your company is in.”

Again, I don’t say what Jeremiah seems to think I said. Firstly, it was only one model of hybrid cloud I slated (though as Jeremiah skipped the section, he missed that). Secondly, I do not say the company doesn’t know what the regulations mean; I say “it is necessary for service providers to get a deep understanding of what the regulatory restrictions actually are, rather than accept the statement that regulation bans them at face value”. Knowing your customer’s industry, and the parameters within which they operate, is surely important.

Conclusion

I pretty much stand by what I said in the original white paper. In retrospect it would have been better to use the NIST definitions straight, perhaps at the expense of some more words, but I don’t think that changes the thrust of the argument. Given this reply is already longer than the original whitepaper, and given its intended audience, there’s a limit to the amount of information I could reasonably put therein. However, thanks to Jeremiah for helping draw out these points.