Thursday, October 9, 2008

Good service = profitable customers

John Seddon has written some wonderful stuff on the application of Lean principles to service organisations. Of particular note is the link between efficient processes, customer experience and operating cost. It's a "cake and eat it" situation: efficient processes make for a good customer experience and low operational costs. Good service also contributes to top-line performance: in other words, good service makes for profitable customers.

I had a perfect counter-example with my ISP this week. I'm with Virgin, who are generally OK as long as the system is running. Email has been intermittent recently, and went down again 2 days ago. I checked the online status page - which informed me everything was OK. I called the freephone status line, which told the same story. My internet access was up, I could still access gmail, I just couldn't get my Virgin mail. So, I thought I'd let them know they had a problem. I called the reporting line, navigated the IVR forest, eventually arriving at the usual musical entertainment. After 5 minutes I gave that up and decided to look for electronic submission instead. Sure enough, there's a form to submit, the contents of which are mind-numbing. Nevertheless, I persevered.

I got the usual immediate, automated response telling me they'd look into my query. 2 days later I got the following message:

Thanks for getting in touch with the Virgin Media Support team.

We're sorry that you are having problems with your e-mails. To help with this we need to get your issue resolved by our colleagues at Virgin.net

In order for your support query to be dealt with efficiently, could you please click on the following link:

http://www.virgin.net/customers/contactus/

This will ensure that the correct team will receive your form.

The link points to another page from which one may submit a query (quite how virgin.net and Virgin Media differ I have no idea - and to be frank, I don't really care). So it took 2 days to register my complaint, generate a unique ID in their tracking system, and invite me to re-submit my issue. Why? I can't say for sure, but I'd stake the combined value of the UK banking system (or the loose change in my pocket, whichever is greater) on the following:
  • Virgin's customer service team have targets for responding to customer queries (the website states 48 hours)
  • My query was edging towards the target, so someone (or something) decided a response was needed.
  • So they sent me another stock email that didn't solve my problem, but hey, it met the target - so that's good, right?
Well, no. It's not, actually. At the start of this debacle I was mildly inconvenienced by the service outage. Not a major problem. But now I'm really irritated. Not only was the submission form ridiculously over-complicated, but I've also had 2 pointless, valueless emails whose only message is "we haven't looked at your mail, but please feel free to re-submit".

In other words: "this is a service call. There's no money to be made from service, so we won't really give it much priority".

Of course, had I wanted to upgrade my package, I could have called any of the numbers emblazoned on the web site and spoken to someone instantly (I tried, just to make sure). What Virgin don't seem to get - and they're by no means in a minority - is the influence of service on the customer relationship. Sales are key: cold calling, outbound marketing, lead generation systems, the list goes on. Service? That's a necessary inconvenience.

How wrong they are. Service quality is a - maybe even the - key factor in a customer's decision to stay with or terminate a relationship. Companies would save far more money by keeping their existing customers than by spending vast resources attracting new ones. I was reasonably happy with Virgin before this debacle. I'm now researching alternative ISPs.

Update 3/2/09: I'm no longer with Virgin. I found another ISP who responded quickly and personally to my initial inquiries and were knowledgeable and professional. They managed the swap without fuss or problem and have been similarly responsive with a couple of service-related queries since. So good in fact, I recommended them to a friend who has similarly just moved there - from Virgin. Enough said. What's more, they're more expensive than Virgin were - considerably more than the reduced rate the Virgin adviser promised me if I stayed put.

(And final joy of joys: BBC iPlayer now works flawlessly. Wall to wall Top Gear!)

Saturday, September 6, 2008

Erlang

Just bought Joe Armstrong's book on Erlang and have started working through it. Erlang's an interesting language: although it's been around for years, it has recently seen a sudden rise in popularity. Why the sudden interest? Concurrent programming. Moore's law no longer translates into faster single-core processors, and multi-core/multi-processor machines are seen as the way forward. For those used to programming in traditional imperative languages (C, C#, Java, ...), multi-threading has always been difficult and error-prone. Erlang was conceived from the beginning to support hundreds, thousands, even millions of concurrent processes, communicating with each other by passing messages. Armstrong contends this makes concurrency much, much simpler than trying to force it on top of the inherently sequential, shared-memory model underpinning imperative languages.
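To give a flavour of the model (my own toy sketch, not an example from the book): one process spawns another, sends it a message, and waits for the reply.

    -module(ping).
    -export([start/0, loop/0]).

    %% Spawn a second process, send it a message, and wait for the reply.
    start() ->
        Pid = spawn(ping, loop, []),
        Pid ! {self(), hello},
        receive
            {Pid, Reply} -> io:format("Got ~p~n", [Reply])
        end.

    %% The spawned process waits for messages and replies to the sender.
    loop() ->
        receive
            {From, hello} ->
                From ! {self(), world},
                loop();
            stop ->
                ok
        end.

There's no shared memory anywhere here: the only way the two processes communicate is by sending each other messages, which is what makes it plausible to run thousands of them at once.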

Erlang is a functional language, which, among other things, means it doesn't support mutable state. 'Variables' aren't; once bound to a value, a variable cannot be re-bound to a new one. So variables would be more accurately described as 'immutable single-assignment references', although that's admittedly a bit less catchy.
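A quick session in the Erlang shell makes the point (my own illustration):

    1> X = 42.
    42
    2> X = 43.
    ** exception error: no match of right hand side value 43
    3> Y = X + 1.
    43

The second line isn't an assignment failing; it's a pattern match failing, because X is already bound to 42 and 43 doesn't match it.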

As someone schooled in mutable state since Amstrad CPC464 BASIC, that's a violent, disruptive change to the mental model. In fact, it's a disruptive change for anyone used to observing the world around them as entities with properties that change over time: bank account balances that vary from day to day, people whose age increases every birthday.

It's a practical problem for functional languages too, since almost any useful program needs to persist or change the state of the world around it. (That's why Haskell has its monad system.)

Erlang supports programs that need to store, update and use mutable state: it has its own database management system (Mnesia) which - since it's written in Erlang - is fully distributed, concurrent and fault-tolerant. Oh, and it doesn't need an object-relational mapping layer either, since it's built to work natively with Erlang's type system.
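As a rough sketch of what that looks like (my own invented module and record, not an example from the book or the Mnesia documentation - treat the details as approximate):

    -module(account_store).
    -export([init/0, deposit/2, balance/1]).

    -record(account, {id, balance = 0}).

    %% Create an in-memory Mnesia table whose columns come straight from the record.
    init() ->
        mnesia:start(),
        mnesia:create_table(account,
                            [{attributes, record_info(fields, account)}]).

    %% Reads and writes happen inside transactions; Mnesia handles the concurrency.
    deposit(Id, Amount) ->
        F = fun() ->
                Old = case mnesia:read({account, Id}) of
                          [A] -> A;
                          []  -> #account{id = Id}
                      end,
                mnesia:write(Old#account{balance = Old#account.balance + Amount})
            end,
        mnesia:transaction(F).

    balance(Id) ->
        {atomic, Result} = mnesia:transaction(fun() -> mnesia:read({account, Id}) end),
        case Result of
            [#account{balance = B}] -> B;
            []                      -> 0
        end.

The records stored in the table are plain Erlang terms, which is the point about not needing a mapping layer: what goes in is exactly what comes out.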

I'm looking forward to getting into Erlang. The concurrent, message passing model is very appealing conceptually and offers lots of possibilities in the new multi-core world. But more than anything else in the book, I'm looking forward to getting my head around marrying the functional model with mutable state.

Friday, August 22, 2008

Textual Input / Graphical Output - the best of both worlds?

Textual vs. graphical representation was a recurring theme at this year's Code Generation Conference. Thankfully, we seem to have moved beyond the “models are graphical, code is textual” misconception. The discussions were more focused on the relative strengths and weaknesses of each.

Arno Haase and Sven Efftinge categorised it really nicely: graphics are best for visualisation, text is best for editing. There's been some work to integrate the two; for example, it's now possible in Eclipse to generate both graphical and textual editors that act on the same underlying model simultaneously. Markus Völter has also demonstrated generating graphical visualisations automatically from models using a couple of auto-generation tools (Graphviz and Prefuse).

Outside of software development tools, this is nothing new: AutoCAD has for years allowed designers to create drawings with textual commands. Whilst users can draw directly on the canvas, they can also use a textual DSL to enter commands - such as lineto(20,100) - with the results rendered graphically.

It would be good to think software tooling will catch on to these possibilities - and it looks like openArchitectureWare may be in the vanguard.

WebDSL and the missing abstractions

“Drinking from the firehose” must be on everyone's buzzword-bingo all-time greats list. Nevertheless, it's a very apt description of how I felt in Eelco Visser's talk on WebDSL at the Code Generation 2008 conference.

The core subject matter has been occupying my thoughts over the last few months: there are myriad lightweight web frameworks around now, all of which offer easy definition of the core domain objects, and most of which will generate a standard CRUD GUI. All good stuff. But move beyond the auto-generated GUI into something custom and you're on your own. It's down to hacking some form of web template language (JSP/RHTML/Django templates/...) and patching up the link to the domain model through controllers.

I'd been thinking there must be a better, higher level set of abstractions for describing the UI – which could in turn be mapped onto the underlying technology. WebDSL addresses exactly that problem. At least I think it does. There was so much good stuff in Eelco's talk that I couldn't take it all in. I'll need to dig deeper – but right now it looks promising.

Friday, June 27, 2008

Editing Generated code - an anti-pattern?

In the 1970s, Ivor Tiefenbrun revolutionised the hi-fi world. Received wisdom held that, since the loudspeakers produced the sound, they were the most important part of the system. So, for a given budget, the largest portion should be assigned to the speakers.

Tiefenbrun disagreed. He believed the most important part was the input. Since the sound originated from the record (vinyl in those days), the most important component was therefore the turntable. So he invented the Linn Sondek, revolutionising the industry with a product that is still for sale today.

Tiefenbrun's lesson came to mind at the Code Generation 2008 conference in Cambridge – stimulating content in a fantastic setting.

During a discussion in Anneke Kleppe's session, a delegate asked about good practice for editing / modifying / augmenting generated code. The usual solutions came up (Generation Gap pattern, protected blocks) but for me the answer is clear: any kind of modification to the generated code is bad. It's an anti-pattern, there only because of deficiencies in the input and/or lack of customisation in the generator. It's often necessary, in many cases simply because the input language doesn't allow the full model to be expressed (this is true for most mainstream UML tools). But we should be striving to fix the real problem – insufficient or incorrect input – instead of trying to fix the symptom. We need to apply the 'Tiefenbrun principle'.

Monday, January 28, 2008

Agile and the pragmatists

I listened to the Naked Agilists' latest podcast the other day.

Brian Marick's pitch in particular sparked my interest. He compares agile adoption against Geoffrey Moore's technology adoption curve and observes a discrepancy; namely the seeming dearth of pragmatists. As I listened, a fundamental question immediately sprang to mind:

- Do we really have a case where the standard curve doesn't fit, or
- Are we looking at the data incorrectly?

Moore's curve is of course statistical, and hence variation is to be expected. Nevertheless, I can't help wondering if there's something else going on.

Software projects can be viewed in terms of 4 dimensions: cost, time, quality and function (as described by, for example, Kent Beck in "Extreme Programming"). The software industry can also be split roughly into IT and Product Development (PD) - the difference being that IT delivers solutions for use within a business, while product development builds products that are sold to the market.

I've worked in and/or experienced a variety of both IT and PD companies. Despite both being ostensibly focused on software delivery, there are some notable differences. In particular, there seems to be a different bias across the 4 project dimensions:

- IT seems to focus more on cost and schedule;
- PD tends to focus more on function and quality.

I've some theories about why that might be, but I'll save them for later - they're not relevant here. The interesting question is whether this difference in any way explains Brian's observation.

Here are my thoughts:
  • Agile primarily manages delivery of working software --> focuses on function & quality over cost & schedule
  • IT projects are often implementations of purchased products
  • The majority of Agile use is associated with building systems rather than implementing products
  • IT is generally more conservative than PD
Therefore:

Agile's lack of penetration into the pragmatist market is a consequence of the increased IT representation in that segment.

Some observations arising from that assertion:
  • What would the curves look like if we separated IT & PD? My gut feeling is each would more or less follow Moore's model. Agile probably has crossed the chasm for PD; and the size of the segment is probably in line with Moore's curve. Agile in IT is however much earlier in the lifecycle, and crucially hasn't crossed the chasm yet.
  • For agile to cross the IT chasm, two things are required:
    1. There needs to be much more support for agile product implementation to complement that for agile system building
    2. It needs to be presented in a way that resonates with people whose primary focus is cost and schedule over function and quality. That means IT project/programme managers brought up on a diet of MS Project and interpreting PRINCE2 as a waterfall process.
I'm aware the assertions above are based on a very narrow - and therefore statistically insufficient - sample. I know, for example, of at least two IT organisations locally that have embraced agile principles. Thoughts and comments welcome.