Tag Archives: software development

Agile London at Thomas Cook

Agile London – worth attending if you work in software development

This evening I went to the latest Agile London event, hosted at Thomas Cook, at a supremely convenient location about a quarter of a mile from the Endava office.

Our host for the evening was Jesus Fernandez, the Development Manager at Thomas Cook. In a concise introduction he described how Thomas Cook has been consolidating pretty much everything – from its management team to the brands it was selling, to its technology platforms.

Thomas Cook is an £8bn ($13bn) public company which has recently gone through a Digital Transformation programme.

AgileLondon at McKinsey Labs

This week I went to AgileLondon which was hosted at McKinsey. It was a really interesting MeetUp-style event with a format I’ve not seen before.

There were seven presentations and we all voted for two of them after a short elevator pitch from the presenters on why their presentation was worthy of being included. The other five were ‘eliminated’ and the audience provided a topic for those presenters to work on while the two were being presented.


Payments International 2015 – Day 3 report #PayInt15

Today was the final day of the Payments International 2015 conference. Here are my notes. Again I apologise for any brevity, grammatical abominations and spelling errors – this post is a case of publishing speed versus comprehensiveness.

Keynote

Smart companies and dumb companies – according to Mark Stevenson

Mark Stevenson gave the keynote speech. Mark is clearly a Marmite presenter – people either like or dislike him. Personally I liked his approach, and I started following him on Twitter (@optimistontour) during the session.

His keynote on “Why Infrastructure We Have Now Can’t Survive” began by describing how core infrastructure and business models are soon going to be unfit for purpose.


Digital Finance Masterclass London, 2014

Earlier today I gave a short presentation to the Digital Finance Masterclass in London. I only had ten minutes, followed by 8 sessions of pretty intense ‘Digital Surgeries’ – a great format, but quite tiring.

Before the event, I had been told that the Digital Surgeries were like speed dating – thankfully I got married before speed dating, because I can’t imagine going through that process in a relaxed, sociable setting.

With only ten minutes for the first presentation, to a varied audience across Financial Services, I focussed on the following topics, shown in the attached Slideshare presentation:

  • Putting the User First
  • Development
  • Cloud
  • Mobile first?
  • The future


Silently updating


One of the best features about Google Chrome is how it updates itself to provide new features.

If you look at the user experience of various desktop applications, Google Chrome sits at one end of the scale, and at the other end is Microsoft Windows, which relies on the user to configure whether they want updates at all. In most organisations with more than 100 people, updates are disabled by system administrators. Other applications such as Spotify sit closer to the “Chrome end” because they update automatically, although the user is still prompted during the process.

I’m excluding the stomach-churning “will my data survive this?” iPhone OS upgrades because you can’t compare a complete OS upgrade to an application upgrade.

Every so often, Google Chrome checks to see if you are using the latest version. If you aren’t, it automatically downloads the latest version and installs it. The next time you launch Chrome, you’ll be using the latest version – you won’t have clicked on anything to accept it or install it.
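To make the mechanism concrete, here is a minimal sketch of what a silent update loop might look like. This is not Chrome’s actual updater; the feed URL, file paths and version numbers are invented for illustration, and a real updater would also verify signatures and swap the binary in atomically.

```typescript
// Hypothetical silent update check - the endpoint, paths and versions are
// made up. The idea: compare the installed version against the latest
// published one, download the new build in the background, and stage it so
// the next launch simply runs the new version, with no prompts or clicks.

import { promises as fs } from "fs";

const INSTALLED_VERSION = "1.4.2";                        // version currently on disk
const UPDATE_FEED = "https://updates.example.com/latest"; // hypothetical feed URL

interface UpdateInfo {
  version: string;
  downloadUrl: string;
}

// Tiny semver-style comparison: true if `candidate` is newer than `installed`.
function isNewer(candidate: string, installed: string): boolean {
  const a = candidate.split(".").map(Number);
  const b = installed.split(".").map(Number);
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const diff = (a[i] ?? 0) - (b[i] ?? 0);
    if (diff !== 0) return diff > 0;
  }
  return false;
}

async function checkForUpdate(): Promise<void> {
  const info = (await (await fetch(UPDATE_FEED)).json()) as UpdateInfo;

  if (!isNewer(info.version, INSTALLED_VERSION)) {
    return; // already up to date - nothing for the user to see
  }

  // Download the new build quietly and stage it next to the application.
  const bundle = Buffer.from(await (await fetch(info.downloadUrl)).arrayBuffer());
  await fs.writeFile(`./staged-update-${info.version}.bin`, bundle);

  console.log(`Update ${info.version} staged silently.`);
}

// Chrome-style behaviour: poll every few hours rather than asking the user.
setInterval(checkForUpdate, 6 * 60 * 60 * 1000);
```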

Microsoft have cottoned on to this and the next version of Internet Explorer will silently update the browser by default. You can already install an ‘Update blocker’ to prevent automatic updates if you wish.

This puts Microsoft in an interesting situation because they are still clearly focussed on business users rather than consumers. IT organisations aim to standardise the programs on users’ computers so that it’s easier to support them en masse. Choosing such a high-profile application to start doing automatic updates will mean a steep learning curve for both IT organisations and Microsoft.

This all paves the way for staff in large organisations to move a step further along the consumerisation journey. As users [supposedly] get more tech-savvy, they don’t need huge IT service desks for application support. In ten years’ time we’ll be choosing our own technology – mobile phone and laptop, and perhaps even our own applications.

We’ll keep the documents centralised (in ‘The Cloud’) and access them via Google Docs, Office 365 or any other newcomers.

The version of the application we are using won’t make any difference whatsoever.

Photo courtesy of warrenski on Flickr.

 

Build vs buy and thick clients


On Wednesday I will be speaking at the Sitecore Trendspot event, specifically to discuss the Cadbury Spots V Stripes project.

At Endava we tend to use best-of-breed, off-the-shelf components as opposed to building components in a bespoke manner.

This philosophy is cyclical – the industry prefers bespoke software, then off the shelf, bespoke, off the shelf – and with each cycle the term is obfuscated. It’s the same with mainframes versus PCs.

At my first job I worked on a huge IBM mainframe at Coats Viyella (clothing manufacturer and retailer, now known as ‘Coats’). At my second job we developed a client-server application – where the program that ran on people’s computers was constantly asking for data – a bit like the mainframe, only it looked really nice and graphical because all the graphing was done on the users’ computers, similar to Microsoft Excel.

We then moved the architecture to a ‘thick client’. No, that’s not an offensive term for our customers, it’s an IT term meaning that all the data and the processing ran on users’ computers. Think of it like running Adobe Photoshop – all the processing and data storage happens ‘locally’.

A friend of mine at the time worked for a huge estate agent and they bought into Sun’s ‘thin-client’ computers. Basically these were more like a mainframe – the computer didn’t even have a hard drive – it got all the information and how to format it from a central computer. At one point thin clients were marketed as the next big thing, which people in IT found hilarious because it was back to Mainframes.

Around this time, the Web really took off. The architecture of the Web was similar to mainframes – a huge central computer, sending a screenful of information at a time back to users (inside a browser). We really had gone full circle.

In the last 3-4 years technologies such as Ajax, jQuery, Flash (Flex and AIR included) and Silverlight have all appeared – moving the processing and nicer looking user interfaces back to the users’ computers.
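As a rough illustration of that shift, here is a small sketch of the Ajax pattern those libraries popularised: the server hands over raw data only, and the user’s browser does the formatting and rendering that a mainframe or classic web server would once have done centrally. The endpoint and element id are invented for the example.

```typescript
// Browser-side TypeScript: fetch raw JSON from a hypothetical endpoint and
// build the HTML locally - the processing and presentation happen on the
// user's computer, not on the central server.

interface SalesFigure {
  month: string;
  total: number;
}

async function renderSalesTable(): Promise<void> {
  // The server returns data only; no markup comes back over the wire.
  const response = await fetch("/api/sales-figures");
  const figures: SalesFigure[] = await response.json();

  // All the formatting is done client-side, in the browser.
  const rows = figures
    .map(f => `<tr><td>${f.month}</td><td>${f.total.toLocaleString()}</td></tr>`)
    .join("");

  const container = document.getElementById("sales-table");
  if (container) {
    container.innerHTML = `<table>${rows}</table>`;
  }
}

renderSalesTable().catch(console.error);
```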

So – can you see the trend? The same has happened with software, but for very different reasons.

In general we don’t produce bespoke software at Endava in the Digital Media space, because if we did, we’d end up building a competitive product today and having to invest at the same rate as competitors for the long term – as well as supporting the different product releases and all the difficult (and expensive) stuff that goes along with real software.

Also, with off-the-shelf software it’s possible to replace products and vendors as the industry changes, or as client requirements change. Most of our clients have a Content Management System (actually they all have one of these), a social media platform, a media asset platform, a database, a web analytics package, and the list goes on. To produce bespoke versions of these (which, let’s be honest, we’ve all tried doing at one point in the cycle or another) would have cost a fortune, and the chances are the product would be well behind the curve in features and performance.

So that brings me back to the Sitecore event. We use Sitecore on several of our clients’ websites, so if you are interested in going to the event, please register. You can also follow the event on Twitter via #SitecoreDT10.

 

Change: the enemy of stability, sometimes


I’m a great fan of change at work.

Sometimes I like change simply for the sake of changing something. As a small example, at work I recommend people move desks a couple of times a year, to sit next to different people (for many reasons – spreading knowledge, building deeper relationships with different people, getting a different perspective, and so on).

The one element of change at work that I don’t like is system changes. When I speak to friends outside of work, they are amazed that organisations need such large IT departments, or even that a website needs so many technical resources.

Changing a system always brings a level of risk. Always. No matter how much everyone thinks “nothing can go wrong” – and yes, I hear this from experienced people as much as junior people – it can always come back and bite.

Unfortunately, the only way you can learn to assess the risks of change appropriately is to be burnt (aka “get it wrong”). And after being burnt, it’s important to act almost scared of it happening again.

Several years ago we made a small modification to a website on a Friday afternoon. You can guess what happened next – there was a problem, and we all ended up working late into the weekend. Since then, we have had a blanket rule of no live rollouts after Friday lunchtime.

I spoke to a senior manager at Endava about this recently, and he said that whenever his Managed Services division engages with a new client that has stability issues, the first thing they implement (or improve, if it already exists) is a full Change Request procedure. This immediately forces people to stop firefighting and think about any changes. And it always reaps rapid improvements.

Another example is that retail banks have a code freeze during the last quarter of the year, to prevent anything impacting Xmas sales. On some of our sports websites at work, Xmas can be the busiest period (e.g. football). However we insist on a system-wide freeze well before the Xmas period, and this creates the highest level of stability of the year. Let me repeat – the busiest time of the year is the most stable!

People adapt to change well. Even if it requires some help during the initial change ‘shock’. However systems rarely respond to change well.

Software ages like people


I spent some time in a meeting with some representatives of our test organisation, and a senior test manager from one of our finance clients.

He quoted an analogy between software applications and people:

  • When software is first released, there is a novelty period, where the organisation is happy (relieved) to have launched the application
  • After the novelty, a high number of issues are found in the application, and the organisation may have to change a number of processes and people to support the application
  • At 2 years old, the application may go through the ‘terrible twos’ – it is still a little unstable, and instability grows as the original developers leave the organisation, taking their experience with them
  • At 5 years, the application will be much more stable, with many new updates applied to the original application
  • By 40 years old (yes, there are a number of successful applications of this age, especially in finance), the application will be very mature, causing very little grief, and surprisingly, you can now apply a lot of change (very regular rollouts) to an application of this age.

I found the analogy quite interesting – especially considering that ‘Digital Media’ is some 15 years old. So do we need to be worried about the teenage years?