Blog

“NOT another dashboard!” and “No more alarms either!” – The case for visualisation in utility companies

Visualisation is a catch-all term for displaying data, usually via web or mobile, in a graphical, conversational and intuitive way (see http://www.igi-global.com/dictionary/visualisation/31943).

Nowhere have I seen a more urgent need for visualisation than in utilities. There are tens, or even hundreds of thousands, of sensors in a typical water utility company's network. The need to understand that network better, combined with the ever-closer promise of practical smart metering, means there is going to be an explosion in the number of sensors connected to their systems and, consequently, in the amount of data they'll have to deal with.

Information overload

Control rooms in most large water utilities typically handle well over a million alarms a year, and someone is supposed to look at, and make a decision on, every one of them. If we ever get smart water meters, which promise to allow flow, temperature and pressure measurement at every customer, the problem gets much bigger than it is now. If the water network looks like the branches of a tree, then they'll be measuring down to leaf level. Will all this data help us understand the network better? I really don't think it will until the way network data is presented changes. The benefit of such granular data is clear, as it will enable water companies to manage their networks in a more proactive and efficient manner, but it carries the considerable downside of being "data rich but information poor" and overloading the operations teams.

Better network knowledge

The three main operating costs for water companies are electrical energy, chemicals and labour. Taking the largest cost item, electrical energy, as an example: water is heavy, so water companies consume huge quantities of energy to move this essential liquid to our homes and industry. If average and peak pipe pressures could be reduced to save energy, there would also be fewer leaks – a double benefit. Treatment of sewage also requires energy-intensive processes to clean it to the high standards required before it can be discharged. This is an area where process improvements and optimisation can again reduce energy costs. If the behaviour of the network and treatment plants can be better understood and optimised in more detail, then the companies can reduce this major cost, and maybe bills will even go down. Everyone will be happy, especially customers and regulators. This is just one example of a concrete business case for visualisation.

Knowledge is not systemised

However, there is an easy trap to fall into: just measuring more doesn't make you understand better. In fact it can be quite the opposite, as there can often be contradictory measurements in a complex system like a water network. An operations manager will have an innate understanding of the system and how it is behaving – they balance, optimise and cope with emergencies; the model of the network's behaviour seems to be hard-wired in their brains. But even they cannot do justice to all the information available from the network sensors and optimise the network as a whole for the multitude of parameters required – temperature, pressure, flow, water quality and others.

Single system view

As yet more systems are brought in to monitor an ever-increasing range of parameters, more screens need to be reviewed, often with more alarms (certainly the case with IoT). A common cry from both IT and operations is "Not another dashboard" or "Please, no more alarms". What's needed is an integrated, single graphical view that combines geographical information system (GIS) views, system topography, disaster planning and many others into a single whole. This is not some massive systems integration project that would require huge investment and be unable to show any benefit for years, but an over-the-top visualisation that draws data from current systems and, even more importantly, has the ability to learn rules from real people (e.g. operations and maintenance staff) and for itself (machine learning). Such an implementation would reduce the number of alarms by a large factor. Couple this with predictive analytics, where, for example, weather forecasts could be used to set the network up for maximum resilience to flooding – for instance pumping out all the wet wells (small reservoirs) in the affected area in advance of a major weather event – or to predict when garden sprinklers would be used and for how long.
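To make the rule-learning idea a little more concrete, here is a minimal sketch of how knowledge captured from operators could be applied as suppression rules so that only genuinely actionable alarms reach the screen. The asset names, signals and rules are illustrative assumptions, not taken from any real utility system:

```python
# Minimal sketch: consolidating raw alarms using rules captured from operations staff.
from dataclasses import dataclass

@dataclass
class Alarm:
    asset_id: str
    signal: str      # e.g. "pressure", "flow", "vibration"
    value: float
    severity: str    # "info", "warning", "critical"

# Rules learned from operators: known benign conditions to suppress (hypothetical).
KNOWN_BENIGN = {
    ("PUMP-17", "vibration"): "runs dry occasionally - known issue, no action needed",
}

def triage(alarms):
    """Drop known benign alarms and group the rest by asset for review."""
    actionable = {}
    for a in alarms:
        if (a.asset_id, a.signal) in KNOWN_BENIGN:
            continue  # operator rule: suppress
        actionable.setdefault(a.asset_id, []).append(a)
    return actionable

alarms = [Alarm("PUMP-17", "vibration", 8.2, "warning"),
          Alarm("VALVE-03", "pressure", 9.5, "critical")]
print(triage(alarms))  # only the VALVE-03 alarm survives triage
```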

More systems equals more silos

It's a fact of life that information systems will be replaced and more of them will appear in the operations environment; each one of course adds value, but at the same time adds workload and another silo of information. Yet the value of the whole is much greater than the sum of the parts if they could be seen as a single entity: "the network". Visualisation of data is a good place to start. I have long held the belief that everyone in a water company should have access to a mobile app that would tell them the health of the network, at least at summary level. I can think of a few operations directors who would shake my hand if I could give them an app for their smartphone that informed them, in real time, what was happening in their part of the network, showing outfalls, water quality issues, burst mains, unplanned outages and other key network parameters.

It’s not so hard to start if you think “Agile”

So, what to do? I recommend starting with the users who have to make the most complex operational decisions and discussing, in a workshop environment, how they'd like their dashboard to look. Then build a "wire frame" (non-working prototype), which should be completed in days, not weeks. Constantly iterate the design and build, with the users involved every step of the way. This is known as agile development and is the de facto standard for implementing quality visualisation projects. I'd make sure some machine decision-making capability, however basic, was also included, as rules development is a key skill. An agile approach to IT implementations is novel in the utilities space, but where I have helped implement it, it has created users who embrace these new systems because they feel ownership, want to make them work and know how to get them improved quickly. With some help, most utilities have the internal resources to do this themselves; they just need to be shown how.

My Capstone Journey

It started with a holiday back home, seeing how differently the generations in my family decide what they will watch on TV: from my Granda, who has a handwritten list of channels that he likes to watch, to my little sister with an iPad in one hand and hours of recorded TV shows to catch up on. I came up with my initial idea for a new TV app — an app that would connect to your TV and help you manage all the options, from your favourite channels, to your recordings, to being able to search with your mood as the main criterion.

Needfinding

With the idea in my head, I started out on my needfinding exercises. The best way to find out what a user does and what needs they might have is observation. In addition to spending some more time with my family, I reached out to some friends and colleagues — it was an easy sell: they just watch TV as normal, and I bring the tea and biscuits. For my observations I sat with each person for 30 minutes to an hour, firstly just watching them interact with the technology they use to select the TV shows they want to watch, but also asking them to describe what they were doing — it seemed a bit of an eye opener for them as well as me, with a few "I don't know why I do that" moments from everyone.

At the end of each session I spent another 15 minutes asking them questions about why they use the methods and technology that they do, and what they would like to see work better. I was surprised at points that some of the participants would skip over the inbuilt functionality that their TV service provided to carry out tasks in a more 'manual' way — when I asked about this, for most it was a case that, although the functionality existed, the process either to set it up or to get to the results had too many steps, and especially for those less technologically inclined it all seemed a bit difficult.

The 'blue sky' ideas were sometimes insightful ("…if I've recorded something, why would I want to see the ads…") and at other times a bit fanciful ("…it should read my mind to know what I want to watch…").

Ideation

Following the observation sessions, I had a large list of 'things' that I had either picked up myself or that the participants had commented on as we carried out the sessions. The task now was to break these down into a manageable list of User Needs. As I started reviewing them it became obvious that a lot of the participants had the same frustrations and similar ideas, all just expressed slightly differently. Listing these out gave me a starting list of 17 User Needs:

  1. User needs buttons to be large so that they are easy to see and select, to avoid mistakes when selecting options
  2. User needs to be able to see only the channels that they have access to view to reduce the frequency of selecting a channel that they do not subscribe to
  3. User needs a simple user journey for specifying the channels that they watch regularly to allow this to be done quickly and easily by users who are not tech-savvy
  4. User needs a way to watch a show from the start when they tune in late so that they do not miss the beginning of the show
  5. User needs to be informed of the next episode/showing of a programme they are watching so that they are aware of the next time that they can catch up and if they will not be able to watch the show they will be able to record it for later viewing
  6. User needs to be notified when a number of episodes of the same show have been recorded so they do not get to a situation where there are numerous recordings that they have forgotten about.
  7. User needs a way to be reminded of available recordings when they are ‘channel surfing’ to give them the opportunity to catch up on recorded shows that they have not remembered about
  8. User needs to be able to search for shows based on their current viewing mood to allow them to find something that they want to watch
  9. User needs a way to be informed of new programmes that they would be interested in to allow them to find shows that they were not previously aware of
  10. User needs a way to know if a recording is going to be deleted with some advance notice to give them the opportunity to watch unviewed episodes.

With my ideas in place, I needed to come up with the Point of View that I would use throughout the rest of the project. This is a statement that I would refer back to repeatedly during the project, to remind myself of what I was hoping to achieve and the user journeys that I wanted to accomplish:

People often watch television to relax and take great enjoyment from the shows that they watch all the time. With changes in technology, the way that people watch television is changing; for some, the simple task of finding a show is daunting, with remotes having too many buttons and on-screen menus having too many options. For some, even knowing what channels they have access to is a regular struggle and frustration, with new channels becoming available every day and old favourites being taken off the air. For those with a better handle on new technology and functionality, frustrations still exist with these improvements, from ending up with too many episodes of a show to catch up on, to services deleting old recordings before you have the opportunity to watch them. People want to be able to glance at the TV guides and decide what they want to watch; they want to be able to glance at the remote and easily select an option.

Finally I needed some additional inspiration — what colour palette should I use, what functionality had I seen work well elsewhere, any layouts that I liked.

Inspiration Board

  • Amazon does recommendations really well: as well as basing recommendations on similar items, it also considers what other users look at or buy. Having similar functionality for television watching would be great for people who don't plan their viewing. Being able to see what is related to what you have watched before is a good way to keep them viewing.
  • The Mood Radio app allows users to select the music they will listen to based on the type of mood that they are in. People's feelings and emotions have a huge impact on what they want to do; for example, when tired you might want some relaxing music, and with television, if you are tired it might suggest shorter shows. Likewise, if someone is sad, they might not want to watch something that is very emotional. Being able to use your emotion to help select what you want to view could improve the viewing experience.
  • The Georgie app has been created for those who are visually impaired, and the layout on this is something that I think could really improve TV browsing and viewing for a lot of users. The number of available options and the simplicity of them is appealing. Having a really simple layout and a small colour palette removes confusion and also ensures that a user wouldn't feel overwhelmed.
  • Selection of remotes – For me the inspiration is to come away from these types of remotes; I'd love to see something more interactive and obvious. I never remember which 'input' I need to select to switch from my TV to my Blu-ray player, and the coloured buttons all mean something different depending on the channel or menu that you are in.
  • What's on India – The dashboard shown here is nice and simple – with the grid-based layout it is easy to glance across the options that are displayed. A similar dashboard, showing recordings, your favourite shows and 'shortcut' buttons, would be beneficial to some users: a nice easy view and easy selections.
  • Colour is very important; it can help to make us feel a certain way and influence our judgement. As in the image above, cool colours are relaxing, which is, I suppose, why so many TV guides made use of the blue palette. I always like the use of simple colour palettes, sticking with one colour and using varying shades, but then having some contrasts to help emphasise the items or areas that you want to call to the user's attention.

Prototypes

Next up, it was time to break out the pens — starting with the storyboard.

Storyboards allow you to think about the "Setting, Sequence and Satisfaction" that your app will achieve — where is this going to be used, who is involved and what are they trying to do; what makes someone decide to use your app and what sequence will they follow; what is the end result and what was the motivation for the user? This is a process I enjoy, not just because I love any excuse to break out the markers, but because once again it helps you to really consider how your app will work:

Story Boarding

With the storyboards done, Paper Prototypes were up next. Paper Prototypes are the first stage of the design of the app; creating these on paper, instead of creating a wireframe in a design tool, means that the first prototype can be produced very quickly. It stops me from focusing too much on making every element line up exactly, or worrying about the style or finer layout details. The prototypes allowed for two tasks that tie to the Point of View created earlier in the process. For each prototype I created a couple of screens that would allow me to mimic the process that a user would go through to carry out the tasks. I find Paper Prototyping to be a really useful step in the design — it gives me a chance to think about how to make the layout work, and how to make completing the task as easy as possible for the user.

Paper Prototypes

Heuristics

Once the prototypes were in place, my next task was to get some feedback on the work I'd done so far. After spending some time going through the paper prototypes and making sure that the pages were in order, I asked a friend to walk through the prototypes for me. As the prototypes were paper based, a little bit of imagination was needed: as my friend 'clicked' and 'swiped' around my app, I was switching out pages and pop-ups (even employing sound effects, which I'm not sure improved the experience). During and after the session I made notes of the feedback my friend gave as they 'played' with the app. Once this session was over I spent time reviewing their comments and matched these up against the Nielsen Heuristics. The Heuristics are a list of 10 general principles for interaction design which, when applied to a design, can help improve it and make for a better user journey — you can find more information on these heuristics at: https://www.nngroup.com/articles/ten-usability-heuristics/

In addition to getting the feedback from a friend, my paper prototypes were also reviewed by two peers completing this project. They provided me with additional feedback, again based on the Nielsen Heuristics, including a severity rating, which helped me set some priorities around the items that I would change as the app moved through future iterations.

Going through the full list of feedback I was able to consolidate a lot of it; while some of the feedback had been similar, people had rated it with different severities based on how they expected the app to work. Not only did I have a list of potential changes to make, but I had also gained some more user insight into how others would make use of my app idea.

The last part of this week's task was to create the first wireframe for my app. I started with the Homepage — what the user would see after logging in:

Wireframe 01 – Homepage

A Plan and a Skeleton

With all the feedback received to this point, I was now in a position to really start working on a full prototype for my app. With five more weeks to go I had my timeframe, but I still needed a full plan of how to achieve everything I wanted before the final deadline, while also fitting in the set tasks that had to be completed each week. Bring on the Excel file…

I started my plan with the key milestones (as set by the weekly assignments for the project) and then tried to plug in the changes I wanted to make to my prototypes from all the great feedback I'd received, with estimates included for each task.

The skeleton navigation for the app also needed to be mapped out; not just to give clarity to what I was going to try and achieve, but also to help map out the User Journeys — what screens would a user have to navigate through to complete each of the tasks available on the app? I found that this, which at first can seem like quite a simple task, can easily stretch and take a lot of time. You need to consider the points in the journey where the user might decide to change the task they are carrying out, or where they want more information. It's at this point that you have to think of the less obvious paths, the ones that you never expect a user to take (which in my experience end up being the main path that all users use).

SiteMap

After the navigation I started to work more on the prototypes. I am already quite familiar with Visio, so I decided that I would create my wireframes there and then use InVision to create something interactive for the user — they can click around the application using the created hot-spots. (This was the big 'Wizard of Oz' technique I employed for the project: InVision allows you to upload images and link them together using hot-spots — these hot-spots can mimic user actions, such as clicking and swiping, and give the user the feeling of the app actually working — all without having to write the code to make it really work.)

I enjoy this part of the process, as you can really start to see your idea coming together, but it is a time-consuming part, especially if, like me, you need everything to be properly aligned and to sit correctly together. By the end of the week I had created several of my key screens.


Ready for Testing

For the next week I concentrated on creating the wireframes for the rest of the pages of the app, as well as continuing to make some of the improvements that had come out of the Heuristic Evaluations. As the app started to take shape, I often found myself going back to previous pages to make little changes that tied in better with the design of the other pages I was working on. In addition to this I started to plug all the wireframes into InVision. Using the functionality of InVision my app started to feel more realistic, with the ability to create overlays, specify how the pages changed, and add swiping and scrolling. Even with some prior experience of InVision, it was a time-consuming task — changes to the wireframes meant that the hot-spots needed to be moved to make sure that they continued to sit over the correct area of the screen, and that the overlay items remained in the correct place.

During this week I also had to start thinking about the first round of User Testing — I had to define some tasks that I would be asking users to complete.

Test Your Prototype

This week my app was getting handed over to strangers for some testing. Having spent the last week tidying up the wireframes and the InVision prototype, I was starting to feel the nerves.

To make sure that everything remained fair I created a 'script' that I would use during the tests. This script contained the overview of what I was hoping to achieve, and the tasks that I wanted each user to carry out. It also gave me some boundaries around what I would help the user with and what hints I could give when they were stuck. Each participant would be asked to complete a consent form, giving their agreement for details of their session to be shared and allowing me to take their feedback to make improvements.

With everything prepared, I set out to find some users — there was a local food fair on, and I thought that would be a good place to start. Following my earlier success of tea for assistance, I knew that I could reward any participants for their time with a cup of tea and a tasty local treat. After approaching the first table of people I had three volunteers agreeing to participate in the test.

InVision allows you to text a code to someone's phone, so that the prototype can be tried out in the expected environment. Only two of my three volunteers had their phones with them, so I sent them both the prototype link. I sat with each one individually, so as not to allow one to gain extra knowledge by seeing the tasks completed by the first participant. During this time, the participant was asked to complete the three tasks I had set, while describing to me what they were doing and thinking as they navigated through the app.

User Testing

At the end of the session I had some new feedback, the participants had some more tea and cake, and I returned home ready to break out the pens again, to come up with some additional changes to the app.

The next action item for me was to start to plan the A/B tests — with the feedback from the live user testing I had a couple of scenarios to try out.

Results

The task for this week was to run the A/B testing; as per the terms of the assignment I used the UserTesting website to achieve this. It was not a site that I had used in the past, so I had a couple of teething problems, but finally I had a working prototype to test out these final ideas. Within an hour I had my results — unfortunately they weren't as clear as I would have liked. None of the users who had picked up my prototypes were familiar with InVision and how it worked, and because of this, the feedback wasn't great. For each user, a lot of the time was spent working out what the 'blue flashes' (i.e. the hot-spots) meant, and how to get around.

User Testing2

I hadn’t foreseen this being an issue — but lesson learnt, next time make sure to provide a bit more detail and explanation on what the prototype is, where it is hosted and how to navigate around it.

Thankfully, the testers pressed on and started to review the prototype, but of the original four, only two managed to complete the tasks that were initially set. Despite not being able to fully analyse the A/B testing, each of the users did make the effort to give feedback on the pages that they got to, and from what they could see they gave feedback on how they expected the application to work. And to my delight a lot of the feedback was positive.

The feedback meant a few more tweaks and changes to the prototypes. With these complete, the 'TV App' is now ready to move onto the next stage of development. You may see it on the App Stores in due time!

I hope you have enjoyed my capstone journey – if you did, please feel free to comment on it for further discussion, and share on your social channel of choice.

How many hats do you wear?

Projects using a traditional waterfall methodology follow a series of separate, sequential steps: requirements, design, implementation, verification and maintenance.  This tends to lead to team members also slotting into defined roles: business analyst, architect, developer, tester, project manager.  When teams first move to Agile it is common that people continue to follow their traditional roles and this allows time for everyone to find their Agile feet.

So how should the roles in an Agile team be allocated? The Scrum guide says "Scrum Teams are self-organizing and cross-functional" and this is sometimes interpreted as meaning that each team member should be equally capable of filling every role in the team. Great in theory, but in the real world that can often lead to individuals being "jack of all trades and master of none". Additionally, suggesting that everyone can do every task as well as a specialist denigrates the skills of that specialist. A more realistic approach is for team members to be able to fill more than one role – a developer might also undertake some UI design, a tester could write unit tests to support the developers.

On larger projects, the cross-functional aspect can be overlooked as there is enough work to occupy each role full time. However, over the last year, our teams have been working on a number of smaller projects, which in turn has led to smaller teams. This forces the team to look at how best they can organise themselves to ensure successful delivery. Part of our approach has been to go back to the cross-functional team by getting each person to fill multiple roles.

Another option would have been to retain the larger team of specialists but only involve individuals when required. This is a nightmare for staff scheduling, and keeping the full team up to date whilst they work on other projects is extremely time consuming.

One benefit of this approach is a reduction in cost. A large part of this comes from reducing the lines of communication – a team of eight has 28 possible one-to-one communication channels versus just six for a team of four, nearly five times as many, so a reduction in team size can make a real difference here. Other savings come from reducing the overhead of getting all team members up and running at the start of the project.
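For anyone who wants to check those figures, the number of one-to-one channels in a team of n people is n(n−1)/2 – a quick illustrative calculation:

```python
# Number of one-to-one communication channels in a team of n people: n*(n-1)/2.
def channels(n: int) -> int:
    return n * (n - 1) // 2

print(channels(8), channels(4))  # 28 vs 6 - roughly a five-fold difference
```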

With Agile, the role of the Project Manager is often seen as unnecessary. In Waterfall projects, they are responsible for managing cost, scope and quality, reporting to stakeholders and allocating tasks to the team. With Agile, the Product Owner is responsible for scope and cost; the team are collectively responsible for quality; end-of-sprint demonstrations are a key means of communicating progress; and the team self-organise the work. On larger projects where there are multiple Agile teams, or where there are other factors such as the needs of heavily regulated businesses (e.g. the finance or pharmaceutical industries), the Project Manager's skills are still needed. However, even in these industries, for smaller projects a full-time Project Manager is not required (nor commercially viable), and whilst a good PM can juggle multiple projects, another option is for the PM to wear some other hats within the project.

Purists may dislike the thought of the Project Manager taking a more active role, but for a small Agile project, it allows us to reduce the team size.  In recent projects, in addition to handling the normal duties of the Project Manager of managing risk, issues, change, scope, resources, and communications, I have also been responsible for functional testing.  Being closer to the coalface gives me an excellent handle on progress, issues and quality without the need to constantly seek updates.  On larger projects, outside daily stand-ups (which still take place), I would often need to have catch ups with individual team members to get greater understanding of issues we are facing.  Being an active part of the team (and not just an overhead as the developers would see it!) allows me to get real insight into progress without the need for more meetings, calls, emails.

How many roles one individual is capable of filling will vary, but knowing team members' secondary skills and using them effectively in projects to manage the team size can significantly reduce project costs without sacrificing the quality of the output.

[IIoT Series] 3. How IoT can help UK water companies with the adoption of private pumping stations

New assets & lots of them

I've been working with some utility companies, along with experts in and outside the industry, recently as there's a new challenge on the horizon: the impending adoption of private pumping stations on 16th October this year.

But no visibility

Unlike a water company's own sewage pumping stations, most of these stations have no SCADA (a system which provides alarms and allows remote control), and therefore no warning at all of failure or impending failure. This is unlikely to be a satisfactory arrangement and I felt there must be an answer, so I asked our Internet of Things (IoT) team to see if they could provide a solution.

Can we fix it?

The mission statement was short and sweet: "Provide a simple to install, affordable and non-intrusive solution which will enable pump and level monitoring and not design out the ability to provide other monitoring services in the future." Fixed-line communications are not a given, so transmitting data via mobile networks to the cloud is a requirement, along with the capability of full integration into the utility's internal dashboards and network operating centres. It should require no IT infrastructure at the water utility, other than admitting the data through their firewall. Because the data is in the cloud, the service should enable data enrichment by external cloud services like weather predictions and predictive analytics to pre-empt pumping station failure, flooding, outfalls and other operational and environmental impacts.
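As a rough illustration of what "no infrastructure, straight to the cloud" can look like in practice, here is a minimal sketch of a retrofit sensor pushing a reading over a mobile connection to a cloud endpoint. The endpoint URL, station identifier and payload fields are illustrative assumptions, not a real utility's interface:

```python
# Minimal sketch of a pumping-station sensor posting telemetry to the cloud.
import json
import time
import urllib.request

reading = {
    "station_id": "PS-0042",        # hypothetical asset identifier
    "timestamp": int(time.time()),
    "wet_well_level_m": 1.8,        # level sensor reading
    "pump_running": True,           # pump status
}

req = urllib.request.Request(
    "https://iot.example.com/telemetry",   # placeholder cloud endpoint
    data=json.dumps(reading).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
urllib.request.urlopen(req, timeout=10)    # send over the mobile data connection
```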

Yes, we can!

Some water utilities I am talking to are looking at a three-fold increase in the number of pumping stations being transferred from the private sector, so installing SCADA on anything but a very small proportion of them is not going to be cost effective. So, in order to get a quick take-up, it became clear that a "Sensor as a Service" concept would work best, where everything from sensor supply to cloud and managed service is operated on a "Pay As You Go" principle, allowing utilities to pay for just what they need.

Our IoT team is running proof of concepts now, which provide the promise of:

  1. Quick and easy to fit with no infrastructure (telephone lines, broadband etc) required
  2. No IT for the water companies to run
  3. Low acquisition, installation and running costs
  4. Instant visibility and control of private pumping stations
  5. Can also be used for existing pumping stations to augment SCADA
  6. More effective use of maintenance & operations resources
  7. Predictive maintenance services in the cloud can be used to better respond to assets, leading to fewer pump trips and outfall events

Killer ROI

When integrated with an enterprise asset management system, the return on investment seems to come in under four months, with the main payback being fewer emergency visits due to more targeted scheduled maintenance. The cost of Environment Agency fines is not included here, but if it were, the payback would be even shorter.

IoT is clearly a new area for water companies, but initial results indicate that the benefits are there and the barriers to entry are low. I hope and expect to be busy in many other water companies…


Need something more visual? Download our graphic here:


[IIoT Series] 2. The Internet of Beer

As promised in our previous post, the first industry insights of our IIoT series that we want to share with you concern the Hospitality sector, specifically looking at the pub-industry.

Our research, consisting of primary and secondary market research, focused on service providers specialising in the supply, installation and maintenance of beer and soft drinks dispense systems for licensed and non-licensed premises across the UK. We wanted to understand the pain points and challenges of dispense system providers, their end-clients (pubs), and breweries, and uncover any other opportunities that could help them build a foundation for their IoT strategy. (Disclaimer: Some beers were harmed in the process)

State & key challenges

Reporting and automation

A lot of pubs do not have fault prediction systems for beer pumps in place, which means that faults on the dispense system cannot be anticipated, nor automatically reported. The most common 'smart things' found to be in place for the purpose of fault prevention were coolant systems, which monitor minimum and maximum water bath temperatures via sensors and adjust the cooling for the beer pipes and other beverages on tap accordingly.
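As an illustrative sketch of the kind of water-bath check such coolant systems perform (the thresholds and the sensor-read function are assumptions for the example, not a real dispense-system specification):

```python
# Simplified water-bath temperature check for a dispense coolant system.
MIN_TEMP_C = 2.0   # below this the lines risk freezing
MAX_TEMP_C = 6.0   # above this the beer pours warm and foamy

def check_water_bath(read_temperature) -> str:
    """Return an action based on the latest water-bath reading."""
    temp = read_temperature()
    if temp < MIN_TEMP_C:
        return "reduce cooling"
    if temp > MAX_TEMP_C:
        return "increase cooling and flag for the field engineer"
    return "ok"

# Example with a stubbed sensor reading of 6.8 degrees C:
print(check_water_bath(lambda: 6.8))
```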


Another issue found was that in the majority of pubs there was no way of knowing when a keg is running low (other than the point at which only foam, or nothing at all, comes out), which means staff then have to get the keg replaced – a lengthy process, during which some customers may not get served their favourite beer in time. Some pubs rely on beer sales being passed through the EPOS tills, but this is often not reliable due to bar staff pulling pints for mates or acquaintances.

When new kegs have to be ordered, the ordering process in most pubs was found to be a pretty manual one – there was no automation in place and no trend analysis that could be acted, and ordered, upon automatically.

Data capture and distribution

Larger pub groups use flow monitoring systems on their premises, from which visiting field engineers extract data in order to check the conditions the coolant system has been operating in. A lot of smaller pub groups and independent pubs were found not to have any system in place, though.

The beer distribution for pubs starts at the respective breweries, moves to distributors and resellers, and finally ends up in the pubs. The issue with that is that beer brands (e.g. Heineken, Carlsberg, Coors) have little, if any, visibility of the exact data, such as which pubs their beer is being sold in, how much of it, or how many pints from each beer tap/font. That also leads to a major asset management problem – certain beer brands pay for a contract to get prime positions at the bar for their taps/fonts, and currently they do not have any way of verifying this.

Contract breaches

Contracted brewers are unable to ensure that distributors and resellers do not switch out kegs in pubs for those from non-contracted breweries. It is suspected this adds significant costs for the dispense system providers and their contracted brewery businesses – not only from lost sales for brewers, but also from the administration and interest costs incurred in trying to recover this lost contracted revenue.

IIoT opportunities to address these pain points

Having had a closer look at the user journeys of some of the dispense system providers, pub managers and breweries, we identified the following opportunities to push the pub industry to the next level in IIoT:

  1. Implementation of sensors allowing intermittent measurement of coolant performance, provision of statistical analysis by temperature and failure anticipation, and a front-end system for field engineers to assist with system audits
  2. Sensors to be inserted into each font, with flow monitor measuring how much is being poured/ how many pints/ half pints and of what type of beer
  3. Remote diagnostics and automation of field engineer scheduling to client pub if system requires repair or maintenance
  4. Implementation of sensors on beer kegs to anticipate kegs running low, and a dashboard for staff to get notified well in advance as well as monitor trends (e.g. seasonal consumption, weekday trends, etc) – see the sketch after this list
  5. Automation of keg ordering based on actual consumption and trends analysis (M2M)
  6. Dashboard for beer brands to understand usage, consumption trends and anticipated revenue, system maintenance and defects, and overall impact
  7. Dashboard apps can provide virtually real time data to bar management, owners of small chains of pubs, marketing teams, area managers etc
  8. Inventory and delivery management via barcodes, RFID, or sensors (depending on asset), monitored and tracked via a dashboard, enabling staff to manage pub and breweries more efficiently, and to prevent contract breaches
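To make opportunity 4 a little more concrete, here is a hedged sketch of estimating remaining keg volume from a flow sensor on the font and flagging a reorder when it runs low. The keg size, pour volume and alert threshold are illustrative assumptions:

```python
# Simplified keg-level estimation from flow-sensor pours.
KEG_LITRES = 50.0        # typical UK keg size (assumption)
PINT_LITRES = 0.568
LOW_THRESHOLD = 0.15     # alert when roughly 15% remains

class KegMonitor:
    def __init__(self, capacity_litres: float = KEG_LITRES):
        self.capacity = capacity_litres
        self.poured = 0.0

    def record_pour(self, litres: float) -> None:
        """Called with each flow-sensor measurement of beer dispensed."""
        self.poured += litres

    @property
    def remaining_fraction(self) -> float:
        return max(0.0, 1.0 - self.poured / self.capacity)

    def needs_reorder(self) -> bool:
        return self.remaining_fraction <= LOW_THRESHOLD

monitor = KegMonitor()
for _ in range(75):                # 75 pints pulled so far
    monitor.record_pour(PINT_LITRES)
print(monitor.remaining_fraction, monitor.needs_reorder())  # ~0.15 True
```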

The first pre-defined step onto our IoT maturity curve starts with a Proof of Concept. Using our IoT prototyping suite, consisting of sensors, eMeters, apps, an enterprise IoT cloud platform and much more, we can explore and demonstrate the power IoT could have for your organisation within hours. Get in touch if you would like a demonstration, or to speak with a technology or strategy consultant.

Next time coming up: IIoT and water companies…

IoT – Power to the people

I'm David Hartwell, an IoT specialist at Tech Data. I hear a lot of references in the industry to "Edge to Enterprise" as shorthand for "all-encompassing". Sadly, I have seen this really mean "The hard work's done – sensors, communications and cloud are working well, and here's your data"; then the hard-pressed analysts in the organisation are tasked to do their clever stuff to make some sense of it. IoT has the potential to offer genuine insight, but rarely by simply presenting more sensor data. This can cause more harm than good, as it is very hard to know whether a particular sensor is showing an abnormal condition or not: you have to know the context. So, a particular pump could show high vibration readings regularly, but after a quick chat with the operations director, he might say: "Yes, that pump often runs dry and cavitates, the design is not ideal, so I'm not surprised that one vibrates from time to time. It's OK, we know about it." This is context. Of two pumps of the exact same model, one will vibrate and the other won't, neither causing the operators any concern.

Because of such complexities, the tendency is to present data to operations and maintenance staff without too much interpretation and let them apply their deep understanding of the plant and equipment along with their expertise. Unfortunately, this takes time and makes it difficult for senior management to act now. Staff have to interpret historical data to understand system behaviour, and use that to choose the right course of action. A typical scenario might be: "It's OK, don't send anyone out to that pump, it's most likely running dry". Operations folks make decisions like this all the time; that's often the job – to make decisions based on a few basic pieces of data. It's not surprising that it can go wrong: maybe that pump was not running dry but was really pumping liquid at full power, and the bearings failed, so now it has broken down, causing major process problems. Such is the life of operations and maintenance managers. IoT can easily be relegated to simple process control and provision of historical data. It's true that such an IoT implementation might inform future behaviour, but often it is only really used to show failure or abnormal conditions that need remedial action now; in other words, there's no distillation of the data to help with predictive analysis.
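Here is a small sketch of the kind of "distillation" that paragraph argues is missing: learning a per-asset baseline from an asset's own history and only surfacing the readings that fall outside it. The readings and threshold are illustrative assumptions:

```python
# Per-asset baseline from historical readings; flag only genuine outliers.
from statistics import mean, stdev

def unusual_readings(history, latest, k: float = 3.0):
    """Return latest readings more than k standard deviations above the
    asset's own historical baseline."""
    baseline, spread = mean(history), stdev(history)
    return [x for x in latest if x > baseline + k * spread]

pump_17_history = [4.1, 4.3, 4.0, 4.6, 4.2, 4.4]      # mm/s vibration on a known-noisy pump
print(unusual_readings(pump_17_history, [4.5, 9.8]))  # only 9.8 stands out
```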

What if we could sense vibration, power, flow, temperature and anything else that might be relevant – wouldn't that insight help us prevent another breakdown? "Well, yes," they'd say, "but we'd be overloaded with data, we'd never see the wood for the trees." It doesn't have to be this way: I hope I've made the case for analytics and machine learning in an IoT implementation, but even that is not enough in my opinion. Many IoT implementations start with putting sensors on the equipment we want to know more about, and that sounds like a reasonable strategy, but I contend it really should be the other way round. We should start at the screen – what the customer needs to know to act now – and then finish up at the sensor. So, rather than "Edge to Enterprise" it should be "Screen to Sensor".

This is not a semantic distinction – it’s putting the user at the heart of the process. Like any new technology we all tend to get caught up in making it work, rather than making it work for us. That’s why when we start an IoT implementation – we start with understanding the people and what they want on their screens.


 

[IIoT Series] 1. The Internet of Things for industrial environments

What exactly is the IIoT, and why should we distinguish it from the 'classic' IoT?

In recent years the focus of IoT has been very much on the consumer end: the connected home, the connected car and the connected self. Whilst those areas continue to evolve, and the ever-growing data derived from connected devices is being used in increasingly relevant and predictive ways, the next wave of IoT has begun to roll in: the IIoT (Industrial Internet of Things) – in essence, IoT with its focus shifted to industrial environments, requiring a different treatment and approach due to different conditions.

Similarly to the classic IoT concept, the IIoT refers to smart machines with sensors that share their data with other machines through the cloud, and autonomously act upon that data with the help of programmed business logic and, more importantly, predictive logic that continuously evolves over time. This big data can help businesses pick up on inefficiencies and problems early in the supply chain, raise the bar on quality control, support sustainable practices and energy efficiency, and overall back business intelligence efforts, with the ability to operate with little or no human intervention.

The area where IoT and IIoT arguably differ, and why we think it's important to define the IIoT separately, is that the IIoT deals with connectivity in industrial networks, infrastructure and building systems – often extreme environments that have special needs, which we will elaborate on in our industry examples in the next blog posts of this series.

What does the future hold for IIoT?

Whilst the number of sensors and devices for the IIoT has already reached tens of billions, their full potential has not nearly been reached yet. What is yet to come is to apply them effectively within organisations, through entire supply chains and across multiple industries, and to leverage the resulting big data autonomously to its full potential.

Today the IIoT is improving productivity, reducing operating costs, and enhancing safety in industrial environments, but the future and long-term potential involves the disruption of markets to generate new and re-invented revenue streams.

Accenture estimates that the IIoT could add US$14.2 trillion to the global economy by 2030, which means whole sectors will be reinvented as businesses shift from selling products to delivering measurable, guaranteed outcomes based on reliable prediction logic.

The Tech Data IIoT Series

Over the past months we have seen an increasing interest from our clients in how they can either enter the IIoT space or improve the intelligence of their existing IIoT infrastructure. We found that there is still a lot of misconception in this space, and a lot of questions around how to approach a digital strategy with a focus on IIoT, so we thought we would share some of our clients' challenges within a variety of industries with you, and how we helped them approach the IIoT.

In a series of exciting posts over the next two months we want to bring the IIoT closer to your (or your client's) business by sharing relevant use cases across the Hospitality, Retail, Energy, Manufacturing and Health sectors, so you can jump on the next wave of IoT for your business.

Next week we will kick things off with an industry example from the Hospitality sector, specifically looking at the pub industry, because let's be honest – if we go to a pub and our favourite ale has just run out of stock, neither customer nor business is happy! Stay tuned.

Your Digital Business Strategy should be Customer Experience first, Technology second

The newest trend in IT isn't a single new technology entering the market, but rather several different technology trends maturing and converging at the same time. Mobile, cloud, big data and analytics, and converged infrastructures are triggering a technological shift – causing many businesses to rethink traditional business strategies in order to adapt to this digitalisation.

The "old way" of handling business strategies is to plan ahead for at least three years – or more. However, because we're dealing with a high level of technological velocity as the "new norm", within even two years technology may change enough to render your strategy outdated.

How can you capture these advancements in technology to give yourself a competitive advantage in the marketplace, stay relevant to your customers and have the flexibility and agility to respond to the next new technology trends?

The answer is with an inclusive digital business strategy that supports business goals through the benefit of digital tools, while making the lives of your customers and employees easier along the way.

Customer experience is the foundation

Developing a digital strategy and working toward digital transformation is not only about the utilisation of technology. It’s about using technology harmoniously across a company to optimise leadership, operations and the customer experience. Customer experience is the overall involvement your users have with your business, covering all touch points across the whole of their user journey.

A seamless and cohesive customer experience means that no matter what tools or channels are used, the interaction leaves behind a positive feeling. The better the customer experience, the more likely your users are to become loyal brand advocates. Digital transformation can allow users to see, feel and interact with your company through the use of digital products across desktop, mobile, tablet, TV or other connected devices, but digital transformation can seem daunting or unattainable.

Digital Business Pyramid

Rethinking the ideal business strategy, from the bottom up

When many companies launch initiatives to improve customer experience or utilize digital technologies, they're usually confined to certain departments, and tend to start with the question of which technology should be implemented.

The ideal scenario for a digital business strategy is an enterprise-wide plan that starts with asking about user needs and pain points, then working backwards to identify which technology can be used to best address those needs.

For many companies, this may require a new type of thinking: Put the user at the core of the overall strategy, followed by an understanding of business and technology processes and dependencies, ultimately driving you to the execution of the strategy.

How to plan a digital business strategy that puts customers first, technology second

A customer-centric digital business strategy is about putting customers first and technology second. Here’s how to do that:

Customer first

Goal:
Develop a deep understanding of the people you want to interact and engage with. This could also include your employees. How can their user experience be improved?

Actions:

  • Conduct primary and secondary user research
  • Drill down into user characteristics: values, behaviors, motivations and drivers
  • Analyze the user journey with pain points and expectations
  • Confirm which features have real relevance to user requirements and which don't

Technology second

Goal:
Now that you understand the needs of the audience, it’s time to use technology to improve their experience. Create and understand processes to give shape to your digital business strategy.

Actions:

  • Review existing architecture of platforms and technologies so that you not only understand your existing capabilities, but also the bigger picture on how to expand on them
  • Prioritize what technology should be implemented first based on business objectives, development time and importance of target segments
  • Address other areas of the business that may have been put on the backburner during the digital initiative, such as logistics, training or change management
  • It’s important to remember that a digital strategy is part of a wider, singular strategy that encompasses an entire business, so all moving parts must be considered when devising your digital business strategy. This will reduce friction on deployment and execution, while ensuring all aspects of the business are fully informed.

Making it happen

If the first two steps are undertaken successfully, then the execution of your digital business strategy should be simple. The outcome could be a physical product that is part of your digital business strategy, or it could be a complete digital transformation, improving agility across your entire enterprise.


If you need assistance in developing your customer-centric digital business strategy or moving toward digital transformation, Tech Data has decades of digital technology experience across multiple industries.

Want more digital insight? The Digital Agenda delivers a quarterly report on critical mobile and digital trends impacting enterprises, verticals and markets worldwide. Learn more.

Get in touch:

 

Are consumer and enterprise apps converging?

20 year olds build consumer apps and 40 year olds build enterprise apps. An over-simplification, or is there some truth in my non-scientific observation? The consumer space is full of entrepreneurs, large investment, lots of success (and lots of failure) with high speed to market. The enterprise app space is full of return-on-investment studies, business cases and slow speed to market, but fewer failures – at least that's the traditional view.

But, taking the vibe at Apps World at ExCeL, London https://www.apps-world.net it was clear that consumer apps weren't being built using the traditional "waterfall" process; everything is about speed to market, so agile or scrum methods are the order of the day. Change is embraced with these methods, and constant re-prioritisation and "fail fast, learn quickly" are normal practice. I have noticed a recent trend where naturally more risk-averse enterprise customers are accepting that agile gives better results, largely due to the closer involvement of the business throughout the app lifecycle. I've seen an increase in enterprise apps built with "gamification" – in other words, taking some of the consumer app gaming style to keep enterprise users engaged through the app workflow.

Businesses seem to be learning the lesson that enterprise apps, done well, are a significant competitive advantage and that failing fast and learning early is an acceptable way to harness innovation in their businesses.

Dark data set to rise exponentially

Dark data is a term for the data you have access to, but have not acted on. Having seen an internal blog by Sam Oliver, Solution Program Manager at Tech Data, it's clear that IoT sensors bring a vast amount of low-level data to an organisation: usually most of it is either stored or discarded ("dark data"), with maybe just a trigger point (e.g. "AC system too hot") being the only data point acted on.

I'll use an example most of us know about: the smart electricity meter. All most of them do for us is negate the need for a meter reader to come into our house to read the meter once a year. This means the utility company does not need meter readers any more and gets regular updates on your consumption for billing purposes – a mild win-win.

But SO much more could be done with this data – by looking at the power consumption signatures of your fridge, freezer, cooker, kettle, etc., you can tell which appliance is consuming how much electricity and when. If I had an app that could tell me that my fridge is costing £100/year more than it should be, maybe because it was faulty or just plain old-tech, I'd like to know that; I might even decide to buy a new, more energy-efficient fridge. But I don't know: the data is in that meter or the utility data centre, but it is "dark". This is where big data can help, taking all that data and creating new insights, maybe in ways that were never intended. Buying that more efficient fridge is a win for my pocket (over a year or so), for the environment and for the creaking grid system.
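As a hedged sketch of the appliance-signature idea (a much simplified form of what is usually called non-intrusive load monitoring), here is how a step change in the meter reading might be matched against known appliance signatures. The wattages and tolerance are illustrative assumptions, not calibrated values:

```python
# Match a step change in metered demand to the closest known appliance signature.
SIGNATURES_W = {              # typical step change when the appliance switches on (assumed)
    "kettle": 2800,
    "fridge compressor": 120,
    "oven element": 2000,
}

def label_step(delta_watts: float, tolerance: float = 0.15) -> str:
    """Return the appliance whose signature is within the tolerance band."""
    for name, watts in SIGNATURES_W.items():
        if abs(delta_watts - watts) <= tolerance * watts:
            return name
    return "unknown"

print(label_step(2750))   # -> "kettle"
print(label_step(180))    # -> "unknown" (could indicate a struggling fridge worth a look)
```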

It seems that IoT and big data are technologies that are growing up together and I think we, as consumers, need to see some of these insights coming our way, not just being used to predict or even alter our buying behaviour, but to give us non-biased insights that allow us to make decisions for ourselves.

Organisations that allow customers access to the insights they hold about us (not just the raw data) will gain trust. These days, trust in businesses and what they do with the data they hold on us is in much shorter supply than it used to be.

Photo from http://www.starwars.com