Robotic Process Automation (RPA) uses software (the robot) to ‘learn’ a manual or semi-manual process, then mimic its steps and replace manual intervention altogether. Such processes are typically rule-based with many repetitive steps, like re-keying insurance contracts.
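
To make the idea concrete, here is a minimal sketch (plain Python, not any particular RPA product or its API) of what such a robot does: replay the same fixed, rule-based steps a person would follow when re-keying a record from a source system into a target system. The systems, field names and formatting rules below are hypothetical stand-ins.

    # A hypothetical illustration of an RPA 'robot': replay the fixed, rule-based
    # steps of a manual re-keying process. Not any real RPA tool's API.
    from dataclasses import dataclass, field


    @dataclass
    class TargetSystem:
        """Stand-in for the unchanged target application the robot types into."""
        contracts: list = field(default_factory=list)

        def open_new_contract_form(self) -> dict:
            return {}                      # a blank 'form', as a human would see it

        def type_field(self, form: dict, name: str, value: str) -> None:
            form[name] = value             # mimics keying one field at a time

        def submit(self, form: dict) -> None:
            self.contracts.append(form)    # mimics clicking 'Save'


    def rekey_contract(record: dict, target: TargetSystem) -> None:
        """Replay the fixed steps of the manual process for one source record."""
        form = target.open_new_contract_form()
        # The robot copies each field exactly as the manual procedure dictates;
        # it encodes *what* the steps are, but knows nothing about *why*.
        target.type_field(form, "policy_number", record["policy_number"])
        target.type_field(form, "insured_name", record["insured_name"].strip().title())
        target.type_field(form, "premium", f"{float(record['premium']):.2f}")
        target.submit(form)


    if __name__ == "__main__":
        source_records = [  # stand-in for the unchanged source system
            {"policy_number": "P-1001", "insured_name": " jane doe ", "premium": "250"},
            {"policy_number": "P-1002", "insured_name": "JOHN SMITH", "premium": "99.5"},
        ]
        target = TargetSystem()
        for record in source_records:
            rekey_contract(record, target)
        print(target.contracts)

The point of the sketch is that nothing about the source or target has changed; the robot simply repeats the human's keystrokes, faster and without typos.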

The key part is that the underlying process and environment don't need to change. The source is the same, the target is the same, the process steps are the same – it's just a robot carrying them out instead of a human.

This is attractive: the costs and risks usually associated with IT change are much smaller, human error can be eliminated, there is no need for the support facilities that those pesky humans need, and robots don't need to sleep.

But has the problem – the need for such a process rather than true integration – been solved?

If the source and target never change, and the data isn’t needed elsewhere, then yes, to all intents and purposes the systems are now integrated and probably on the cheap too.

But in the real world things change – and this is where the risk of a blind rush to RPA arises. 

  • RPA increases the dependency between the systems. 
  • If the process itself is flawed, all RPA does is formalise the flaws. 
  • When things change it's useful to ask why things are the way they are now – a human can tell you (maybe not much, but something); RPA knows nothing about ‘why’. 

The good news is that RPA can deliver savings quickly in the short term – giving the breathing space needed to sort out the wider business problem. The challenge is resisting the lure of keeping the RPA sticking plaster in place forever.

So a truly successful RPA implementation needs to show how it will ultimately be replaced, or those cheap and cheerful robots will be back to haunt us in the future.