Software invisibly permeates our everyday lives: operating devices in the physical world (traffic lights and cars), executing our business transactions and powering the vast World Wide Web. We have come to rely on such software to work correctly and efficiently. The generally accepted narrative is that any software errors that do occur can be traced back to a human operator's mistakes. Software engineers know that this is merely a comforting illusion. Software sometimes has bugs, which can lead to erratic behaviour and intermittent errors. The software, hardware and communication infrastructure can all introduce errors, which are often challenging to isolate and correct. Anomalies that manifest are certainly not always due to an operator's actions. When the general public and the courts believe the opposite, namely that errors are usually attributable to some human operator, it is entirely possible for a hapless, innocent individual to be blamed for anomalies and discrepancies whose actual source is a software malfunction.

This is what occurred in the Post Office Horizon IT case, where unquestioning belief in the veracity of software-generated evidence led to a decade of wrongful convictions. We will use this case as a vehicle to demonstrate how biases can influence investigations, and to motivate the development of a framework to guide objective digital forensics investigations. This framework, if used, could go some way towards neutralising biases and preventing similar miscarriages of justice in the future.