The Blockpass Identity Lab recently achieved great success at the Diffusion 2019 Blockchain Hackathon. Over two days in Berlin, hosted by Outlier Ventures, the hackathon brought together over 500 experts in cryptography, machine learning, IoT and many other areas to investigate the possibilities offered by blockchain and related technologies.
Representing the Blockpass Identity Lab, Pavlos Papadopoulos, Adam James Hall and Will Abramson formed a three-person team and drew on their research into identity, blockchain and privacy-preserving machine learning to take home not one, but two prizes - ‘Self-Sovereign Identity Best Social Impact’ and ‘Machine Learning in the Decentralised World Track Winner’ - for their proof of concept around certifying participants in a federated learning scenario. It is an incredible achievement, not only for showcasing how adept the team at the Blockpass Identity Lab is, but also for showing how important Self-Sovereign Identity is and how it can change technology for the better.
The solution combined Self-Sovereign Identity and machine learning using Hyperledger Aries and DIDComm (decentralised identifier communication), the secure messaging protocol recently demonstrated at a conference by the British Columbia Government. The team designed an ecosystem, focused on a hospital-and-researcher use-case, in which only trusted entities are allowed to contribute to the training of the machine learning model. Over secure DIDComm channels, the relevant credentials can be requested from each participant, ensuring that those using the channel are not bad actors attempting to negatively impact the system.
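To illustrate the idea, admission to the training channel can be reduced to a simple check: the presented credential must verify cryptographically, and its issuer's public DID must appear on an allowlist of trusted issuers. The sketch below is purely illustrative - the DIDs, field names and helper function are hypothetical, not the team's code or the Hyperledger Aries API.

```python
# Hypothetical sketch: admit a participant to the training channel only if their
# credential proof verifies and the issuer DID is on the allowlist for their role.
# All identifiers below are illustrative placeholders.

TRUSTED_ISSUERS = {
    "hospital": {"did:example:nhs-trust"},     # e.g. an NHS Trust issues hospital credentials
    "researcher": {"did:example:regulator"},   # e.g. a regulatory authority issues researcher credentials
}

def is_authorised(presentation: dict, role: str) -> bool:
    """Return True only if the presentation verified and came from a trusted issuer."""
    issuer_did = presentation.get("issuer_did")
    proof_valid = presentation.get("proof_verified", False)  # set by the agent's verifier
    return proof_valid and issuer_did in TRUSTED_ISSUERS.get(role, set())

# Example: a hospital presenting a credential issued by the trusted DID is admitted.
hospital_presentation = {"issuer_did": "did:example:nhs-trust", "proof_verified": True}
assert is_authorised(hospital_presentation, role="hospital")
```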
The team described the process and results: “In our example an NHS Trust issued hospital credentials to hospitals, and a regulatory authority granted the researcher's credentials. After the credential validations on each side, using the public DIDs of the relevant issuers, the researcher sends his model to each participant with confidence that they are legitimate. The participants train the model on their raw data, and a secure aggregator then summarises their outputs before they are sent back to the researcher.
“The final federated trained model defends the researcher from malicious misuse such as model poisoning attacks, while at the same time protecting the privacy of the participants, since their raw data never leaves their premises.”
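The federated learning side can be sketched along similar lines. The snippet below is an illustrative example, not the team's implementation: each hospital computes a local update on data that never leaves its premises, the updates are blinded with pairwise masks that cancel out in the sum, and the researcher only ever receives the aggregate.

```python
# Illustrative sketch of federated averaging with a simple pairwise-masking
# secure aggregator. The researcher sees only the aggregated update, never any
# single hospital's contribution.
import numpy as np

rng = np.random.default_rng(0)
n_hospitals, dim = 3, 4
global_model = np.zeros(dim)

# Each hospital computes a local update on its own raw data (simulated here).
local_updates = [rng.normal(size=dim) for _ in range(n_hospitals)]

# Pairwise masks: hospital i adds the mask it shares with j, hospital j subtracts it,
# so every mask cancels in the sum while individual updates stay hidden.
masks = {}
for i in range(n_hospitals):
    for j in range(i + 1, n_hospitals):
        masks[(i, j)] = rng.normal(size=dim)

masked = []
for i, update in enumerate(local_updates):
    m = update.copy()
    for j in range(n_hospitals):
        if i < j:
            m += masks[(i, j)]
        elif j < i:
            m -= masks[(j, i)]
    masked.append(m)

# The secure aggregator sums the masked updates; the masks cancel exactly.
aggregate = np.sum(masked, axis=0)
assert np.allclose(aggregate, np.sum(local_updates, axis=0))

# The researcher applies only the averaged update to the global model.
global_model += aggregate / n_hospitals
```

Because the masks only cancel in the sum, the aggregator and the researcher learn nothing about any individual hospital's update beyond what the aggregate itself reveals.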
In essence, the BIL team created a model which could be trained via machine learning without sensitive information being exposed. The NHS and healthcare use-case is just one possible application of this concept. A similar approach could be applied to any industry where data privacy is sought or required: organisations would simply define a credential ecosystem and rules that make sense for them.
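For instance, a minimal and purely hypothetical definition of such an ecosystem might simply name the roles, the issuers trusted to certify them, and what each role is permitted to do in the workflow:

```python
# Hypothetical credential-ecosystem rules for another industry; all names are illustrative.
CREDENTIAL_RULES = {
    "data_holder": {
        "trusted_issuers": ["did:example:industry-regulator"],
        "permissions": ["train_locally", "send_masked_update"],
    },
    "researcher": {
        "trusted_issuers": ["did:example:ethics-board"],
        "permissions": ["distribute_model", "receive_aggregate"],
    },
}
```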
Given that there were only two days to complete the task, the PoC was not finished entirely to the team’s satisfaction; the biggest challenge was understanding the DIDComm protocol and how to use it to share the machine learning parameters. Nevertheless, the scope of their achievement is incredible and we're excited to see what they achieve next!