
Secure multi-party computation in deep learning: Enhancing privacy in distributed neural networks

Journal of Discrete Mathematical Sciences and Cryptography

Ensuring data privacy while applying Deep Learning (DL) to distributed datasets is an essential task in the current era of critical data security. Traditional methods typically force a trade-off between data privacy and model accuracy. Because privacy is of paramount importance in distributed data settings, this research presents a novel DL model based on Secure Multi-Party Computation (SMPC). In conventional approaches, the accuracy of the mathematical models and the confidentiality of the data are frequently compelled to coexist at each other's expense. To enable collaborative DL without compromising private information, the proposed system uses the Paillier Homomorphic Encryption Scheme (PHES). Through these cryptographic methods, the decentralized approach secures the confidentiality of data without relying on a Trusted Authority (TA). A thorough evaluation on the CIFAR-10 and IMDB datasets demonstrates that the system is simple and scalable and offers accuracy on par with conventional methods. By achieving a balance between the two competing demands of data security and computational performance, this method represents a significant advance in privacy-preserving DL.
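The abstract names the Paillier Homomorphic Encryption Scheme as the mechanism that lets parties aggregate model contributions without revealing plaintexts. The property that makes this work is Paillier's additive homomorphism: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The following is a minimal sketch of that property with toy parameters; the primes, key sizes, and helper names are illustrative assumptions, not details from the paper, and a real deployment would use a vetted library with 2048-bit or larger moduli.

```python
import math
import random

# Toy Paillier keypair (demonstration only -- the small fixed primes
# below are an assumption for readability, not a secure choice).
p, q = 1009, 1013
n = p * q
n_sq = n * n
g = n + 1                        # standard simplified generator choice
lam = math.lcm(p - 1, q - 1)     # private key component lambda
mu = pow(lam, -1, n)             # valid because L(g^lam mod n^2) = lam mod n when g = n + 1

def L(u):
    # The L function from the Paillier construction: L(u) = (u - 1) / n.
    return (u - 1) // n

def encrypt(m):
    # c = g^m * r^n mod n^2, with r random and coprime to n.
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    # m = L(c^lam mod n^2) * mu mod n.
    return (L(pow(c, lam, n_sq)) * mu) % n

# Additive homomorphism: the product of ciphertexts decrypts
# to the sum of the plaintexts -- no party sees the other's input.
c1, c2 = encrypt(123), encrypt(456)
print(decrypt((c1 * c2) % n_sq))  # prints 579
```

In an SMPC aggregation setting of the kind the paper describes, each party would encrypt its local model update, the products of the ciphertexts would be combined by an aggregator that never holds the decryption key, and only the summed update would be decrypted.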

Authors: P. Vidya Sagar, Hayder M. A. Ghanimi, L. Arokia Jesu Prabhu, Linesh Raja, Pankaj Dadheech, Sudhakar Sengan

DOI: https://doi.org/10.47974/jdmsc-1879

Publication Year: 2024