
I am an Assistant Professor in the Department of Computer Science and Engineering at Bharati Vidyapeeth’s College of Engineering, Delhi. My academic and research contributions span cybersecurity, cloud computing, data analytics, and AI-driven digital transformation. I have published extensively in reputed journals and conferences, serve as a reviewer and editorial board member for IEEE, Elsevier, MDPI, and Frontiers venues, and actively mentor student innovation, incubation, and startup initiatives. Currently, I am developing an AI-driven Cyber Maturity Index framework to strengthen the digital security posture of higher-education and industry institutions in India.
Research interests: Cybersecurity and Digital Forensics; Cloud Computing and Security; Artificial Intelligence and Machine Learning Applications; Business Analytics and Data Mining; Secure Digital Transformation Strategies; Innovation, Entrepreneurship, and Startup Ecosystems; Cyber Maturity Models and Risk Assessment Frameworks; Higher Education Policy, Accreditation, and Technology Adoption
A company's most valuable resource is its workforce. Because employees play a crucial role in organizational success, the employee turnover (attrition) rate has become one of the most important metrics that businesses track today. Attrition sometimes arises from unavoidable circumstances such as relocation or retirement, but when it begins to drain an organization's finances, the situation must be monitored closely. Hiring new staff consumes a significant share of a company's resources. To avoid repeated rehiring and maintain a strong workforce, it is necessary to systematically analyze machine learning models and select one that reliably gauges the risk of attrition. This not only saves money by preserving organizational resources but also helps maintain the stability of the existing staff.
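As a hedged illustration of the model comparison described above (not the paper's actual pipeline or data), the sketch below trains two candidate attrition-risk classifiers with scikit-learn and compares their cross-validated ROC-AUC; the file name hr_data.csv and the Attrition column are hypothetical.

```python
# Sketch only: compare candidate attrition-risk models on a tabular HR dataset.
# "hr_data.csv" and the "Attrition" column are hypothetical placeholders.
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("hr_data.csv")                      # hypothetical HR export
y = (df["Attrition"] == "Yes").astype(int)           # 1 = employee left
X = df.drop(columns=["Attrition"])

prep = make_column_transformer(
    (StandardScaler(), X.select_dtypes("number").columns),
    (OneHotEncoder(handle_unknown="ignore"), X.select_dtypes("object").columns),
)

for name, clf in [("logistic_regression", LogisticRegression(max_iter=1000)),
                  ("random_forest", RandomForestClassifier(n_estimators=200))]:
    pipe = make_pipeline(prep, clf)
    scores = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean ROC-AUC = {scores.mean():.3f}")
```

A model chosen this way can then score current employees so that retention efforts focus on those at highest estimated risk.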
The Internet of Things (IoT) has recently transformed established industry sectors into smart infrastructure with 6G data-driven design. Because of its decentralization, transparency, inherent privacy and security, confidentiality, and flourishing smart application areas such as IoT innovation and Industry 4.0, blockchain technology (BCT) has attracted a great deal of attention. These facts motivated this paper's extensive survey, which focuses on the potential benefits and difficulties of integrating blockchain technology into 6G cellular networks, the Industrial IoT, and smart industries. The difficulties include power-grid sharing, computational load, response time, bandwidth overhead, business models, sustainability goals, and edge intelligence. Researchers have emphasized the combination of blockchain and IoT to enable intelligence distribution in the future Industrial IoT, as well as the 6G technology model needed for the effective deployment of BCT schemes. The paper discusses the open issues currently being faced, mitigating strategies, and potential future research directions that could help realize this vision.
Machine learning (ML) and artificial intelligence (AI) are two rapidly expanding technologies that have the potential to completely change how organisations conduct business online. AI entails the development of intelligent algorithms and systems that can perform activities that generally require human-like intellect, while ML is a subset of AI that involves training algorithms to recognise patterns and make predictions from data. Businesses may improve customer experiences, optimise operations, and boost profitability by integrating AI and ML. This study analyses the possible advantages of ML and AI integration in e-businesses. A number of application cases, including supply chain management, fraud detection, sales and marketing, and customer support, are analysed. Furthermore, the study also examines some of the drawbacks and shortcomings of these technologies, such as the requirement for large amounts of data and the possibility of bias. Overall, the study concludes that the combination of AI and ML has the potential to improve e-commerce operations by bringing fresh perspectives, boosting productivity, and enhancing the overall customer experience.
More and more security attacks today are perpetrated by exploiting the hardware: memory errors can be exploited to take over systems, side-channel attacks leak secrets to the outside world, weak random number generators render cryptography ineffective, and so on. At the same time, many of the tenets of efficient design are in tension with guaranteeing security. For instance, classic secure hardware does not allow optimizing common execution patterns, sharing resources, or providing deep introspection.
Machine learning and deep learning algorithms are currently in huge demand in the field of image classification. This paper illustrates the relevance of these two approaches for improving treatment systems in the coming years. In this context, the evolution of machine learning from its invention to the modern stage is discussed. The limitations of machine learning are elaborated and, as a consequence, the emergence of deep learning is depicted. The study also provides insight into the different learning processes and their relevance to image classification and image analysis. People's opinions about this aspect were gathered and are presented in a survey format. The overall presentation offers a precise discussion of the relevance and evaluation of artificial learning systems in the context of image classification, which can be used in diagnostic processes. The different fields of medical treatment that are benefiting from the introduction of deep learning networks and convolutional neural networks are also discussed. Finally, a conclusive summary of the machine learning process and its pros and cons is provided to give an idea of where machine learning currently stands in the field of medical science.
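To make the discussion concrete, here is a minimal convolutional network of the kind used for medical image classification; it is an illustrative PyTorch sketch only, and the grayscale 64x64 input size and two-class output are assumptions rather than details from the study.

```python
# Illustrative sketch: a small CNN for two-class medical image classification.
# Input size (1 x 64 x 64) and class count are assumptions, not study details.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)                  # (N, 32, 16, 16) for 64x64 inputs
        return self.classifier(x.flatten(1))  # class scores

model = SmallCNN()
scores = model(torch.randn(4, 1, 64, 64))     # dummy batch of grayscale scans
print(scores.shape)                           # torch.Size([4, 2])
```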
Communication protocols are the guidelines governing the transmission of data between the entities of vehicular networks. These protocols play a crucial role in enabling optimal connectivity and ensuring safety and traffic management. The effective functioning of a V2V network depends on the right choice of communication protocols, as a mismatch in selection results in incompatibility, performance degradation, congestion, and other security issues. This chapter focuses primarily on applying a multi-criteria approach to the optimal selection of communication protocols for the V2V network. The decision-making problem comprises the alternatives IEEE 802.11p, Cellular-V2X, Dedicated Short-Range Communication (DSRC), LTE-V2X, IEEE 1609.x, and ITS-G5. The criteria considered are compatibility, security, data rate, range, scalability, and spectral efficiency. Different MCDM approaches such as AHP, the entropy method, and FUCOM are applied to the linguistic decision matrix to obtain the criterion weights, and methods such as COPRAS, SMART, and MAIRCA are applied to rank the alternatives. A comparative analysis is made to determine the validity of the criterion weights and ranking results. These combined MCDM approaches can be applied to other decision-making scenarios in vehicular networking to design optimal solutions.
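The sketch below illustrates one of the weight/ranking pairings named above, entropy weighting followed by COPRAS, on made-up scores; the numbers are illustrative only (not the chapter's decision matrix), and all six criteria are treated as benefit-type, which reduces COPRAS to a weighted normalized sum.

```python
# Illustrative only: entropy criterion weights + COPRAS ranking for the six
# V2V protocol alternatives. The score matrix below is invented, not the
# chapter's data, and all criteria are treated as benefit criteria.
import numpy as np

alts = ["IEEE 802.11p", "Cellular-V2X", "DSRC", "LTE-V2X", "IEEE 1609.x", "ITS-G5"]
# columns: Compatibility, Security, Data rate, Range, Scalability, Spectral eff.
X = np.array([[7, 6, 6, 7, 5, 5],
              [8, 7, 9, 8, 8, 8],
              [7, 6, 6, 7, 5, 5],
              [8, 7, 8, 8, 8, 7],
              [6, 7, 6, 6, 6, 6],
              [7, 6, 6, 7, 6, 5]], dtype=float)

# Entropy method: weights reflect how much each criterion's scores vary.
P = X / X.sum(axis=0)
k = 1.0 / np.log(X.shape[0])
e = -k * (P * np.log(P)).sum(axis=0)
w = (1 - e) / (1 - e).sum()

# COPRAS with benefit-only criteria: utility from the weighted normalized sums.
S_plus = (P * w).sum(axis=1)
utility = 100 * S_plus / S_plus.max()
for name, u in sorted(zip(alts, utility), key=lambda t: -t[1]):
    print(f"{name:12s} utility = {u:5.1f}")
```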
Over the last ten years, data mining has become more important, and healthcare research has seen a significant uptick in activity. Most of the applications presented fall into two distinct classes: decision support and policy formulation. It is still difficult to come across literature in the field of healthcare that is worth reading. This review article provides an overview of current health-sector studies using a variety of DM approaches and algorithms to examine diseases such as cancer, diabetes, HIV, and skin-related disorders and their accurate assessment. Notable findings that shaped the article are also highlighted.
It is not uncommon for modern systems to be composed of a variety of interacting services, running across multiple machines in such a way that most developers do not really understand the whole system. As abstraction is layered atop abstraction, developers gain the ability to compose systems of extraordinary complexity with relative ease. However, many software properties, especially those that cut across abstraction layers, become very difficult to understand in such compositions. The communication patterns involved, the privacy of critical data, and the provenance of information, can be difficult to find and understand, even with access to all of the source code. The goal of Dataflow Tomography is to use the inherent information flow of such systems to help visualize the interactions between complex and interwoven components across multiple layers of abstraction. In the same way that the injection of short-lived radioactive isotopes helps doctors trace problems in the cardiovascular system, the use of “data tagging” can help developers slice through the extraneous layers of software and pinpoint those portions of the system interacting with the data of interest. To demonstrate the feasibility of this approach we have developed a prototype system in which tags are tracked both through the machine and in between machines over the network, and from which novel visualizations of the whole system can be derived. We describe the system-level challenges in creating a working system tomography tool and we qualitatively evaluate our system by examining several example real world scenarios.
In this work, a user's emotion is detected from facial expressions. These expressions can be derived from a live feed via the system's camera or from any pre-existing image available in memory. Recognizing human emotion has a vast scope of study in the computer vision industry, and several research efforts have already addressed it. The work has been implemented using Python 2.7, the Open Source Computer Vision Library (OpenCV), and NumPy. The scanned image (testing dataset) is compared against the training dataset, and the emotion is thus predicted. The objective of this paper is to develop a system that can analyze an image and predict the expression of the person in it. The study shows that this procedure is workable and produces valid results.
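A rough modern re-creation of the pipeline (face detection followed by comparison against labelled training faces) is sketched below; it uses OpenCV's bundled Haar cascade and a nearest-neighbour comparison as a stand-in for the paper's matching step, and the image file name and training arrays are placeholders.

```python
# Sketch under assumptions: detect a face with OpenCV's bundled Haar cascade,
# then label it by nearest-neighbour comparison against a small labelled
# training set (train_faces / train_labels are placeholders).
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(gray):
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return cv2.resize(gray[y:y + h, x:x + w], (48, 48))

def predict_emotion(face, train_faces, train_labels):
    # Return the label of the most similar training face (pixel-space distance).
    dists = [np.linalg.norm(face.astype(float) - t.astype(float))
             for t in train_faces]
    return train_labels[int(np.argmin(dists))]

frame = cv2.imread("test_image.jpg")   # or a frame grabbed from cv2.VideoCapture(0)
face = detect_face(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
```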
As information technology and wireless technology have developed, digital archive management systems have grown in popularity. Electronic files and data are mostly stored in access-controlled databases, whereas traditional paper archives have inherent uniqueness and strong resistance to tampering. With the use of the Internet, marketers can now connect with their current clients on a deeper level, create new online markets, and generate new demand. This active market participation targets clients more successfully using current technology. This study examines how blockchain technology might affect a company's marketing initiatives, combining distributed ledgers, consensus procedures, encryption methods, and blockchain technology. Businesses that wish to engage in green products have access to a number of incentives. Electronic information management is therefore necessary because of its ability to securely retain and access large volumes of data while preserving the confidentiality of computer system functionality.
Chronic kidney disease (CKD), a significant public health issue, affects millions of individuals globally. Because the progression to end-stage renal disease must be stopped or reversed, it is crucial to detect CKD early so that therapy can begin. Machine learning techniques, with their high classification accuracy, are becoming increasingly significant in medical diagnosis and offer a second line of support for CKD treatment decisions. In this setting, deep learning is used to study CKD and its associated issues. Three distinct model types were constructed: one using the retinal fundus image alone (test model), one using covariates only (reference model), and one using the retinal fundus image plus covariates (hybrid model). To maintain the accuracy of contemporary classification systems, feature selection techniques must be applied correctly to reduce data size. This paper recommends the fruit fly optimisation algorithm (FFOA) and a heterogeneous modified artificial neural network (HMANN) for efficient disease categorization. An Internet of Medical Things (IoMT) platform is presented for the early detection, segmentation, and diagnosis of chronic renal failure using the HMANN. The suggested HMANN builds on Multilayer Perceptron (MLP) and Support Vector Machine (SVM) classifiers, and the optimal features are chosen using FFOA from a large pool of candidates. The proposed method takes ultrasound images as input and, as a first processing step, segments a region of interest in the kidney. The performance of the suggested CKD classification system was evaluated in terms of accuracy, sensitivity, specificity, positive predictive value, negative predictive value, false positive rate, and false negative rate.
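A minimal sketch of the classification stage is shown below; scikit-learn's SelectKBest is used as a simple stand-in for the paper's FFOA feature selection, and the CSV name and "class" column are hypothetical, so this illustrates the MLP/SVM comparison rather than the proposed HMANN itself.

```python
# Sketch: feature selection + MLP/SVM classification of CKD feature vectors.
# SelectKBest stands in for the paper's FFOA; file/column names are placeholders.
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("ckd_features.csv")          # hypothetical extracted-feature table
y = df["class"]
X = df.drop(columns=["class"])

for name, clf in [("MLP", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000)),
                  ("SVM", SVC(kernel="rbf"))]:
    pipe = make_pipeline(StandardScaler(),
                         SelectKBest(f_classif, k=10),   # stand-in for FFOA
                         clf)
    print(f"{name}: mean CV accuracy = {cross_val_score(pipe, X, y, cv=5).mean():.3f}")
```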
To get around the inherent constraints of individual imaging modalities, multimodal fusion in neuroimaging integrates data from various imaging modalities. Higher temporal and spatial precision, improved contrast, correction of imaging distortions, and the bridging of physiological and cognitive data can all be achieved through neuroimaging fusion. This analysis examines the fusion and optimization of multimodal neuroimaging techniques and a multimodal neuroimaging-based technique for measuring brain fatigue. Four-dimensional consistency of local neural activities (FOCA) and local multimodal serial analysis (LMSA) are presented to naturally merge electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). With temporal precision derived from EEG and spatial precision from fMRI, the time-space matching in the data fusion system achieves acceptable outcomes, with temporal precision above 88% and spatial precision above 89%.
Trusted applications frequently execute in tandem with untrusted applications on personal devices and in cloud environments. Since these co-scheduled applications share hardware resources, the latencies encountered by the untrusted application betray information about whether the trusted applications are accessing shared resources or not. Prior studies have shown that such information leaks can be used by the untrusted application to decipher keys or launch covert-channel attacks. Prior work has also proposed techniques to eliminate information leakage in various shared resources. The best known solution to eliminate information leakage in the memory system incurs high performance penalties. This work develops a comprehensive approach to eliminate timing channels in the memory controller that has two key elements: (i) We shape the memory access behavior of each thread so that it has an unchanging memory access pattern. (ii) We show how efficient memory access pipelines can be constructed to process the resulting memory accesses without introducing any resource conflicts. We mathematically show that the proposed system yields zero information leakage. We then show that various page mapping policies can impact the throughput of our secure memory system. We also introduce techniques to re-order requests from different threads to boost performance without leaking information. Our best solution offers throughput that is 27% lower than that of an optimized non-secure baseline, and that is 69% higher than the best known competing scheme.
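To convey the shaping idea (i), here is a conceptual software model, not the paper's memory-controller hardware: each thread is issued exactly one request per fixed period, with a dummy request substituted when its queue is empty, so the timing another thread can observe is independent of actual demand.

```python
# Conceptual model of constant-rate memory traffic shaping (not the paper's RTL):
# every thread issues one request per period, real or dummy, so its externally
# visible access pattern never changes with its workload.
from collections import deque

DUMMY = ("dummy", None)

class ConstantRateShaper:
    def __init__(self, num_threads):
        self.queues = [deque() for _ in range(num_threads)]

    def enqueue(self, thread_id, addr):
        self.queues[thread_id].append(("real", addr))

    def tick(self):
        """One shaping period: emit exactly one request per thread."""
        return [q.popleft() if q else DUMMY for q in self.queues]

shaper = ConstantRateShaper(num_threads=2)
shaper.enqueue(0, 0x1000)
print(shaper.tick())   # [('real', 4096), ('dummy', None)]
print(shaper.tick())   # [('dummy', None), ('dummy', None)]
```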
Nanophotonic architectures have recently been proposed as a path to providing low latency, high bandwidth network-on-chips. These proposals have primarily been based on micro-ring resonator modulators which, while capable of operating at tremendous speed, are known to have both a high manufacturing induced variability and a high degree of temperature dependence. The most common solution to these two problems is to introduce small heaters to control the temperature of the ring directly, which can significantly reduce overall power efficiency. In this paper, we introduce plasmonics as a complementary technology. While plasmonic devices have several important advantages, they come with their own new set of restrictions, including propagation loss and lack of wave division multiplexing (WDM) support. To overcome these challenges we propose a new hybrid photonic/plasmonic channel that can support WDM through the use of photonic micro-ring resonators as variation tolerant passive filters. Our aim is to exploit the best of both technologies: wave-guiding of photonics, and modulating using plasmonics. This channel provides moderate bandwidth with distance independent power consumption and a higher degree of temperature and process variation tolerance. We describe the state of plasmonics research, present architecturally-useful models of many of the most important devices, explore new ways in which the limitations of the technology can most readily be minimized, and quantify the applicability of these novel hybrid schemes across a variety of interconnect strategies. Our link-level analysis shows that the hybrid channel can save from 28% to 45% of total channel energy-cost per bit depending on process variation conditions.
In recent times, the massive growth of data has gradually changed the significance of data security and data analysis methods for Big Data. An intrusion detection system (IDS) is a scheme that analyzes and monitors data to detect intrusions in a system or network. The massive volume, variety, and velocity of data created in networks make attack detection with typical approaches highly complex. Big Data systems can be utilized in IDS to manage Big Data for accurate and effective data analysis. This study develops an Intrusion Detection Approach using a Hierarchical Deep Learning-based Butterfly Optimization Algorithm (ID-HDLBOA) on a Big Data platform. The presented ID-HDLBOA technique combines DL with a hyperparameter tuning process. In the ID-HDLBOA technique, a hierarchical LSTM model is used for intrusion detection. Finally, BOA is used as a hyperparameter tuning strategy for the LSTM model, resulting in improved detection efficiency. The experimental validation of the ID-HDLBOA technique was assessed on a benchmark intrusion dataset, where the model achieved an accuracy of 98%. Wide-ranging experiments were performed, and the outcomes emphasized the supremacy of the ID-HDLBOA algorithm.
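The sketch below conveys the two ingredients, an LSTM classifier plus hyperparameter search, in PyTorch; a tiny random search stands in for the butterfly optimization, and the feature count, class count, and synthetic data are placeholders rather than the benchmark dataset.

```python
# Sketch: LSTM intrusion classifier + a small random hyperparameter search
# standing in for BOA. Feature/class counts and the data are synthetic.
import random
import torch
import torch.nn as nn

class LSTMDetector(nn.Module):
    def __init__(self, n_features=41, hidden=64, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # classify from the last time step

def fit_and_score(hidden, lr, X, y):
    model = LSTMDetector(hidden=hidden)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(20):                    # a few quick epochs for illustration
        opt.zero_grad()
        nn.functional.cross_entropy(model(X), y).backward()
        opt.step()
    return (model(X).argmax(1) == y).float().mean().item()

X = torch.randn(256, 10, 41)               # synthetic traffic windows
y = torch.randint(0, 5, (256,))
trials = [(random.choice([32, 64, 128]), random.choice([1e-3, 3e-3])) for _ in range(4)]
best = max((fit_and_score(h, lr, X, y), h, lr) for h, lr in trials)
print("best accuracy, hidden size, lr:", best)
```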
This study examines the effect of green finance and digitalization on the business performance of SMEs, with a particular focus on the mediating role of sustainable business practices in the relationships between green finance and business performance and between digitalization and business performance. A quantitative research method was employed: primary data with a sample size of 135 were collected from field surveys conducted inside the Kathmandu valley through a structured questionnaire developed using a 5-point Likert scale. The collected data were coded into SPSS for descriptive and inferential analysis, using regression analysis and Baron and Kenny's mediation analysis to assess the relationships between the variables. The findings indicated that both green finance and digitalization positively impact the business performance of SMEs. Sustainable business practices partially mediate the relationship between green finance and performance, but no such mediating effect was found for digitalization. These findings underscore the importance of aligning green finance initiatives with sustainable business practices to achieve improved performance outcomes among SMEs.
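For readers unfamiliar with the Baron and Kenny procedure, the sketch below shows its three regression steps with statsmodels; the column names (green_finance, sustainable_practice, performance) are placeholders for the survey constructs, not the study's actual variables.

```python
# Sketch of Baron & Kenny's mediation steps; column names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sme_survey.csv")   # hypothetical coded Likert-scale responses

# Step 1: X -> Y   (total effect of green finance on performance)
m1 = smf.ols("performance ~ green_finance", data=df).fit()
# Step 2: X -> M   (green finance must predict the mediator)
m2 = smf.ols("sustainable_practice ~ green_finance", data=df).fit()
# Step 3: X + M -> Y  (partial mediation if X stays significant but shrinks)
m3 = smf.ols("performance ~ green_finance + sustainable_practice", data=df).fit()

print("total effect:", m1.params["green_finance"])
print("direct effect:", m3.params["green_finance"])
```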
In recent years, additive manufacturing (AM) has come to play a vital role in manufacturing components compared to subtractive manufacturing. AM has wide advantages in producing complex parts and is revolutionizing the logistics landscape worldwide. Many researchers have compared this emerging manufacturing methodology with conventional methodology and found that it helps meet demand, design highly complex components, and reduce material wastage, and that there is a wide variety of AM processes. The process of making components that fully exploits the technology across several manufacturing applications is studied along with the properties of AM, and subsequently the advantages of AM over subtractive methods are described. The achievements made in this manner, with considerable gains, are studied and concluded to represent a paradigm shift toward fulfilling the potential of AM.
In 2020, Novartis Pharmaceuticals Corporation and the U.S. Food and Drug Administration (FDA) started a 4-year scientific collaboration to approach complex new data modalities and advanced analytics. The scientific question was to find novel radio-genomics-based prognostic and predictive factors for HR+/HER2- metastatic breast cancer under a Research Collaboration Agreement. This collaboration has been providing valuable insights to help successfully implement future scientific projects, particularly using artificial intelligence and machine learning. This tutorial aims to provide tangible guidelines for a multi-omics project that includes multidisciplinary expert teams, spanning across different institutions. We cover key ideas, such as "maintaining effective communication" and "following good data science practices," followed by the four steps of exploratory projects, namely (1) plan, (2) design, (3) develop, and (4) disseminate. We break each step into smaller concepts with strategies for implementation and provide illustrations from our collaboration to further give the readers actionable guidance.
Deep learning has attracted great interest in the field of big data analytics due to its feature extraction and classification properties. Traditionally, researchers used machine learning algorithms to classify big data; however, feature extraction was carried out by a human-driven process. Deep learning approaches were therefore developed to carry out feature extraction algorithmically. In this research, a Convolutional Neural Network (CNN) was selected to study the accuracy of big data analytics. Primary research was carried out to understand how hidden layers and nodes impact the accuracy of this neural network. Moreover, CNN was compared with other neural networks, namely the Recurrent Neural Network (RNN) and the Artificial Neural Network (ANN), to determine whether it outperforms them. In the primary research, CNN and the other algorithms were tested for accuracy against hidden layers, nodes, and training and validation time. Regression and correlation analyses were carried out with training time, validation time, hidden layers, and hidden nodes as independent variables and the CNN, ANN, and RNN accuracies as dependent variables. Findings showed that CNN is 92% accurate, whereas the other neural networks achieve less than 90% accuracy in big data analytics. The number of hidden nodes has a significant positive impact on the accuracy of CNN.
Key exchange protocols establish a secret key to confidentially communicate digital information over public channels. Lattice-based key exchange protocols are a promising alternative for next-generation applications due to their quantum-cryptanalysis resistance and implementation efficiency. While these constructions rely on the theory of quantum-resistant lattice problems, their practical implementations have shown vulnerability against side-channel attacks in the context of public-key encryption or digital signatures. Applying such attacks on key exchange protocols is, however, much more challenging because the secret key changes after each execution of the protocol, limiting the side-channel adversary to a single measurement. In this paper, we demonstrate the first successful power side-channel attack on lattice-based key exchange protocols. The attack targets the hardware implementation of matrix and polynomial multiplication used in these protocols. The crux of our idea is to apply a horizontal attack that makes hypothesis on several intermediate values within a single execution all relating to the same secret and to combine their correlations for accurately estimating the secret key. We illustrate that the design of key exchange protocols combined with the nature of lattice arithmetic enables our attack. Since a straightforward attack suffers from false positives, we demonstrate a novel procedure to recover the key by following the sequence of intermediate updates during multiplication. We analyzed two key exchange protocols, NewHope (USENIX'16) and Frodo (CCS'16), and show that their implementations can be vulnerable to our attack. We test the effectiveness of the proposed attack using concrete parameters of these protocols on a physical platform with real measurements. On a SAKURA-G FPGA Board, we show that the proposed attack can estimate the entire secret key from a single power measurement with over 99% success rate.
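The following toy simulation (not the attacked NewHope/Frodo implementations) conveys the horizontal idea: many intermediate products within one execution all depend on the same secret coefficient, so their Hamming-weight hypotheses can be correlated against a single simulated trace to rank candidates; the small modulus and noise model are illustrative choices.

```python
# Toy horizontal correlation attack on one secret coefficient. The modulus,
# leakage model (Hamming weight + Gaussian noise), and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
q = 256                                         # toy modulus, far smaller than real schemes
hw = lambda v: np.array([bin(int(x)).count("1") for x in v])

secret = 173
public = rng.integers(1, q, size=500)           # known multiplicands in ONE execution
trace = hw((public * secret) % q) + rng.normal(0, 0.5, size=public.size)

def score(candidate):
    hyp = hw((public * candidate) % q)          # hypothesized intermediates
    return np.corrcoef(hyp, trace)[0, 1]

best = max(range(1, q), key=score)              # skip 0: constant hypothesis
print("recovered:", best, "true:", secret)      # typically recovers the true value
```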
Traffic jams are caused by the rise in traffic volume accompanying the accelerated growth of roadway facilities. A similar situation exists in the Sultanate of Oman. Road traffic is one of the biggest issues in Muscat and similar towns in the Sultanate, driven mostly by the rapid increase in the number of automobiles over a short period of time. It is necessary to create an IoT-based transportation management technology to mitigate the effects of road gridlock. The suggested method is founded on a calculation of the real vehicle volume on the roadway, using real video and image analysis tools. To determine traffic density, photographs taken and stored on the computer are matched against live pictures from the cameras. The goal is to manage transportation by calculating the amount of traffic on every side of the route and providing the user with an application-based traffic signal management option.
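A rough sketch of the stored-image-versus-live-frame comparison is given below using OpenCV frame differencing; the file names, threshold, and 25% trigger level are illustrative assumptions, not values from the study.

```python
# Sketch: estimate relative traffic density by differencing a live frame
# against a stored empty-road reference. File names and thresholds are
# illustrative placeholders.
import cv2

reference = cv2.imread("empty_road.jpg", cv2.IMREAD_GRAYSCALE)   # stored baseline
live = cv2.imread("live_frame.jpg", cv2.IMREAD_GRAYSCALE)        # camera snapshot
live = cv2.resize(live, (reference.shape[1], reference.shape[0]))

diff = cv2.absdiff(reference, live)
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
density = cv2.countNonZero(mask) / mask.size      # fraction of changed pixels

print(f"occupied fraction: {density:.2%}")
if density > 0.25:                                # illustrative trigger level
    print("suggest extending the green phase on this approach")
```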
Key exchange protocols and key encapsulation mechanisms establish secret keys to communicate digital information confidentially over public channels. Lattice-based cryptography variants of these protocols are promising alternatives given their quantum-cryptanalysis resistance and implementation efficiency. Although lattice cryptosystems can be mathematically secure, their implementations have shown side-channel vulnerabilities. But such attacks largely presume collecting multiple measurements under a fixed key, leaving the more dangerous single-trace attacks unexplored. This article demonstrates successful single-trace power side-channel attacks on lattice-based key exchange and encapsulation protocols. Our attack targets both hardware and software implementations of matrix multiplications used in lattice cryptosystems. The crux of our idea is to apply a horizontal attack that makes hypotheses on several intermediate values within a single execution all relating to the same secret, and to combine their correlations for accurately estimating the secret key. We illustrate that the design of protocols combined with the nature of lattice arithmetic enables our attack. Since a straightforward attack suffers from false positives, we demonstrate a novel extend-and-prune procedure to recover the key by following the sequence of intermediate updates during multiplication. We analyzed two protocols, Frodo and FrodoKEM, and reveal that they are vulnerable to our attack. We implement both stand-alone hardware and RISC-V based software realizations and test the effectiveness of the proposed attack by using concrete parameters of these protocols on physical platforms with real measurements. We show that the proposed attack can estimate secret keys from a single power measurement with over 99% success rate.
High assurance systems used in avionics, medical implants, and cryptographic devices often rely on a small trusted base of hardware and software to manage the rest of the system. Crafting the core of such a system in a way that achieves flexibility, security, and performance requires a careful balancing act. Simple static primitives with hard partitions of space and time are easier to analyze formally, but strict approaches to the problem at the hardware level have been extremely restrictive, failing to allow even the simplest of dynamic behaviors to be expressed. Our approach to this problem is to construct a minimal but configurable architectural skeleton. This skeleton couples a critical slice of the low level hardware implementation with a microkernel in a way that allows information flow properties of the entire construction to be statically verified all the way down to its gate-level implementation. This strict structure is then made usable by a runtime system that delivers more traditional services (e.g. communication interfaces and long-living contexts) in a way that is decoupled from the information flow properties of the skeleton. To test the viability of this approach we design, test, and statically verify the information-flow security of a hardware/software system complete with support for unbounded operation, inter-process communication, pipelined operation, and I/O with traditional devices. The resulting system is provably sound even when adversaries are allowed to execute arbitrary code on the machine, yet is flexible enough to allow caching, pipelining, and other common case optimizations.
Attacks often succeed by abusing the gap between program and machine-level semantics-- for example, by locating a sensitive pointer, exploiting a bug to overwrite this sensitive data, and hijacking the victim program's execution. In this work, we take secure system design on the offensive by continuously obfuscating information that attackers need but normal programs do not use, such as representation of code and pointers or the exact location of code and data. Our secure hardware architecture, Morpheus, combines two powerful protections: ensembles of moving target defenses and churn. Ensembles of moving target defenses randomize key program values (e.g., relocating pointers and encrypting code and pointers) which forces attackers to extensively probe the system prior to an attack. To ensure attack probes fail, the architecture incorporates churn to transparently re-randomize program values underneath the running system. With frequent churn, systems quickly become impractically difficult to penetrate. We demonstrate Morpheus through a RISC-V-based prototype designed to stop control-flow attacks. Each moving target defense in Morpheus uses hardware support to individually offer more randomness at a lower cost than previous techniques. When ensembled with churn, Morpheus defenses offer strong protection against control-flow attacks, with our security testing and performance studies revealing: i) high-coverage protection for a broad array of control-flow attacks, including protections for advanced attacks and an attack disclosed after the design of Morpheus, and ii) negligible performance impacts (1%) with churn periods up to 50 ms, which our study estimates to be at least 5000x faster than the time necessary to possibly penetrate Morpheus.
Wireless sensor network (WSN) applications are growing day by day owing to numerous global uses (by the military, for monitoring the atmosphere, in disaster relief, and so on), and trust management is a main challenge. Sensor nodes are important in wireless sensor networks, but they are easily depleted because of their short lifespan, continuous sensing activity, and low battery capacity, so efficient energy utilization is a challenging task in a WSN. To minimize energy loss, clustering with an optimal path selection process is needed to retain energy in sensor nodes. This manuscript proposes multi-objective Pelican Optimization Algorithm (POA) routing to maintain energy efficiency and minimize transmission distances in wireless sensor networks. A cluster head (CH) is selected by using a Separable Convolution Neural Network (SCNN). Simulation outcomes show the proposed technique attains 22.3% and 25.04% improvements in energy consumption compared to the Multi-Objective CH Energy-aware Optimized Routing Approach for WSN (MOCH-EORA-WSN) and the Multiple Optimum Cluster Head Multi-Objective Grasshopper Optimization with Harmony-search for WSN (MOCH-MOGOH-WSN), respectively.
Tools such as multi-threaded data race detectors, memory bounds checkers, dynamic type analyzers, data flight recorders, and various performance profilers are becoming increasingly vital aids to software developers. Rather than performing all the instrumentation and analysis on the main processor, we exploit the fact that increasingly high-throughput board level interconnect is available on many systems, a fact we use to offload analysis to an off-chip accelerator. We characterize the potential of such a system to both accelerate existing software development tools and enable a new class of heavyweight tools. There are many non-trivial technical issues in taking such an approach that may not appear in simulation, and to flush them out we have developed a prototype system that maps a DMA based analysis engine, sitting on a PCI-mounted FPGA, into the Valgrind instrumentation framework. With our novel instrumentation methods, we demonstrate that program analysis speedups of 29% to 440% could be achieved today with strictly off-the-shelf components on some of the state-of-the-art tools, and we carefully quantify the bottlenecks to illuminate several new opportunities for further architectural innovation.
This study's main goal was to assess the impact of IoT adoption in smart cities. The objectives were to determine how IoT is being used in smart cities, which approaches are being employed, and, thirdly, the value of IoT. A qualitative approach was chosen for gathering secondary data, which made it possible to compile material from numerous internet articles; Google Scholar was also used as a tool for collecting secondary sources. This made it possible to recognise the significance of IoT in smart cities, and a descriptive evaluation method was used to examine the data gathered, allowing us to understand the value and underlying technology of IoT applications in smart cities. Digital tools and technology are the primary focus of IoT applications in smart cities. IoT's cutting-edge technology can manage traffic and cut waste in urban areas, and municipalities are employing these technologies to concentrate on enhancing infrastructure. The population's lifestyle has undergone significant change as a result of numerous IoT-related aspects. Overall, the gathering of secondary data and the descriptive method of analysis have helped clarify the significance of IoT technology in smart cities.
The increasing need for services with different QoS (Quality of Service) requirements motivated the deployment of 5G wireless communication networks. While a single logical network was adequate in theory, this need has given rise to network slicing, in which multiple independent logical networks are provided on shared infrastructure, each catering to service-specific requirements. However, a 5G network is far more dynamic and large-scale, so network slicing and resource allocation become significantly harder. This work explores a CNN-based deep learning approach for better network slicing in 5G networks. Because the CNN algorithm searches for spatial patterns in data, we are able to provision resource allocation and QoS parameters for each slice automatically at the network level. The network slicing framework allows great flexibility in responding to highly dynamic network conditions and service demands, an advantage that is best leveraged with deep learning. CNN models discover spatial patterns in network data with high accuracy, which can be significantly beneficial for optimizing resource usage and prediction for different network slices. In this way, QoS provisioning is enhanced over conventional approaches, resulting in better network performance and higher resource utilization. This paper also discusses the trade-off between the model complexity of CNNs, their computational requirements and scalability, and the corresponding performance improvement in practical deployment scenarios on 5G networks. By and large, deep learning can significantly improve the efficiency of 5G networks, making them more accommodating and adaptable to the various services and applications placed on top of the 5G ecosystem.
Data containers enable users to control access to their data while untrusted applications compute on it. However, they require replicating an application inside each container - compromising functionality, programmability, and performance. We propose DATS - a system to run web applications that retains application usability and efficiency through a mix of hardware capability enhanced containers and the introduction of two new primitives modeled after the popular model-view-controller (MVC) pattern. (1) DATS introduces a templating language to create views that compose data across data containers. (2) DATS uses authenticated storage and confinement to enable an untrusted storage service, such as memcached and deduplication, to operate on plain-text data across containers. These two primitives act as robust declassifiers that allow DATS to enforce non-interference across containers, taking large applications out of the trusted computing base (TCB). We showcase eight different web applications including Gitlab and a Slack-like chat, significantly improve the worst-case overheads due to application replication, and demonstrate usable performance for common-case usage.
Digital twins are becoming more relevant for business and academic users due to advances in IoT, AI, and Big Data. Due to global urbanization, challenges such as pollution, public safety, and traffic congestion have arisen, and new technologies make cities smarter to keep up with growth. In the Internet of Things (IoT) age, many sensing devices acquire and/or produce a broad range of sensory data over long periods of time for a variety of businesses and applications. The use case determines a device's data stream volume and speed. The efficacy of the analytics process used to analyze these streams of data to learn, predict, and act determines IoT's worth as a business paradigm changer and quality-of-life technology. This study introduces Deep Learning (DL), a family of advanced machine learning techniques, to enhance IoT analytics and teaching, presenting new results, challenges, and research opportunities. The study may help academics and newcomers understand how to apply DL to smart cities, analyzing and summarizing major IoT DL research initiatives and examining smart IoT devices with DL embedded into their AI. Finally, the study identifies open issues and suggests further research. Each chapter concludes with experimental findings and a review of the latest literature.
A significant growth in monetary losses accruing from computer and network attacks on RuNet objects demonstrates that current security strategies are not sufficiently effective. Analysis of information security incidents shows that, to accomplish their goals, criminals apply diverse and multidirectional cyber attacks. When the pandemic struck and education went online around the world, universities had to make pressing decisions that balanced cyber security against other factors, including health and safety, convenience, and cost. Some universities reflexively promoted virtual private networks (VPNs) for activities of every sort. However, due to the licences and resources needed for the large number of clients and the high-throughput applications they rely on, such an approach would not have been feasible at IU. Perhaps even worse, it would have increased the possibility that the VPN would be unavailable during a critical incident or other situation in which secure communications must be ensured.
Insects and illnesses that affect plants can have a major negative effect on both quality and yield. Digital image processing may be applied to diagnose plant illnesses and detect plant pests, and recent developments in the field have shown that conventional methods have been eclipsed by deep learning by a wide margin. Researchers are therefore concentrating on how deep learning may be applied to identifying plant diseases and pests. In this paper, the difficulties that arise when diagnosing plant pathogens and pests are outlined, and the various diagnostic approaches currently in use are evaluated and contrasted. The article summarizes three perspectives, each based on a different network design, from recent research on deep learning applied to the detection of plant diseases and pests. We developed a convolutional neural network (CNN)-based framework for identifying pest-borne diseases in tomato leaves using the PlantVillage dataset and the MobileNetV2 architecture. We compared the performance of our proposed MobileNetV2 model with other existing methods and demonstrated its effectiveness in pest detection. Our MobileNetV2 model achieved an accuracy of 93%, outperforming models such as GoogLeNet and VGG16, which were fully trained on the pest dataset, in terms of speed.
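The transfer-learning setup described above can be sketched with torchvision's MobileNetV2 as follows; the ten-class head is an assumption about the tomato-leaf subset of PlantVillage, and this is an outline rather than the trained model reported in the paper.

```python
# Sketch: MobileNetV2 with a frozen backbone and a replaced classification head.
# The class count (10 tomato-leaf categories) is an assumption.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 10
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)

for p in model.features.parameters():             # freeze the pretrained backbone
    p.requires_grad = False
model.classifier[1] = nn.Linear(model.last_channel, num_classes)

x = torch.randn(2, 3, 224, 224)                   # dummy leaf-image batch
print(model(x).shape)                             # torch.Size([2, 10])
```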
The Internet of Medical Things (IoMT) is changing the healthcare sector in various ways by applying key aspects of IoT to monitor and diagnose patients remotely. Existing literature on IoMT applications has identified high security vulnerabilities, few real-world implementations, poor scalability, and high latency, but offers no solutions to these challenges. This work presents a robust IoMT architecture that is real-time, secure, and scalable, and that enables remote health monitoring. By leveraging edge computing, AI, and blockchain-based security, the framework improves data privacy, reduces latency, and increases energy efficiency. In contrast to earlier studies that address specific conditions, the current work generalizes IoMT applications to a variety of ailments, enabling personalized healthcare solutions through artificial intelligence (AI)-driven analytics. In addition, the proposed system is designed to be interoperable so that it supports seamless integration across different IoT healthcare devices. Using predictive analytics, the system facilitates early disease detection and preventative healthcare action, fostering better patient outcomes and fewer hospital visits. This study also presents the design of an energy-efficient IoMT network to prolong the lifetime and viability of IoMT devices. In conclusion, this research advances the future of remote healthcare by addressing the scalability, privacy, and real-time decision-making challenges, thereby developing an IoMT system that is robust, future-proof, and adaptable to smart healthcare applications.
Microservices are the dominant architecture used to build internet-scale applications today. Being internet-facing, their most critical attack surfaces are the OWASP top 10 Web Application Security Risks. Many of the top 10 OWASP attack types—injection, cross site scripting, broken access control and security misconfigurations—have persisted for many years despite major investments in code analysis and secure development patterns. Because microservices decompose monolithic applications into components using clean APIs, they lend themselves to practical application of a classic security/resilience principle, N-versioning. The paper introduces RDDR, a principled approach for applying N-versioning to microservices to improve resilience to data leaks. RDDR applies N-versioning to vulnerable microservices, requiring minimal code changes and with low performance impact beyond the cost of replicating microservices. Our evaluation demonstrates RDDR mitigating vulnerabilities of the top 5 of the top 10 OWASP types by applying diversity and redundancy to individual microservices.
Critical machine learning (CML) is a critical extension of the standard ML (SML) procedure. Conventional ML is being used in radiology departments, where complex neuroimages are discriminated using ML technology. Radiologists and researchers have found that decisions made solely by ML algorithms are not accurate enough to drive treatment, so an intelligent decision is further required from the radiologists after evaluating the ML outcomes. The current research is based on critical ML, in which radiologists' critical thinking ability, IQ (intelligence quotient), and experience in radiology are examined to understand how these factors affect the accuracy of neuroimaging discrimination. A primary quantitative survey was carried out, and the data were analysed in IBM SPSS. The results showed that work experience has a positive impact on neuroimaging discrimination accuracy, and IQ and trained ML models also improve accuracy. Thus, radiologists with more experience in the field are able to improve the discriminative and diagnostic capability of CML.
AI has the potential to revolutionize healthcare by enabling more accurate diagnoses, more effective treatment regimens, and improved patient outcomes. While AI is promising, many challenges remain, including limited real-world case studies, regulatory pressure, bias in data, and integration into existing healthcare delivery systems. In this research, we aim to overcome these challenges by designing a comprehensive framework to enhance the adoption of AI in healthcare. Cohorts combined with longitudinal case studies advance the study, while ethical perspectives, data quality improvement, and bias mitigation strengthen the validity and generalizability of the AI technologies used, improving the quality of the study. The research attempts to build interoperable AI systems that can connect with current healthcare infrastructure by ideating solutions for scalable AI integration. Additionally, it discusses the challenges posed by hackers and criminal organisations, along with measures to promote patient data privacy and regulatory compliance, and the long-term effects of artificial intelligence on patient healthcare. Such understanding may facilitate an adequate implementation of AI by healthcare professionals and organizations that improves patient safety, decreases costs, and improves outcomes across patient populations in different clinical environments.
In recent times, the Internet of Things (IoT) has become an alternative model that is quickly gaining ground in current wireless telecommunication. The wireless sensor network (WSN) is a significant part of IoT, primarily responsible for acquiring and reporting information. As the coverage area and lifetime of a WSN directly define the performance of IoT, designing a technique for conserving node energy and decreasing the node death rate becomes a crucial problem. Sensor network clustering is an efficient technique to overcome this problem: it splits nodes into clusters and chooses one node in each cluster to be the cluster head (CH), which handles data communication and transmission within its cluster. This study develops a hybrid evolutionary algorithm-based energy-efficient cluster head selection (HEA-EECHS) technique in the IoT environment. The presented HEA-EECHS technique concentrates on the effective choice of CHs. To do so, it derives an improved artificial jellyfish search algorithm (IAJSA) by incorporating an opposition-based learning (OBL) approach into the traditional AJSA. Along with that, the HEA-EECHS technique designs a fitness function incorporating four parameters, namely energy, cluster node density, average neighboring distance, and average distance to the BS. The experimental assessment of the HEA-EECHS technique was investigated under several IoT node counts; with 500 nodes, the HEA-EECHS method attained a decreased CMO of 0.0015. The simulation output highlighted the improved efficacy of the HEA-EECHS technique.
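To illustrate how the four fitness parameters might be combined, here is a hedged sketch of a weighted fitness for cluster head selection; the weights, normalization, and random node values are placeholders, not the paper's fitness function or its IAJSA search.

```python
# Illustrative CH-selection fitness over the four factors named above; weights
# and node values are placeholders, and no jellyfish search is performed here.
import numpy as np

def ch_fitness(energy, density, avg_neigh_dist, avg_bs_dist, w=(0.4, 0.2, 0.2, 0.2)):
    """Higher is better: favour energy and density, penalise long distances."""
    e = energy / energy.max()
    d = density / density.max()
    nd = avg_neigh_dist / avg_neigh_dist.max()
    bd = avg_bs_dist / avg_bs_dist.max()
    return w[0] * e + w[1] * d - w[2] * nd - w[3] * bd

rng = np.random.default_rng(1)
n = 20                                              # candidate IoT nodes
fitness = ch_fitness(rng.uniform(0.2, 1.0, n),      # residual energy
                     rng.integers(2, 15, n).astype(float),   # cluster node density
                     rng.uniform(5, 50, n),         # average neighbouring distance
                     rng.uniform(20, 200, n))       # average distance to the BS
print("selected cluster head: node", int(np.argmax(fitness)))
```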
Dynamically tracking the flow of data within a microprocessor creates many new opportunities to detect and track malicious or erroneous behavior, but these schemes all rely on the ability to associate tags with all of virtual or physical memory. If one wishes to store large 32-bit tags, multiple tags per data element, or tags at the granularity of bytes rather than words, then directly storing one tag on chip to cover one byte or word (in a cache or otherwise) can be an expensive proposition. We show that dataflow tags in fact naturally exhibit a very high degree of spatial-value locality, an observation we can exploit by storing metadata on ranges of addresses (which cover a non-aligned contiguous span of memory) rather than on individual elements. In fact, a small 128 entry on-chip range cache (with area equivalent to 4KB of SRAM) hits more than 98% of the time on average. The key to this approach is our proposed method by which ranges of tags are kept in cache in an optimally RLE-compressed form, queried at high speed, swapped in and out with secondary memory storage, and (most important for dataflow tracking) rapidly stitched together into the largest possible ranges as new tags are written on every store, all the while correctly handling the cases of unaligned and overlapping ranges. We examine the effectiveness of this approach by simulating its use in definedness tracking (covering both the stack and the heap), in tracking network-derived dataflow through a multi-language web application, and through a synthesizable prototype implementation.
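A simplified software model of the range-based tag store (not the hardware range cache or its RLE encoding) is shown below: tags are kept per address range, and a store that writes a tag stitches itself onto adjacent or overlapping ranges carrying the same tag while splitting ranges with a different tag.

```python
# Simplified model of range-based tag storage with stitching on every write.
class RangeTagStore:
    def __init__(self):
        self.ranges = []                              # (start, end_exclusive, tag)

    def write(self, start, end, tag):
        keep, lo, hi = [], start, end
        for s, e, t in self.ranges:
            if t == tag and s <= end and start <= e:  # same tag, adjacent/overlapping
                lo, hi = min(lo, s), max(hi, e)       # stitch into one larger range
            elif s < end and start < e:               # different tag, overlapping
                if s < start: keep.append((s, start, t))   # keep non-overwritten parts
                if end < e:   keep.append((end, e, t))
            else:
                keep.append((s, e, t))
        self.ranges = sorted(keep + [(lo, hi, tag)])

    def read(self, addr):
        for s, e, t in self.ranges:
            if s <= addr < e:
                return t
        return None

store = RangeTagStore()
store.write(0x1000, 0x1004, "tainted")
store.write(0x1004, 0x1008, "tainted")   # merges with the previous range
print(store.ranges)                      # [(4096, 4104, 'tainted')]
```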
Hardware enclaves that target complex CPU designs compromise both security and performance. Programs have little control over the micro-architecture, which leads to side-channel leaks, and they then have to be transformed to have worst-case control- and data-flow behaviors, incurring considerable slowdown. We propose to address these security and performance problems by bringing enclaves into the realm of accelerator-rich architectures. The key idea is to construct software-defined enclaves (SDEs) where the protections and slowdown are tied to an application-defined threat model and tuned by a compiler for the accelerator's specific domain. This vertically integrated approach requires new hardware data-structures to partition, clear, and shape the utilization of hardware resources; and a compiler that instantiates and schedules these data-structures to create multi-tenant enclaves on accelerators. We demonstrate our ideas with a comprehensive prototype -- Sesame -- that includes modifications to the compiler, ISA, and microarchitecture of a decoupled access-execute (DAE) accelerator framework for deep learning models. Our security evaluation shows that classifiers that could distinguish different layers in VGG, ResNet, and AlexNet fail to do so when run using Sesame. Our synthesizable hardware prototype (on a Xilinx Pynq board) demonstrates how the compiler and micro-architecture enable threat-model-specific trade-offs: code size increases ranging from 3-7% and run-time performance overheads for specific defenses ranging from 3.96% to 34.87% (across confidential inputs and models and single vs. multi-tenant systems).
For many mission-critical tasks, tight guarantees on the flow of information are desirable, for example, when handling important cryptographic keys or sensitive financial data. We present a novel architecture capable of tracking all information flow within the machine, including all explicit data transfers and all implicit flows (those subtly devious flows caused by not performing conditional operations). While the problem is impossible to solve in the general case, we have created a machine that avoids the general-purpose programmability that leads to this impossibility result, yet is still programmable enough to handle a variety of critical operations such as public-key encryption and authentication. Through the application of our novel gate-level information flow tracking method, we show how all flows of information can be precisely tracked. From this foundation, we then describe how a class of architectures can be constructed, from the gates up, to completely capture all information flows and we measure the impact of doing so on the hardware implementation, the ISA, and the programmer.
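As a toy illustration of gate-level tracking (a Python model, not the paper's hardware), the shadow logic for a two-input AND gate below marks the output tainted only when a tainted input can actually affect the output value, which is what makes the tracking precise.

```python
# Toy GLIFT-style shadow logic for a 2-input AND gate: a_t/b_t are taint bits.
def and_glift(a, a_t, b, b_t):
    out = a & b
    # A tainted input matters only if the other input does not already force 0.
    out_t = (a_t & b_t) | (a_t & b) | (b_t & a)
    return out, out_t

# b is tainted, but a = 0 forces the output to 0, so no information flows:
print(and_glift(a=0, a_t=0, b=1, b_t=1))   # (0, 0)
# a = 1 lets the tainted b determine the output, so the taint propagates:
print(and_glift(a=1, a_t=0, b=1, b_t=1))   # (1, 1)
```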
Clustering is a ubiquitous task in data science. Compared to the commonly used $k$-means clustering, $k$-medoids clustering requires the cluster centers to be actual data points and support arbitrary distance metrics, which permits greater interpretability and the clustering of structured objects. Current state-of-the-art $k$-medoids clustering algorithms, such as Partitioning Around Medoids (PAM), are iterative and are quadratic in the dataset size $n$ for each iteration, being prohibitively expensive for large datasets. We propose BanditPAM, a randomized algorithm inspired by techniques from multi-armed bandits, that reduces the complexity of each PAM iteration from $O(n^2)$ to $O(n \log n)$ and returns the same results with high probability, under assumptions on the data that often hold in practice. As such, BanditPAM matches state-of-the-art clustering loss while reaching solutions much faster. We empirically validate our results on several large real-world datasets, including a coding exercise submissions dataset, the 10x Genomics 68k PBMC single-cell RNA sequencing dataset, and the MNIST handwritten digits dataset. In these experiments, we observe that BanditPAM returns the same results as state-of-the-art PAM-like algorithms up to 4x faster while performing up to 200x fewer distance computations. The improvements demonstrated by BanditPAM enable $k$-medoids clustering on a wide range of applications, including identifying cell types in large-scale single-cell data and providing scalable feedback for students learning computer science online. We also release highly optimized Python and C++ implementations of our algorithm.
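The core sampling idea can be conveyed with a short sketch (an illustration of the bandit-style estimation, not the released BanditPAM implementation or its full BUILD/SWAP procedure): a candidate medoid's loss is estimated from a random sample of reference points instead of all n points.

```python
# Illustration of sampling-based medoid evaluation (not the full BanditPAM).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 2))                     # synthetic 2-D dataset

def estimated_loss(candidate_idx, sample_size=200):
    """Estimate mean distance to a candidate medoid from a random sample."""
    ref = X[rng.choice(len(X), size=sample_size, replace=False)]
    return np.linalg.norm(ref - X[candidate_idx], axis=1).mean()

# Pick the best single medoid among a few candidates using sampled estimates.
candidates = rng.choice(len(X), size=50, replace=False)
best = candidates[int(np.argmin([estimated_loss(c) for c in candidates]))]
print("estimated 1-medoid:", X[best])
```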
Users today are unable to use the rich collection of third-party untrusted applications without risking significant privacy leaks. In this paper, we argue that current and proposed application- and data-centric security policies do not map well to users' expectations of privacy. In the eyes of a user, applications and peripheral devices exist merely to provide functionality and should have no place in controlling privacy. Moreover, most users cannot handle intricate security policies dealing with system concepts such as labeling of data, application permissions and virtual machines. Not only are current policies impenetrable to most users, they also lead to security problems such as privilege-escalation attacks and implicit information leaks. Our key insight is that users naturally associate data with real-world events, and want to control access at the level of human contacts. We introduce Bubbles, a context-centric security system that explicitly captures users' privacy desires by allowing human contact lists to control access to data clustered by real-world events. Bubbles infers information-flow rules from these simple context-centric access-control rules to enable secure use of untrusted applications on users' data. We also introduce a new programming model for untrusted applications that allows them to be functional while still upholding the users' privacy policies. We evaluate the model's usability by porting an existing medical application and writing a calendar app from scratch. Finally, we show the design of our system prototype running on Android that uses bubbles to automatically infer all dangerous permissions without any user intervention. Bubbles prevents Android-style permission escalation attacks without requiring users to specify complex information flow rules.
Concerns about cybersecurity have increased along with the Internet of Things' (IoT) exponential growth in recent years. Artificial intelligence (AI), which is used to create sophisticated algorithms to safeguard networks and systems, including Internet of Things technologies, is at the forefront of cybersecurity. However, hackers have discovered methods to take advantage of this AI, and they have begun leveraging it against their adversaries to launch cybersecurity attacks. This review study investigates the connections between IoT, AI, and attacks utilizing and against AI, synthesizes data from several previous studies and research publications, and explores these areas. It is intended to be a comprehensive presentation and summary of the relevant literature.
Real-time strategy (RTS) games are among the most challenging tasks for AI because of their large action spaces, long-term strategic planning, and multi-agent cooperation requirements. Conventional deep reinforcement learning (DRL) methods are effective but may be limited in scalability, computational cost, generalization capability, and interpretability. We present a deep reinforcement learning framework that overcomes these hurdles by boosting multi-agent coordination and sample efficiency and by employing explainable AI (XAI) techniques to improve model interpretability in rigorous decision-making. In contrast to existing methods, which rely on large amounts of computation and are severely limited in long-term strategic adaptation, our design features hierarchical learning, curriculum-based reward shaping across staged proxy games, and Bayesian uncertainty estimation that focuses exploration on action regions consistent with evolving game dynamics, thereby facilitating rapid adaptation to new situations in RTS games. We also propose dynamic action pruning methods that remove redundant parts of the action space and enhance real-time decision-making. We validate the proposed model over diverse RTS environments, where it not only generalizes better but also trains faster and exhibits richer strategic depth than existing state-of-the-art DRL models. This study closes the gap between theoretical advancements and practical RTS applications, introducing an efficient, interpretable, and scalable solution for AI-driven RTS game strategies.
The Internet of Things (IoT), which is utilized for data collection and long-distance wireless communications, has assimilated into contemporary life. The growth in IoT applications creates problems in transferring, preserving, and analyzing large volumes of data. Precision agriculture (PA) is now practical because of enhancements in wireless communication and modern computing technology. In this research, we analyze agricultural application scenarios and empirical experiments to find acceptable, realistic, and practicable wireless communication systems for PA. For PA uses, three different Wireless Sensor Network (WSN) architectures, based on narrowband IoT (NB-IoT), long range (LoRa), and ZigBee wireless communication systems, are implemented and enabled. The viability of the three WSN systems is confirmed by related tests. The energy usage of the three wireless communication systems is compared by evaluating the average communication period. Field trials and in-depth assessment reveal that LoRa and NB-IoT are viable wireless communication systems for open-field agriculture situations, whereas ZigBee is a superior option for monitoring facility agriculture.
Sarcasm is a form of language that conveys the polar opposite of what is being said, usually something extremely disagreeable intended to mock or offend someone, and it is commonly employed on social networking sites daily. Since sarcasm can invert the meaning of a statement, the opinion analysis process is error-prone. Concerns regarding the integrity of analytics have grown as the use of automated social media analytics tools has expanded. Based on earlier studies, sarcastic statements alone have considerably decreased the performance of automated sentiment analysis. This article develops a Hybrid Particle Swarm Optimization with Deep Learning Driven Sarcasm Detection (HPSO-DLSD) technique. The presented HPSO-DLSD technique mainly concentrates on the recognition of sarcasm on social media. In the presented HPSO-DLSD technique, data preprocessing is carried out in the initial stage. To detect and classify sarcasm, a sparse stacked autoencoder (SAE) model is exploited, and the detection performance is boosted via the HPSO algorithm. The experimental result analysis of the HPSO-DLSD technique is tested on a benchmark dataset, and the outcomes emphasize the improvements of the HPSO-DLSD method over other current approaches.
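As an illustration of the deep-learning component described above, here is a minimal Keras sketch of a sparse stacked autoencoder feeding a binary sarcasm classifier; the feature dimension, layer sizes, and sparsity penalty are placeholders, and the HPSO hyperparameter search itself is not shown.

```python
from tensorflow.keras import layers, regularizers, Model

# Hypothetical bag-of-words / TF-IDF style input features for posts.
n_features = 2000

inp = layers.Input(shape=(n_features,))
# Two stacked encoding layers with an L1 activity penalty for sparsity.
h1 = layers.Dense(512, activation="relu",
                  activity_regularizer=regularizers.l1(1e-5))(inp)
h2 = layers.Dense(128, activation="relu",
                  activity_regularizer=regularizers.l1(1e-5))(h1)
# Mirror-image decoder reconstructs the input.
d1 = layers.Dense(512, activation="relu")(h2)
out = layers.Dense(n_features, activation="sigmoid")(d1)

autoencoder = Model(inp, out)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(X_train, X_train, epochs=20, batch_size=64)

# Binary sarcasm classifier on top of the learned 128-d encoding.
encoder = Model(inp, h2)
clf_in = layers.Input(shape=(128,))
clf_out = layers.Dense(1, activation="sigmoid")(clf_in)
classifier = Model(clf_in, clf_out)
classifier.compile(optimizer="adam", loss="binary_crossentropy",
                   metrics=["accuracy"])
# In the paper, PSO would tune hyperparameters such as layer sizes,
# the sparsity weight, and the learning rate; that search loop is omitted.
```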
Objectives: The increasing frequency of cyber threats necessitates the advancement of Intrusion Prevention Systems (IPS). However, existing IPS models suffer from high false positive rates, inefficiencies in real-time detection, and suboptimal accuracy levels. Methods: This study presents a CNN-LSTM hybrid model optimized for real-time cyber intrusion detection. The CICIDS2018 dataset was utilized for training, incorporating feature selection, hyper-parameter tuning, and dropout-based regularization to improve efficiency and prevent over-fitting. Findings: The proposed system achieved an F1-score of 99.5%, significantly outperforming conventional methods. Additionally, the false positive rate was reduced by 18%, enhancing system reliability in cybersecurity applications. Novelty: Unlike prior works, this study integrates optimized feature selection mechanisms with real-time sequence learning through CNN-LSTM, leading to higher detection accuracy, improved generalization, and reduced computational complexity. Keywords: Convolutional Neural Networks (CNNs), CICIDS2018, Deep Learning, Feature Selection, Long Short-Term Memory networks (LSTMs)
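A minimal Keras sketch of a CNN-LSTM intrusion detector of the kind described follows; the window length, feature count, and layer sizes are assumptions rather than the paper's actual configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

timesteps, n_features = 10, 78   # assumed window length and CICIDS-style feature count

model = models.Sequential([
    layers.Input(shape=(timesteps, n_features)),
    # Convolution extracts local patterns within each time window.
    layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Dropout(0.3),                     # dropout-based regularization
    # LSTM models the temporal ordering of flow records.
    layers.LSTM(64),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),   # benign vs. attack
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
model.summary()
```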
With the proliferation of digital technologies, cybercrime has become a pervasive threat to end users, governments, and businesses worldwide. Addressing this complex challenge requires innovative approaches that leverage technologies such as machine learning and data science. This abstract presents a machine learning-based computational system designed to combat cybercrime offenses effectively. The proposed system integrates machine learning algorithms with computational infrastructure that mines large volumes of information for the recurring patterns that characterize cybercrimes. By leveraging supervised, unsupervised, and reinforcement learning methods, the system can detect anomalies, classify malicious behavior, and predict potential cyber threats in real time. Cybercrime has become a significant concern now that most users rely on digital devices, exposing them to threats ranging from identity theft to financial fraud. Traditional methods of combating cybercrime are often reactive and fall short against rapidly evolving cyber threats. In response, there is a growing interest in developing proactive and intelligent systems to detect, prevent, and mitigate cybercrime offenses. This paper presents a machine learning-based computational system for controlling cybercrime offenses. By learning from the data and adapting to emerging threats, the system can effectively detect and respond to cyber threats in real environments.
Side-channel attacks monitor some aspect of a computer system's behavior to infer the values of secret data. Numerous side-channels have been exploited, including those that monitor caches, the branch predictor, and the memory address bus. This paper presents a method of defending against a broad class of side-channel attacks, which we refer to as digital side-channel attacks. The key idea is to obfuscate the program at the source code level to provide the illusion that many extraneous program paths are executed. This paper describes the technical issues involved in using this idea to provide confidentiality while minimizing execution overhead. We argue about the correctness and security of our compiler transformations and demonstrate that our transformations are safe in the context of a modern processor. Our empirical evaluation shows that our solution is 8.9× faster than prior work (GhostRider [20]) that specifically defends against memory trace-based side-channel attacks.
Building a firm involves strategic management and corporate development. These elements can help a corporate organisation develop its financial and economic stability. With such organisation, it is possible to plan the business strategy that will be used to put ideas into practice and accomplish organisational goals. This aspect also helps an organisation understand its accessibility, leads, revenue, sales, stability, and related indicators. Developing a corporate strategy, on the other hand, is what is meant by strategic management: with such planning, organisations carry out the strategic vision and examine the outcomes by putting the planning into practice. The machine learning process supports the strategic management of a company's financial function.
Morpheus II is a secure processor designed to prevent control flow attacks. Morpheus II strengthens the defenses of the Morpheus [1] processor, by deploying always-on encryption to obfuscate code and pointers along with runtime churn to thwart side-channel attacks. Focusing on Remote Code Execution attacks, we modified the RISC-V Rocket core to support always-encrypted code and code pointers with negligible performance impact and less than 2% area overhead. Morpheus II was deployed running a web server interface to a mock medical database on AWS F1 instances, where it was red-teamed for three months by over 500 security researchers. No vulnerabilities were discovered in Morpheus II. In addition, we evaluated Morpheus II against a range of CWE attack classes including a Blind ROP attack on the web server. We show that Morpheus II defenses increase Blind ROP probe time for gadgets from weeks to likely thousands of years.
An enhanced blind equalization technique for QAM/M-PSK signals, which are frequently employed in digital communication systems, is presented in this study, based on a logarithmic cost function. In contrast to the two commonly used blind equalization techniques for QAM/M-PSK systems, the CMA (Constant Modulus Algorithm) and MCMA (Modified Constant Modulus Algorithm), the suggested approach can reduce steady-state error and converge more quickly. When used in traditional equalization systems, CMA and MCMA show a significant steady-state mean square error and a very poor convergence speed. The proposed scheme uses an improved resilient backpropagation model, with or without weight backtracking, which simplifies the weight adaptation technique of the equalizer. A weight adaptation based on an improved logarithmic cost function is incorporated, which enhances the suggested algorithm's effectiveness. The simulation findings show that the suggested approach outperforms the CMA and MCMA algorithms in terms of convergence rate and steady-state error.
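For context, the baseline CMA tap update that the proposed scheme improves upon can be sketched in NumPy as follows; the signal, channel, and step size are illustrative, and the logarithmic-cost/resilient-backpropagation adaptation itself is not reproduced here.

```python
import numpy as np

def cma_equalize(x, n_taps=11, mu=1e-3, R2=1.0):
    """Baseline CMA: w <- w - mu * e * conj(x_window),
    with error e = y * (|y|^2 - R2); R2 is the constant-modulus radius."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                 # center-spike initialization
    y = np.zeros(len(x), dtype=complex)
    for n in range(n_taps, len(x)):
        xw = x[n - n_taps:n][::-1]       # most recent samples first
        y[n] = np.dot(w, xw)
        e = y[n] * (abs(y[n]) ** 2 - R2) # CMA error term
        w = w - mu * e * np.conj(xw)
    return y, w

# Illustrative QPSK symbols through a simple 2-tap channel with noise.
rng = np.random.default_rng(1)
sym = (rng.choice([1, -1], 5000) + 1j * rng.choice([1, -1], 5000)) / np.sqrt(2)
rx = np.convolve(sym, [1.0, 0.4 + 0.3j], mode="same") + 0.01 * rng.normal(size=5000)
y, w = cma_equalize(rx, R2=1.0)
```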
Quantum cryptography has emerged as a revolutionary technology for ensuring secure communication in the era of quantum computing. While existing research primarily focuses on theoretical frameworks and small-scale experimental setups, significant challenges remain in practical implementation, scalability, and security vulnerabilities. This study aims to bridge the gap between theory and real-world deployment by developing robust quantum cryptographic protocols that address key challenges such as noise management, side-channel attacks, and Trojan horse attacks. Additionally, we propose an optimized quantum key distribution (QKD) mechanism that ensures secure communication over long distances under realistic conditions. Our research integrates post-quantum cryptography with quantum cryptographic techniques to provide a hybrid security model that is resilient against both classical and quantum computing threats. By leveraging commercially available quantum hardware and advanced randomness extraction methods, this study contributes to the development of scalable, secure, and efficient quantum communication networks. The findings of this research will play a crucial role in advancing secure digital communication systems and fortifying data security in the post-quantum era.
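As background for the QKD mechanism the study builds on, here is a toy BB84 key-sifting simulation; it deliberately ignores channel noise, eavesdropping checks, and privacy amplification, which are precisely the practical issues the research targets.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Alice picks random bits and random bases (0 = rectilinear, 1 = diagonal).
alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)

# Bob measures each photon in a random basis; when the bases match he
# recovers Alice's bit, otherwise his outcome is random.
bob_bases = rng.integers(0, 2, n)
bob_bits = np.where(bob_bases == alice_bases, alice_bits, rng.integers(0, 2, n))

# Sifting: keep only positions where the bases agree (~50% on average).
keep = alice_bases == bob_bases
sifted_key = alice_bits[keep]
qber = np.mean(sifted_key != bob_bits[keep])   # 0 here; nonzero with noise or Eve
print(len(sifted_key), qber)
```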
In recent work published at ACM CCS 2013 [5], we introduced Phantom, a new secure processor that obfuscates its memory access trace. To an adversary who can observe the processor's output pins, all memory access traces are computationally indistinguishable (a property known as obliviousness). We achieve obliviousness through a cryptographic construct known as Oblivious RAM or ORAM. Existing ORAM algorithms introduce a fundamental overhead by having to access significantly more data per memory access.
This study examines how blockchain filters might modify network architecture to boost internet speed. This novel strategy may address the growing need for faster, more reliable internet connections in a future where digital connectedness is crucial. Our work develops a blockchain-based filtering mechanism, evaluates its performance, and uses mathematical modelling, simulations, and statistical analysis. Blockchain filters reduce latency, increase throughput, and reduce packet loss. This has major ramifications for network providers and customers: network providers can improve resource allocation, customer satisfaction, and revenue streams, while customers may enjoy quicker connections, lower latency, and better security online. The report also offers recommendations on scalability, security, and real-world deployment to maximize the benefits of this disruptive technology. This work points toward a brighter, more efficient future for global internet access as the digital world evolves.
Because of its on-the-go nature, edge AI has gained popularity, allowing for real-time analytics by deploying artificial intelligence models onto edge devices. Despite the promise of Edge AI evidenced by existing research, there are still significant barriers to widespread adoption, with issues such as scalability, energy efficiency, security, and reduced model explainability representing common challenges. Hence, this paper addresses Edge AI adoption in a number of ways, with a real deployment use case, modular adaptability, and dynamic AI model specialization. Our paradigm achieves low latency, better security, and energy efficiency using lightweight AI models, federated learning, explainable AI (XAI), and smart edge-cloud orchestration. This framework could enable generic AI beyond specific applications that depend on multi-modal data processing, which contributes to the generalization of applications across industries such as healthcare, autonomous systems, smart cities, and cybersecurity. Moreover, this work will help deploy sustainable AI by employing green computing techniques to detect anomalies in near real time in various critical domains, helping to ease the challenges of the modern world.
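As one concrete piece of the federated-learning component mentioned above, the following sketch shows standard FedAvg-style aggregation of per-client model weights; weighting by local sample count is the usual convention and is assumed rather than taken from the paper.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of per-client model parameters.
    client_weights: list (one entry per client) of lists of ndarrays.
    client_sizes:   number of local training samples per client."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    global_weights = []
    for layer in range(n_layers):
        agg = sum(w[layer] * (n / total)
                  for w, n in zip(client_weights, client_sizes))
        global_weights.append(agg)
    return global_weights

# Two hypothetical edge clients, each with a single weight matrix.
c1 = [np.ones((3, 3))]
c2 = [np.zeros((3, 3))]
print(fedavg([c1, c2], client_sizes=[100, 300])[0])  # every entry -> 0.25
```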
We introduce PHANTOM [1] a new secure processor that obfuscates its memory access trace. To an adversary who can observe the processor's output pins, all memory access traces are computationally indistinguishable (a property known as obliviousness). We achieve obliviousness through a cryptographic construct known as Oblivious RAM or ORAM. We first improve an existing ORAM algorithm and construct an empirical model for its trusted storage requirement. We then present PHANTOM, an oblivious processor whose novel memory controller aggressively exploits DRAM bank parallelism to reduce ORAM access latency and scales well to a large number of memory channels. Finally, we build a complete hardware implementation of PHANTOM on a commercially available FPGA-based server, and through detailed experiments show that PHANTOM is efficient in both area and performance. Accessing 4KB of data from a 1GB ORAM takes 26.2us (13.5us for the data to be available), a 32x slowdown over accessing 4KB from regular memory, while SQLite queries on a population database see 1.2-6x slowdown. PHANTOM is the first demonstration of a practical, oblivious processor and can provide strong confidentiality guarantees when offloading computation to the cloud.
The integration of advanced artificial intelligence (AI) techniques into horticulture has opened new avenues for optimizing crop management and enhancing productivity. This study explores the application of K-means clustering and Generative Adversarial Networks (GANs) in horticultural practices, focusing on interactive hyperspectral data visualization. Through the evaluation of K-means clustering for plant health assessment, precision values ranging from 0.82 to 0.87 and recall values ranging from 0.77 to 0.84 were observed across 10 experimental trials, affirming the algorithm's efficacy in accurately classifying plant health status. Additionally, GANs were employed to generate synthetic hyperspectral data, yielding structural similarity index (SSI) scores ranging from 0.90 to 0.94 and root mean square error (RMSE) values ranging from 0.03 to 0.07, underscoring the high fidelity of synthetic data compared to real-world observations. These results highlight the potential of AI-enhanced horticulture to revolutionize decision-making processes and resource management strategies. By leveraging AI techniques for spectral analysis and data synthesis, horticulturists can gain actionable insights into plant health, nutrient levels, and environmental conditions, leading to improved crop yields and sustainable agricultural practices.
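The clustering-based evaluation pattern described above can be sketched with scikit-learn as follows; the hyperspectral features and the cluster-to-label mapping are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
# Hypothetical per-sample hyperspectral band summaries: healthy vs. stressed plants.
healthy = rng.normal(loc=0.8, scale=0.05, size=(200, 5))
stressed = rng.normal(loc=0.5, scale=0.05, size=(200, 5))
X = np.vstack([healthy, stressed])
y_true = np.array([1] * 200 + [0] * 200)       # 1 = healthy

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Cluster ids are arbitrary, so map each cluster to its majority true label.
cluster_to_label = {c: int(round(y_true[labels == c].mean())) for c in (0, 1)}
y_pred = np.array([cluster_to_label[c] for c in labels])

print(precision_score(y_true, y_pred), recall_score(y_true, y_pred))
```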
Blockchain technology has become a disruptive force that is quickly reshaping supply chain management by providing greater transparency, security, and efficiency. As promising as the Blockchain is, previous research fails to provide solutions for bringing Blockchain into real-world practice, for getting it up to mass scale, and for ensuring that it meets the regulators’ requirements and these factors limit a more widespread adoption of Blockchain. By offering a review of blockchain-based supply chain management initiatives, their pros and cons, and addressing under-researched topics related to optimized consensus mechanisms, interoperability solutions, and AI-driven solutions integration, this research fills the gaps to decrease inefficiencies in supply chains. As part of the latter, the study presents zero-knowledge proofs, decentralized identity verification and cross-chain protocols as potential solutions to address security concerns and enhance interoperability among multiple chains. Also, a deeper implementation roadmap is provided, enabling pragmatic applicability in real business operations across the global supply chain. By analyzing cases, the study emphasizes the practical contributions of blockchain in traceability, fraud prevention, inventory optimization, and automated contract execution. The results highlight blockchain as a scalable, secure, and legally compliant technology for solving modern supply chain problems, filling the gap between theory and practical adoption.
Hardware-based side channels are known to expose hard-to-detect security holes enabling attackers to get a foothold into the system to perform malicious activities. Despite this fact, security is rarely accounted for in hardware design flows. As a result, security holes are often only identified after significant damage has been inflicted. Recently, gate level information flow tracking (GLIFT) has been proposed to verify information flow security at the level of Boolean gates. GLIFT is able to detect all logical flows including hardware specific timing channels, which is useful for ensuring properties related to confidentiality and integrity and can even provide real-time guarantees on system behavior. GLIFT can be integrated into the standard hardware design, testing and verification process to eliminate unintended information flows in the target design. However, generating GLIFT logic is a difficult problem due to its inherent complexity and the potential losses in precision. This paper provides a formal basis for deriving GLIFT logic which includes a proof on the NP-completeness of generating precise GLIFT logic and a formal analysis of the complexity and precision of various GLIFT logic generation algorithms. Experimental results using IWLS benchmarks provide a practical understanding of the computational complexity.
High assurance systems such as those found in aircraft controls and the financial industry are often required to handle a mix of tasks where some are niceties (such as the control of media for entertainment, or supporting a remote monitoring interface) while others are absolutely critical (such as the control of safety mechanisms, or maintaining the secrecy of a root key). While special purpose languages, careful code reviews, and automated theorem proving can be used to help mitigate the risk of combining these operations onto a single machine, it is difficult to say if any of these techniques are truly complete because they all assume a simplified model of computation far different from an actual processor implementation both in functionality and timing. In this paper we propose a new method for creating architectures that both a) makes the complete information-flow properties of the machine fully explicit and available to the programmer and b) allows those properties to be verified all the way down to the gate-level implementation of the design.
Attacks known as Distributed Denial-of-Service (DDoS) are rising as a result of the recent, dramatic increase in demand for Internet access. When the amount and characteristics of network traffic, which may include harmful DDoS content, expand dramatically, traditional machine learning algorithms for classifying DDoS attacks frequently fail because they cannot automatically extract high-value characteristics. In order to effectively identify and classify DDoS attacks, a hybrid technique called DFNN-SAE-DCGAN that combines three deep learning-based models is suggested. The Deep Feed-Forward Neural Network (DFNN) and Stacked Autoencoder (SAE) offer an efficient method for extracting features that identifies the most pertinent feature sets without human assistance. To avoid the operational overhead and assumptions associated with processing massive feature sets with distorted and redundant characteristic values, the Deep Convolutional Generative Adversarial Network (DCGAN) component of the proposed model classifies the attacks into various DDoS attack types using the restricted and minimized characteristic sets generated by the DFNN-SAE as inputs. The experimental results show a very high and resilient accuracy rate and an F1-score of 98.5%, which is higher than the performance of many similar approaches. These results were acquired through thorough and extensive trials on various performance aspects of the CICDDoS2019 dataset. This demonstrates that the suggested methodology may be utilized to defend against the increasing number of DDoS attacks.
Local thermal hot-spots in microprocessors lead to worst-case provisioning of global cooling resources, especially in large-scale systems where cooling power can be 50-100% of IT power. Further, the efficiency of cooling solutions degrades non-linearly with supply temperature. Recent advances in active cooling techniques have shown on-chip thermoelectric coolers (TECs) to be very efficient at selectively eliminating small hot-spots. Applying current to a superlattice TEC-film that is deposited between silicon and the heat spreader results in a Peltier effect, which spreads the heat and lowers the temperature of the hot-spot significantly and improves chip reliability. In this paper, we propose that hot-spot mitigation using thermoelectric coolers can be used as a power management mechanism to allow global coolers to be provisioned for a better worst case temperature leading to substantial savings in cooling power.
Generative Flow Networks (GFlowNets) have been introduced as a method to sample a diverse set of candidates in an active learning context, with a training objective that makes them approximately sample in proportion to a given reward function. In this paper, we show a number of additional theoretical properties of GFlowNets. They can be used to estimate joint probability distributions and the corresponding marginal distributions where some variables are unspecified and, of particular interest, can represent distributions over composite objects like sets and graphs. GFlowNets amortize the work typically done by computationally expensive MCMC methods in a single but trained generative pass. They could also be used to estimate partition functions and free energies, conditional probabilities of supersets (supergraphs) given a subset (subgraph), as well as marginal distributions over all supersets (supergraphs) of a given set (graph). We introduce variations enabling the estimation of entropy and mutual information, sampling from a Pareto frontier, connections to reward-maximizing policies, and extensions to stochastic environments, continuous actions and modular energy functions.
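For reference, one standard way to write the flow-matching conditions that underlie GFlowNet training is shown below; the notation follows the GFlowNet literature in general rather than this paper specifically.

```latex
% Flow-matching over the DAG of states (edge set \mathcal{A}, reward R):
% interior states conserve flow, and the flow into each terminal object x
% equals its reward, so sampling forward in proportion to edge flows draws
% terminal objects with probability proportional to R(x).
\[
  \sum_{s' : (s' \to s) \in \mathcal{A}} F(s' \to s)
  \;=\;
  \sum_{s'' : (s \to s'') \in \mathcal{A}} F(s \to s'')
  \qquad \text{for every interior state } s,
\]
\[
  \sum_{s' : (s' \to x) \in \mathcal{A}} F(s' \to x) \;=\; R(x)
  \qquad \text{for every terminal object } x,
  \qquad \Longrightarrow \qquad P(x) \propto R(x).
\]
```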
Managing sensitive data and ensuring its security are critical components of modern organizational processes. However, conventional centralized systems have significant limitations in transparency and security. To address these issues, the emergence of blockchain technology (BT) and distribution systems (DSs) presents a promising approach. This chapter aims to explore the potential benefits of combining BT and DS to enhance data management (DM) and security. Initially, it provides an overview of the current state of DM and security, emphasizing their importance in modern organizations. The chapter then delves into the concepts of BT and DS, highlighting their unique features and advantages in DM and security. Moreover, it discusses the challenges and limitations of BT, including scalability and interoperability issues, and weighs its advantages and disadvantages in DM and security. We analyze the potential of BT and DSs in enhancing DM and security, identifying the opportunities and challenges that arise from their use. Finally, the chapter provides insights into future research directions and highlights the potential impact of these technologies on DM and security. This chapter serves as a valuable resource for researchers, practitioners, and decision-makers who wish to explore the possibilities of these technologies in enhancing DM and security. With this new approach, we can anticipate a future with greater transparency, security, and efficiency in managing sensitive data.
The evolution of AI in the medical field is a major challenge. Research organisations are committed to continuing the in-depth quest for intelligence because of specific long-term demands and challenges in medicine. Because of developments in areas like the Internet of Things, cloud computing, and 5G mobile networks, artificial intelligence (AI) technology is being used in healthcare. Additionally, improved public services are made possible by the extensive integration of IoT technology and artificial intelligence, which gradually enhances diagnostic and therapeutic capabilities. The authors combine the ideas behind specific algorithms to describe scenario-based applications such as remote diagnosis and medical logistics, pediatric intensive care units, cardiac intensive care units, emergency departments, venous thromboembolism, patient care, and imaging, using the Internet of Things (IoT), the cloud, big data analytics, and AI in healthcare.
Hardware-based malware detectors (HMDs) are a key emerging technology to build trustworthy computing platforms, especially mobile platforms. Quantifying the efficacy of HMDs against malicious adversaries is thus an important problem. The challenge lies in that real-world malware typically adapts to defenses, evades being run in experimental settings, and hides behind benign applications. Thus, realizing the potential of HMDs as a line of defense - that has a small and battery-efficient code base - requires a rigorous foundation for evaluating HMDs. To this end, we introduce EMMA - a platform to evaluate the efficacy of HMDs for mobile platforms. EMMA deconstructs malware into atomic, orthogonal actions and introduces a systematic way of pitting different HMDs against a diverse subset of malware hidden inside benign applications. EMMA drives both malware and benign programs with real user-inputs to yield an HMD's effective operating range - i.e., the malware actions a particular HMD is capable of detecting. We show that small atomic actions, such as stealing a Contact or SMS, have surprisingly large hardware footprints, and use this insight to design HMD algorithms that are less intrusive than prior work and yet perform 24.7% better. Finally, EMMA brings up a surprising new result - obfuscation techniques used by malware to evade static analyses makes them more detectable using HMDs.
Banks serve the basic necessities of everyone, next to hospitals and schools. People reach out to banks for various purposes, and one of the most common services offered by banks is loans. However, many people are not completely aware of banking procedures and the eligibility criteria for loans. This study aims to develop a Machine Learning (ML) model capable of predicting whether a person is eligible for a health loan or not by analyzing some basic values entered by the user. For this process, a dataset consisting of all necessary parameters for a loan application is collected from Kaggle. The collected dataset is then preprocessed by two methods, namely null-value elimination and encoding. Simultaneously, three ML models were developed using three different algorithms: Random Forest (RF), Naive Bayes (NB), and Linear Regression (LR). The preprocessed data is then used to train the models, after which a comparison of a few parameters is used to assess the models' effectiveness. The results of the analysis show that the RF algorithm is the best in terms of both accuracy and error: its accuracy is 91% and it also predicts loan eligibility with lower error values. The LR model has the lowest accuracy and the highest error, making it the least suitable algorithm for loan prediction.
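A minimal scikit-learn sketch of the three-model comparison described above follows; the file path, column names, and preprocessing are placeholders, and the linear-regression output is thresholded at 0.5 to yield a class label.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LinearRegression
from sklearn.metrics import accuracy_score, mean_absolute_error

# Placeholder path and column names for a Kaggle-style loan dataset.
df = pd.read_csv("loan_data.csv").dropna()          # null-value elimination
df = pd.get_dummies(df, drop_first=True)            # encoding of categoricals
X, y = df.drop(columns=["Loan_Status_Y"]), df["Loan_Status_Y"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "NB": GaussianNB(),
    "LR": LinearRegression(),                       # thresholded below
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    pred = m.predict(X_te)
    if name == "LR":
        pred = (pred >= 0.5).astype(int)            # regression output -> class
    print(name, accuracy_score(y_te, pred), mean_absolute_error(y_te, pred))
```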
Flight control, banking, medical, and other high assurance systems have a strict requirement on correct operation. Fundamental to this is the enforcement of non-interference where particular subsystems should not affect one another. In an effort to help guarantee this policy, recent work has emerged with tracking information flows at the hardware level. This article uses a specific method known as gate-level information flow tracking (GLIFT) to provide a methodology for testing information flows in two common bus protocols, I2C and USB. We show that the protocols do elicit unintended information flows and provide a solution based on time division multiple access (TDMA) that provably isolates devices on the bus from these flows. This paper also discusses the overheads in area and simulation time incurred by this TDMA based solution.
These days, cloud systems and the Internet of Things (IoT) are extensively used in a variety of medical services. Instead of relying on the limited storage and processing power found in mobile equipment, the vast amount of data generated by IoT equipment in the medical industry may be analysed on a cloud system. In this research, an Internet-based healthcare decision support platform for predicting chronic kidney disease (CKD) is presented as a means of providing effective healthcare. The proposed model goes through three processes to predict CKD: data collection, preparation, and categorization of medical information. To categorise the data examples into CKD and non-CKD, the logistic regression (LR) method is used. Additionally, the Adaptive Moment Estimation (Adam) and adaptive learning rate optimisation algorithms are used to fine-tune the LR's settings. The effectiveness of the proposed model is evaluated with the use of a reference CKD dataset. The test results showed that the proposed model achieves better performance on the dataset used.
Piping systems are designed to perform a definite function. Designing and constructing the piping systems of any plant or service is a time-consuming, complex, and expensive effort. The design of piping systems is governed by industrial and international codes and standards. Piping codes define the requirements for design, fabrication, use of materials, tests, and inspection of piping systems, while the standards focus on defining application design and construction rules and requirements for piping components. The basic design code used in this paper is the ASME B31.3 Process Piping code, which covers petroleum refineries, chemical plants, textile plants, paper plants, and semiconductor plants. The objective of this paper is to explain the basic concept of flexibility, including flexibility characteristics and the flexibility factor, as well as the stress intensification factor (SIF), with reference to this code. CAD packages such as CAEPIPE have been developed for the comprehensive analysis of complex systems. This software makes use of finite element methods to carry out stress analysis; however, it requires the pipe system to be modelled before the analysis can be carried out. Static analysis is carried out in order to find the sorted code stresses, code compliance stresses, element forces and moments in coordinates, and displacements at all nodes in the piping layout. The SIF results are compared against those obtained with CAEPIPE using some observations on the SIF equations. In CAEPIPE, if the ratio of maximum induced stress to maximum allowable stress is below 1, then the pipe system is safe; otherwise, redesigning is required.
Successful applications of predictive analysis can be found in finance, where modern soft computing theories are applied. In contrast to other applications, financial applications have unique characteristics. Forecasting is crucial, particularly in the financial industry, because it lowers expenses, which can increase revenues and help businesses win the competition. Due to inescapable changes and expansion in every aspect of life, almost every organization operates in an unpredictable environment. Forecasting becomes increasingly difficult as a result of these developments, which either directly or indirectly affect stock market values. The demand for trustworthy, economical, and efficient forecasting models is therefore great in order to reduce uncertainty as well as risk when investing in the stock market. Academics and researchers have developed a variety of time series models for more precise future prediction. Financial autoregressive time series models, such as the autoregressive moving average and the autoregressive integrated moving average, have produced precise predictive models. To predict the weekly and daily closing value of the BSE100 S&P Sensex index, discrete wavelet transform and wavelet denoising soft computing models are combined with autoregressive models in the present work.
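An illustrative wavelet-denoise-then-autoregressive-forecast pipeline, roughly in the spirit of the approach described, is sketched below; the wavelet family, threshold rule, ARIMA order, and the synthetic series are all assumptions rather than the paper's settings.

```python
import numpy as np
import pywt
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Synthetic stand-in for a weekly closing-price series.
t = np.arange(300)
series = 100 + 0.3 * t + 5 * np.sin(t / 10) + rng.normal(0, 2, size=300)

# Wavelet denoising: soft-threshold the detail coefficients (db4 assumed).
coeffs = pywt.wavedec(series, "db4", level=3)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate
thr = sigma * np.sqrt(2 * np.log(len(series)))          # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[: len(series)]

# Autoregressive forecast on the denoised series (order assumed).
model = ARIMA(denoised, order=(2, 1, 1)).fit()
print(model.forecast(steps=5))
```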
After the transportation and assignment problems, this work addresses the difficulties of the travelling salesman problem. The Shortest Cyclic Route Problem (SCRP) finds the shortest route that visits each city exactly once, given a preset set of cities and their pairwise distances. The arc lengths in such problems are typically seen as representing travel time or travel expenses rather than actual distance. The precise arc length cannot be predicted because cargo, climate, road conditions, and other factors can also affect the journey time or cost. For handling this unpredictability in the SCRP, fuzzy set theory provides a new tool. The shortest cyclic route problem with interval-valued neutrosophic fuzzy numbers as cost coefficients is solved using simplified matrix techniques in this study. The Reduced Matrix Method is used to solve a numerical problem, and its efficacy is demonstrated.
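As a simplified illustration of a reduced-matrix step, the sketch below performs the usual row/column reduction on a crisp cost matrix; in the neutrosophic setting, a score function would first convert each interval-valued neutrosophic cost to a crisp value, and that conversion is assumed rather than shown.

```python
import numpy as np

def reduce_matrix(C):
    """Row/column reduction used in reduced-matrix / branch-and-bound style
    TSP methods: subtract each row's minimum, then each column's minimum.
    Returns the reduced matrix and the total reduction, which is a lower
    bound on the optimal tour cost."""
    C = C.astype(float).copy()
    np.fill_diagonal(C, np.inf)              # no self-loops
    row_min = C.min(axis=1)
    C -= row_min[:, None]
    col_min = C.min(axis=0)
    C -= col_min[None, :]
    return C, row_min.sum() + col_min.sum()

# Crisp costs standing in for score-function values of interval-valued
# neutrosophic costs (illustrative numbers only).
cost = np.array([[0, 10, 15, 20],
                 [10, 0, 35, 25],
                 [15, 35, 0, 30],
                 [20, 25, 30, 0]])
reduced, bound = reduce_matrix(cost)
print(bound)
print(reduced)
```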
Privacy and integrity are important security concerns. These concerns are addressed by controlling information flow, i.e., restricting how information can flow through a system. Most proposed systems that restrict information flow make the implicit assumption that the hardware used by the system is fully ``correct'' and that the hardware's instruction set accurately describes its behavior in all circumstances. The truth is more complicated: modern hardware designs defy complete verification; many aspects of the timing and ordering of events are left totally unspecified; and implementation bugs present themselves with surprising frequency. In this work we describe Sapper, a novel hardware description language for designing security-critical hardware components. Sapper seeks to address these problems by using static analysis at compile-time to automatically insert dynamic checks in the resulting hardware that provably enforce a given information flow policy at execution time. We present Sapper's design and formal semantics along with a proof sketch of its security. In addition, we have implemented a compiler for Sapper and used it to create a non-trivial secure embedded processor with many modern microarchitectural features. We empirically evaluate the resulting hardware's area and energy overhead and compare them with alternative designs.
Applications in the cloud are vulnerable to several attack scenarios. In one possibility, an untrusted cloud operator can examine addresses on the memory bus and use this information leak to violate privacy guarantees, even if data is encrypted. The Oblivious RAM (ORAM) construct was introduced to eliminate such information leak and these frameworks have seen many innovations in recent years. In spite of these innovations, the overhead associated with ORAM is very significant. This paper takes a step forward in reducing ORAM memory bandwidth overheads. We make the case that, similar to a cache hierarchy, a lightweight ORAM that fronts the full-fledged ORAM provides a boost in efficiency. The lightweight ORAM has a smaller capacity and smaller depth, and it can relax some of the many constraints imposed on the full-fledged ORAM. This yields a 2-level hierarchy with a relaxed ORAM and a full ORAM. The relaxed ORAM adopts design parameters that are optimized for efficiency and not capacity. We introduce a novel metadata management technique to further reduce the bandwidth for relaxed ORAM access. Relaxed ORAM accesses preserve the indistinguishability property and are equipped with an integrity verification system. Finally, to eliminate information leakage through LLC and relaxed ORAM hit rates, we introduce a deterministic memory scheduling policy. On a suite of memory-intensive applications, we show that the best Relaxed Hierarchical ORAM (ρ) model yields a performance improvement of 50%, relative to a Freecursive ORAM baseline.
High assurance systems used in avionics, medical implants, and cryptographic devices often rely on a small trusted base of hardware and software to manage the rest of the system. Crafting the core of such a system in a way that achieves flexibility, security, and performance requires a careful balancing act. Simple static primitives with hard partitions of space and time are easier to analyze formally, but strict approaches to the problem at the hardware level have been extremely restrictive, failing to allow even the simplest of dynamic behaviors to be expressed.
Abstract: Cloud computing is gaining popularity in workflow scheduling, particularly for scientific workflows. During resource allocation, the cloud computing environment may face considerable issues in terms of execution time and execution cost, which may disrupt the quality of service delivered to clients. Text passwords are still a widely used authentication mechanism, but graphical passwords and biometrics are also used; nevertheless, to make any system highly secure, both usability and security need to be improved. Cloud resource scheduling is a critical aspect of cloud computing that involves efficiently allocating and managing cloud resources to meet the demands of applications and services. Various scheduling approaches aim to optimize resource utilization, minimize costs, and improve overall system performance. Automatically scaling resources up or down based on workload demand improves cost-efficiency by dynamically adjusting resources to match varying workloads, ensuring optimal performance during peak times and cost savings during low demand. Our proposed FSOS (Fastest Scheduling Optimized System) algorithm is used within the framework so that performance may increase; it uses components of both graphical and text password schemes. The proposed FSOS model is compared with well-known algorithms such as multi-objective scheduling (MOS) and cost-aware scheduling (SCAS), which are among the scheduling algorithms used to make cloud services secure and highly available to end users. In this work, we attempt to show the execution performance of the three most well-known static task scheduling algorithms across different system configurations. All the algorithms compute the total execution cost and execution time.
Computational characteristics of a program can potentially be used to identify malicious programs from benign ones. However, systematically evaluating malware detection techniques, especially when malware samples are hard to run correctly and can adapt their computational characteristics, is a hard problem.
For many years, swarm intelligence (SI) algorithms have shown successful performance on complex optimization problems in many fields. Despite this success, challenges remain, such as computational complexity, premature convergence, sensitivity to parameters, and limited scalability. These challenges create a unique opportunity for SI algorithms to be further enhanced. Parallelization and hybrid models can save a great deal of computational resource consumption. Furthermore, moving past premature convergence yields more robust algorithms that can discover global optima. Moreover, the theoretical foundations of SI algorithms are still in their infancy and motivate novel methods to improve predictability and reliability. The sensitivity of SI algorithms to parameter configurations motivates the development of adaptive methods that dynamically adjust parameters, while the demand for a better exploration-exploitation balance creates opportunities for convergence strategies that improve efficiency. Moreover, handling more sophisticated constraints means that dedicated mechanisms could greatly improve the efficiency of constrained real-world tasks. As slow convergence and overfitting become noticeable obstacles, strategies for accelerated convergence and regularization techniques present opportunities for better and more generalized results. Finally, new designs aimed at scalability and memory efficiency will broaden the applicability of swarm intelligence algorithms in large-scale, resource-constrained environments. We present a survey of recent developments in SI algorithms, highlighting both their strengths and challenges, as well as potential new applications of these algorithms in optimization problems.
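For readers unfamiliar with the basic mechanics, a minimal particle swarm optimization loop is sketched below; the inertia and acceleration coefficients are typical textbook values, not recommendations drawn from the survey.

```python
import numpy as np

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))       # positions
    v = np.zeros_like(x)                             # velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()             # global best
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Velocity update balances inertia (w), cognitive pull toward each
        # particle's own best (c1), and social pull toward the swarm best (c2).
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, f(g)

sphere = lambda z: float(np.sum(z ** 2))
print(pso(sphere))      # converges near the origin
```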
Abstract Mobile devices are equipped with increasingly smart batteries designed to provide responsiveness and extended lifetime. However, such smart batteries may present a threat to users’ privacy. We demonstrate that the phone’s power trace sampled from the battery at 1KHz holds enough information to recover a variety of sensitive information. We show techniques to infer characters typed on a touchscreen; to accurately recover browsing history in an open-world setup; and to reliably detect incoming calls, and the photo shots including their lighting conditions. Combined with a novel exfiltration technique that establishes a covert channel from the battery to a remote server via a web browser, these attacks turn the malicious battery into a stealthy surveillance device. We deconstruct the attack by analyzing its robustness to sampling rate and execution conditions. To find mitigations we identify the sources of the information leakage exploited by the attack. We discover that the GPU or DRAM power traces alone are sufficient to distinguish between different websites. However, the CPU and power-hungry peripherals such as a touchscreen are the primary sources of fine-grain information leakage. We consider and evaluate possible mitigation mechanisms, highlighting the challenges to defend against the attacks. In summary, our work shows the feasibility of the malicious battery and motivates further research into system and application-level defenses to fully mitigate this emerging threat.
Research on smart cities using wireless sensor networks (WSNs) can benefit from the development of the Internet of Things (IoT), since the goals of the two technologies are comparable. At the same time, research on managing mobile crowd sensing (MCS) and WSN innovations encounters fresh potential and difficulties, particularly when implemented in a sizable context like a smart city setting. Fresh approaches are being put forward to handle current WSN and resource utilization challenges. To integrate the two sensing technologies, WSN and MCS, this study suggests a hybrid routing protocol based on the RPL protocol. The idea is to help the fixed WSN nodes improve performance by appropriately using MCS nodes. A fixed WSN has been used to evaluate the suggested protocol and to examine the effect of the integration on WSN functionality. When compared to RPL without MCS integration, the proposed protocol shows a good improvement: a 17% higher packet delivery ratio, 50% lower end-to-end latency, and 25% lower energy usage. Hence, this research argues that the hybrid RPL protocol may be effective for sensing and data acquisition, particularly in urban and smart city situations.
Speculative attacks such as Spectre can leak secret information without being discovered by the operating system. Speculative execution vulnerabilities are finicky and deep in the sense that to exploit them, it requires intensive manual labor and intimate knowledge of the hardware. In this paper, we introduce SpecRL, a framework that utilizes reinforcement learning to find speculative execution leaks in post-silicon (black box) microprocessors.
Information flow tracking is an effective tool in computer security for detecting unintended information flows. However, software based information flow tracking implementations have drawbacks in preciseness and performance. As a result, researchers have begun to explore tracking information flow in hardware, and more specifically, understanding the interference of individual bits of information through logical functions. Such gate level information flow tracking (GLIFT) can track information flow in a system at the granularity of individual bits. However, the theoretical basis for GLIFT, which is essential to its adoption in real applications, has never been thoroughly studied. This paper provides fundamental analysis of GLIFT by introducing definitions, properties, and the imprecision problem with a commonly used shadow logic generation method. This paper also presents a solution to this imprecision problem and provides results that show this impreciseness can be tolerated for the benefit of lower area and delay.
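A concrete example of the shadow logic discussed above is the standard precise GLIFT construction for a 2-input AND gate, sketched here in Python; the taint-bit naming is ours.

```python
# Precise GLIFT shadow logic for a 2-input AND gate (standard construction).
# a, b are data bits; at, bt are their taint (information-flow) labels.
def and_glift(a, b, at, bt):
    out = a & b
    # The output carries tainted information only if a tainted input can
    # actually influence it: a tainted 'a' matters when b = 1 (or b is
    # itself tainted), and symmetrically for 'b'.
    out_t = (at & bt) | (at & b) | (bt & a)
    return out, out_t

# Example: 'a' is tainted but b = 0 and untainted, so the output 0 reveals
# nothing about 'a' -- precise GLIFT reports no flow, whereas naive label-OR
# tracking would conservatively mark the output tainted.
print(and_glift(a=1, b=0, at=1, bt=0))   # -> (0, 0)
print(and_glift(a=1, b=1, at=1, bt=0))   # -> (1, 1)
```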
The Internet of Things (IoT) is a relatively new kind of Internet connectivity that connects physical objects to the Internet in a way that was not possible in the past. The IoT has a larger attack surface as a result of its hyperconnectivity and heterogeneity. In addition, since IoT devices are deployed in both managed and uncontrolled contexts, it is conceivable for malicious actors to build new attacks that target these devices. As a result, the IoT requires self-protection security systems that are able to autonomously interpret attacks in IoT traffic and efficiently handle the attack scenario by triggering appropriate reactions at a pace that is faster than what is currently available. In order to fulfill this requirement, fog computing must be utilised; this type of computing has the capability of integrating an intelligent self-protection mechanism into the distributed fog nodes, allowing the IoT application to be protected with the least amount of human intervention while also allowing for faster management of attack scenarios. Implementing a self-protection mechanism at the fog nodes is the primary objective of this research work. This mechanism should be able to detect and predict known attacks based on predefined attack patterns, predict novel attacks with no predefined attack patterns, and then choose the most appropriate response to neutralise the identified attack. In the IoT environment, a distributed Gaussian process regression is used at the fog nodes to anticipate attack patterns that have not been observed in the past, allowing new cyberattacks in the environment to be predicted. It predicts attacks in an uncertain IoT setting at a faster rate and with greater precision than prior techniques, and it is able to anticipate both low-rate and high-rate assaults in a more timely manner within the dispersed fog nodes, which enables a more accurate defence. In conclusion, a fog computing-based self-protection system is developed that uses fuzzy logic to choose the most appropriate reaction to detected or anticipated assaults, based on the suggested detection and prediction mechanisms. The findings of the experimental investigation indicate that the proposed system identifies threats, lowers bandwidth usage, and thwarts assaults at a rate that is twenty-five percent faster than the cloud-based system implementation.
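A minimal, non-distributed sketch of the Gaussian process regression idea is given below using scikit-learn; the traffic series is synthetic, and the fog-node distribution and fuzzy response selection described above are not reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# Synthetic per-minute count of suspicious packets observed at a fog node.
t = np.arange(0, 120, 1.0).reshape(-1, 1)
rate = 20 + 15 * np.sin(t.ravel() / 10) + rng.normal(0, 2, size=len(t))

kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, rate)

# Predict the next 15 minutes with uncertainty; a spike in the predicted
# mean (or very wide uncertainty) could trigger a pre-emptive mitigation.
t_future = np.arange(120, 135, 1.0).reshape(-1, 1)
mean, std = gpr.predict(t_future, return_std=True)
print(mean[:3], std[:3])
```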
Crimes committed online rank among the most critical global concerns. Daily, they cause national and individual economies to suffer massive financial losses. With the proliferation of cyber-attacks, cybercrime has also been on the rise. To effectively combat cybercrime, it is essential to identify its perpetrators and understand their methods. Identifying and preventing cyber-attacks are difficult tasks. To address these concerns, however, recent research has produced security models and forecasting tools grounded in artificial intelligence. Numerous methods for predicting criminal behaviour are available in the literature. While they may not be perfect, they may help in predicting cybercrime and cyber-attack tactics. To find out whether an attack happened and, if so, who was responsible, one way to approach this problem is by using real-world data. There is data about the crime, the perpetrator's demographics, the amount of property damaged, and the entry points for the attack. Potentially, by submitting applications to forensics teams, victims of cyber-attacks may obtain this information. This study uses ML methods to analyse cybercrime using two models and to forecast how the specified characteristics contribute to the detection of the cyber-attack methodology and perpetrator. Based on the comparison of eight distinct machine-learning methods, their accuracy was quite comparable. The linear Support Vector Machine (SVM) achieved the highest accuracy in predicting cyber-attack tactics. The initial model gave us a decent notion of the attacks that the victims would face. The most successful technique for detecting malevolent actors was logistic regression, according to the success rate. To anticipate who the perpetrator and victim would be, the second model compared their traits. A person's chances of being a victim of a cyber-attack decrease as their income and level of education rise. The proposed approach is expected to be used by departments dealing with cybercrime, making cyber-attack identification easier and the fight against such attacks more efficient.
This paper presents a new, co-designed compiler and architecture called GhostRider for supporting privacy preserving computation in the cloud. GhostRider ensures all programs satisfy a property called memory-trace obliviousness (MTO): Even an adversary that observes memory, bus traffic, and access times while the program executes can learn nothing about the program's sensitive inputs and outputs. One way to achieve MTO is to employ Oblivious RAM (ORAM), allocating all code and data in a single ORAM bank, and to also disable caches or fix the rate of memory traffic. This baseline approach can be inefficient, and so GhostRider's compiler uses a program analysis to do better, allocating data to non-oblivious, encrypted RAM (ERAM) and employing a scratchpad when doing so will not compromise MTO. The compiler can also allocate to multiple ORAM banks, which sometimes significantly reduces access times.We have formalized our approach and proved it enjoys MTO. Our FPGA-based hardware prototype and simulation results show that GhostRider significantly outperforms the baseline strategy.
Sensors and biosensors are devices for analytical purposes used for the quantification and qualification of an analyte of interest. The biosensor is able to interpret the chemical and physical changes produced in the presence of the compound to be analyzed, giving rise to an electronic signal capable of being interpreted. The newest application fields for biosensors vary depending on the type of transducer used and the biological agent, with the main applications being food, pharmaceutical and chemical industries, oil and gas prospecting, environmental control, quality control, medicine and engineering, biomedicine, pesticide control in agriculture, anti-doping control, etc. Biosensors have been linked with nanotechnology to improve their quality and reduce their size. With artificial intelligence, the quality of analysis is improved and provides concise results from a large amount of data. In this work, a study was carried out to understand the current scenario of this technology.
Privacy and integrity are important security concerns. These concerns are addressed by controlling information flow, i.e., restricting how information can flow through a system. Most proposed systems that restrict information flow make the implicit assumption that the hardware used by the system is fully ``correct'' and that the hardware's instruction set accurately describes its behavior in all circumstances. The truth is more complicated: modern hardware designs defy complete verification; many aspects of the timing and ordering of events are left totally unspecified; and implementation bugs present themselves with surprising frequency. In this work we describe Sapper, a novel hardware description language for designing security-critical hardware components. Sapper seeks to address these problems by using static analysis at compile-time to automatically insert dynamic checks in the resulting hardware that provably enforce a given information flow policy at execution time. We present Sapper's design and formal semantics along with a proof sketch of its security. In addition, we have implemented a compiler for Sapper and used it to create a non-trivial secure embedded processor with many modern microarchitectural features. We empirically evaluate the resulting hardware's area and energy overhead and compare them with alternative designs.
Enhancing Internet of Things (IoT) security is among the most pressing concerns confronting the information technology sector. With large numbers of IoT systems being created and deployed, it is difficult for these systems to interact securely without affecting performance. The difficulty arises because the majority of IoT systems are resource-restricted and therefore possess limited processing capability. In this study, we examine how anomaly detection and intrusion prevention can strengthen IoT security. To improve their performance, a binary categorization of typical and unusual IoT traffic is created. We carefully evaluate the specificity and complexity of IoT security protection, and find that Artificial Intelligence (AI) approaches such as Machine Learning (ML) and ensemble classifiers can offer strong new capabilities to satisfy IoT security demands. This enhancement can be attributed to ensemble learning methods that combine a variety of learning processes with differing capacities. By combining these methods, we were able to improve the reliability of our predictions while decreasing the likelihood of classification errors. The experimental results suggest that the architecture can enhance the effectiveness of the anomaly detection and intrusion prevention system, with an accuracy of 0.9863.
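A minimal sketch of the ensemble idea, assuming a binary-labelled IoT traffic dataset (0 = normal, 1 = anomalous); soft voting over heterogeneous learners is one common way to combine learning processes with differing capacities, not necessarily the exact ensemble used in the study:

```python
# Hedged sketch: combine several base classifiers by soft voting on
# stand-in IoT-flow features.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# stand-in for extracted IoT traffic features (imbalanced: mostly normal)
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
                ("lr", LogisticRegression(max_iter=1000)),
                ("nb", GaussianNB())],
    voting="soft")
ensemble.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))
```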
To understand the function of proteins in living organisms, we need to derive protein sequences from genome sequencing projects. For this purpose, various tools and recent computational methods can be used, and these methods relate directly to protein function. Nuclear magnetic resonance (NMR) is helpful for determining 3-D protein structure. In this paper, we use a distinct method to determine protein structural classes. A total of 1491 proteins were taken from the Biological Magnetic Resonance Bank (BMRB). The Structural Classification of Proteins (SCOP) scheme was used to locate a set of 119 traits divided into 5 separate classes. After conducting the study, we were able to determine the structural classes of proteins with an accuracy of 80%, evaluated with the Matthews correlation coefficient. The results indicate that this NMR-based method can serve as a low-resolution tool for protein structural class identification.
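A sketch of the evaluation step only: accuracy and the Matthews correlation coefficient for a five-class structural classification task. The feature matrix, labels, and classifier below are random placeholders (so scores will be near chance), not the BMRB-derived data or the study's model:

```python
# Hedged sketch: evaluate a multi-class classifier with accuracy and MCC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, matthews_corrcoef

rng = np.random.default_rng(0)
X = rng.normal(size=(1491, 119))      # placeholder: 1491 proteins x 119 traits
y = rng.integers(0, 5, size=1491)     # placeholder: 5 structural classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("MCC     :", matthews_corrcoef(y_te, pred))
```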
Clustering is a ubiquitous task in data science. Compared to the commonly used $k$-means clustering, $k$-medoids clustering requires the cluster centers to be actual data points and support arbitrary distance metrics, which permits greater interpretability and the clustering of structured objects. Current state-of-the-art $k$-medoids clustering algorithms, such as Partitioning Around Medoids (PAM), are iterative and are quadratic in the dataset size $n$ for each iteration, being prohibitively expensive for large datasets. We propose BanditPAM, a randomized algorithm inspired by techniques from multi-armed bandits, that reduces the complexity of each PAM iteration from $O(n^2)$ to $O(n \log n)$ and returns the same results with high probability, under assumptions on the data that often hold in practice. As such, BanditPAM matches state-of-the-art clustering loss while reaching solutions much faster. We empirically validate our results on several large real-world datasets, including a coding exercise submissions dataset, the 10x Genomics 68k PBMC single-cell RNA sequencing dataset, and the MNIST handwritten digits dataset. In these experiments, we observe that BanditPAM returns the same results as state-of-the-art PAM-like algorithms up to 4x faster while performing up to 200x fewer distance computations. The improvements demonstrated by BanditPAM enable $k$-medoids clustering on a wide range of applications, including identifying cell types in large-scale single-cell data and providing scalable feedback for students learning computer science online. We also release highly optimized Python and C++ implementations of our algorithm.
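A simplified illustration of the sampling idea behind bandit-based medoid search (not the authors' implementation): instead of computing all n distances for every candidate, estimate each candidate's mean distance from a random subset of reference points, which is the core trick that lets per-step cost drop well below O(n^2):

```python
# Hedged sketch: estimate the best single medoid from sampled distances.
import numpy as np

def sampled_medoid(X, n_samples=100, rng=None):
    """Pick the point whose *estimated* mean distance to the data is smallest."""
    rng = rng or np.random.default_rng(0)
    n = len(X)
    ref_idx = rng.choice(n, size=min(n_samples, n), replace=False)
    ref = X[ref_idx]                                  # sampled reference points
    # mean distance of every candidate to the sample only: O(n * n_samples)
    est_loss = np.linalg.norm(X[:, None, :] - ref[None, :, :], axis=2).mean(axis=1)
    return int(np.argmin(est_loss))

X = np.random.default_rng(1).normal(size=(5000, 2))
print("estimated medoid index:", sampled_medoid(X))
```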
Future wireless communications must overcome the challenge of efficient channel management to keep up with the growing need for bandwidth and transmission rate. Non-orthogonal multiple access (NOMA) is among the effective spectrum-sharing techniques used in 5G backhaul wireless mesh networks. In this study, we describe a power-demand-based channel allocation mechanism for 5G backhaul wireless mesh networks that employs NOMA to increase channel utility while considering the traffic demands of small cells, using physical-layer transmission. By jointly optimizing the uplink/downlink NOMA channel assignment, the main objective is to promote user equity. The procedure has two steps: an initial channel allocation is first determined by treating the double-sided many-to-many assignment as a travelling salesman problem (TSP), and a random velocity is then added to a modified particle swarm optimization (PSO) to improve convergence behavior and increase exploration capacity. Simulation results are used to measure the scheme's performance, accounting for throughput, signal-to-interference-plus-noise ratio (SINR), spectral efficiency, sum rate, outage probability, and fairness. The proposed plan increases fairness between individual stations while maximizing network capacity, and experimental findings demonstrate that it outperforms current approaches.
As more critical applications move to the cloud, there is a pressing need to provide privacy guarantees for data and computation. While cloud infrastructures are vulnerable to a variety of attacks, in this work, we focus on an attack model where an untrusted cloud operator has physical access to the server and can monitor the signals emerging from the processor socket. Even if data packets are encrypted, the sequence of addresses touched by the program serves as an information side channel. To eliminate this side channel, Oblivious RAM constructs have been investigated for decades, but continue to pose large overheads. In this work, we make the case that ORAM overheads can be significantly reduced by moving some ORAM functionality into the memory system. We first design a secure DIMM (or SDIMM) that uses commodity low-cost memory and an ASIC as a secure buffer chip. We then design two new ORAM protocols that leverage SDIMMs to reduce bandwidth, latency, and energy per ORAM access. In both protocols, each SDIMM is responsible for part of the ORAM tree. Each SDIMM performs a number of ORAM operations that are not visible to the main memory channel. By having many SDIMMs in the system, we are able to achieve highly parallel ORAM operations. The main memory channel uses its bandwidth primarily to service blocks requested by the CPU, and to perform a small subset of the many shuffle operations required by conventional ORAM. The new protocols guarantee the same obliviousness properties as Path ORAM. On a set of memory-intensive workloads, our two new ORAM protocols – Independent ORAM and Split ORAM – are able to improve performance by 1.9x and energy by 2.55x, compared to Freecursive ORAM.
Since they are used in a wide array of real-life applications, wireless sensor devices in Internet of Things (IoT) systems will be among the most prolific sources of big data on the Internet. The huge quantity of data collected from sensing equipment increases transmission overhead, shortening the already limited lifespan of IoT sensing devices. As a result, the sensed information must be cleansed and reduced in order to cut transmission costs and conserve power on the sensing devices. This research proposes a Data Reduction and Cleansing Technique (DRCT) for reducing power consumption in IoT-based wireless sensor networks (WSNs). The method relies on two levels of data cleansing and reduction: sensing and aggregation.
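A hedged sketch of the sensing-level reduction idea in general: transmit a reading only when it differs from the last transmitted value by more than a tolerance, discarding redundant samples. The tolerance and values are illustrative and not taken from the DRCT paper:

```python
# Illustrative delta-threshold reduction of a sensor reading stream.
def reduce_readings(readings, tolerance=0.5):
    """Return the subset of readings worth transmitting."""
    kept = []
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) > tolerance:
            kept.append(value)
            last_sent = value
    return kept

raw = [21.0, 21.1, 21.05, 22.0, 22.1, 25.3, 25.35]
print(reduce_readings(raw))   # -> [21.0, 22.0, 25.3]
```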
High-assurance embedded systems are deployed for decades and are expensive to re-certify; hence, each new attack is an unpatchable problem that can only be detected by monitoring out-of-band channels such as the system's power trace or electromagnetic emissions. Micro-architectural attacks, for example, have recently come to prominence since they break all existing software-isolation-based security, for example by hammering memory rows to gain root privileges or by abusing speculative execution and shared hardware to leak secret data. This work is the first to use anomalies in an embedded system's power trace to detect evasive micro-architectural attacks. To this end, we introduce power-mimicking micro-architectural attacks, including DRAM rowhammer attacks, side/covert-channel attacks, and speculation-driven attacks, to study their evasiveness. We then quantify the operating range of the power-anomaly detector using the Odroid XU3 board, showing that rowhammer attacks cannot evade detection while covert-channel and speculation-driven attacks can evade detection but are forced to operate at 36× and 7× lower bandwidth, respectively. Our power-anomaly detector is efficient and can be embedded out-of-band into, e.g., programmable batteries. While rowhammer, side-channel, and speculation-driven attack defenses require invasive code and hardware changes in general-purpose systems, we show that power anomalies are a simple and effective defense for embedded systems. Power anomalies can help future-proof embedded systems against vulnerabilities that are likely to emerge as new hardware such as phase-change memories and accelerators becomes mainstream.
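A minimal sketch of a power-anomaly detector in general terms: learn the mean and spread of benign power-trace windows, then flag windows whose average power deviates by more than k standard deviations. The window size, threshold, and synthetic traces are illustrative assumptions, not the paper's detector:

```python
# Hedged sketch: windowed mean-power anomaly detection on a power trace.
import numpy as np

def fit_baseline(benign_trace, window=256):
    w = np.asarray(benign_trace)[:len(benign_trace) // window * window].reshape(-1, window)
    means = w.mean(axis=1)
    return means.mean(), means.std()

def detect(trace, mu, sigma, window=256, k=4.0):
    w = np.asarray(trace)[:len(trace) // window * window].reshape(-1, window)
    return np.abs(w.mean(axis=1) - mu) > k * sigma   # True = anomalous window

rng = np.random.default_rng(0)
benign = rng.normal(1.0, 0.05, 100_000)              # stand-in benign power samples
mu, sigma = fit_baseline(benign)
suspect = np.concatenate([rng.normal(1.0, 0.05, 5_000),
                          rng.normal(1.4, 0.05, 5_000)])  # injected high-power activity
print("anomalous windows:", int(detect(suspect, mu, sigma).sum()))
```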
Understanding the flow of information is an important aspect in computer security. There has been a recent move towards tracking information in hardware and understanding the flow of individual bits through Boolean functions. Such gate level information flow tracking (GLIFT) provides a precise understanding of all flows of information. This paper presents a theoretical analysis of GLIFT. It formalizes the problem, provides fundamental definitions and properties, introduces precise symbolic representations of the GLIFT logic for basic Boolean functions, and gives analytic and quantitative analysis of the GLIFT logic.
Recent work suggests that quantum machine learning techniques can be used for classical image classification by encoding the images in quantum states and using a quantum neural network for inference. However, such work has been restricted to very small input images, at most 4 x 4, that are unrealistic and cannot even be accurately labeled by humans. The primary difficulty in using larger input images is that hitherto-proposed encoding schemes necessitate more qubits than are physically realizable. We propose a framework to classify larger, realistic images using quantum systems. Our approach relies on a novel encoding mechanism that embeds images in quantum states while necessitating fewer qubits than prior work. Our framework is able to classify images that are larger than previously possible, up to 16 x 16 for the MNIST dataset on a personal laptop, and obtains accuracy comparable to classical neural networks with the same number of learnable parameters. We also propose a technique for further reducing the number of qubits needed to represent images that may result in an easier physical implementation at the expense of final performance. Our work enables quantum machine learning and classification on classical datasets of dimensions that were previously intractable by physically realizable quantum computers or classical simulation.
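An illustrative sketch of why the encoding matters: amplitude encoding stores a normalised image of d pixels in ceil(log2(d)) qubits, so a 16 x 16 image needs only 8 qubits. The code builds the state vector classically; it is a generic encoding, not the paper's specific scheme:

```python
# Hedged sketch: classical construction of an amplitude-encoded image state.
import numpy as np

def amplitude_encode(image):
    flat = image.astype(float).ravel()
    n_qubits = int(np.ceil(np.log2(len(flat))))
    padded = np.zeros(2 ** n_qubits)
    padded[:len(flat)] = flat
    state = padded / np.linalg.norm(padded)   # unit-norm quantum state vector
    return state, n_qubits

image = np.random.default_rng(0).random((16, 16))
state, n_qubits = amplitude_encode(image)
print("qubits needed:", n_qubits, "| state norm:", round(float(np.linalg.norm(state)), 3))
```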
Speech is a fundamental aspect of communication between human beings, and speech recognition is the overall process of converting sound into corresponding text in a specific language. The implementation of speech recognition has helped individuals, businesses, and others communicate and interact better in order to realise their objectives. It can be regarded as the process of producing a text message, or some form of meaning, from the voice input of another individual. Speech analytics is a key part of speech recognition, as it converts the individual's voice into digital form so it can be stored and transmitted as required using computing equipment. Speech synthesis is the reverse of speech recognition: it converts data from digitised form into voice, allowing users to listen quickly and easily. Applications of speech recognition in organisations include building more interactive virtual assistants, supporting customers by addressing their queries and offering solutions quickly, and identifying individuals so that they can access classified information or reset their passwords. Rapid development in the technology domain has deepened the importance of artificial intelligence in different areas of work and life, and applying AI to speech recognition helps businesses and individuals deliver better services to stakeholders and perform tasks efficiently. Hence, this study analyses the key determinants of using AI in speech recognition for an effective multifunctional machine learning platform using regression analysis.
The varied types of Vehicular Networks Communication models (VNCM) circumscribe the decision-makers into a chaotic decision-making environment. The choice-making of VNCM is purely dependent on the attributes of traffic volume, vehicle density, road type, weather conditions, and application demands. However, the attribute of management techniques with the attribute values of dynamic routing, load balancing, and congestion control is more significant in choosing the desirable communication model. This challenging decision-making problem shall be resolved by applying the machine learning approach. This chapter proposes a matching model with supervised data that associates VNCM with management techniques and other core attributes. Supervised machine learning algorithms are applied to develop a matching model that best associates the VNCM with the management techniques considering vehicular efficiency. The performance metrics are computed to determine the most promising algorithm and to validate the consistency of the model. The results of the decision model facilitate the choice-making of the VNCM and also serve as the foundation for the construction of a predictive model.
Deep learning has revolutionized many fields but has introduced the "black-box" problem, in which model predictions are neither interpretable nor transparent. Explainable Artificial Intelligence (XAI) attempts to overcome this problem by bringing interpretability and transparency to AI systems. We review important XAI methods, focusing on LIME, SHAP, and saliency maps, that explain the factors behind model predictions. The paper discusses the role of XAI in high-stakes fields such as healthcare, finance, and autonomous systems, emphasizing why trust is important in these sectors and how explanations help organisations adhere to regulations while promoting ethical AI use. Despite the promise of XAI in promoting transparency, challenges persist, including the standardization of interpretability metrics and the difficulty some users have in relating explanations to transparent reasoning. The study highlights the need for XAI frameworks that are not only robust but also scalable, so as to bridge complex AI systems and their deployment in society. Ultimately, XAI enables responsible use of AI in the most critical domains of modern life by creating an atmosphere of accountability, fairness, and trust.
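A hedged example of one of the reviewed techniques: computing SHAP values for a tree model on a synthetic tabular task. The model and data are placeholders, and the call pattern follows the shap package's commonly documented TreeExplainer usage:

```python
# Illustrative sketch: per-feature SHAP contributions for a tree ensemble.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])   # contributions for 5 samples
print(np.array(shap_values).shape)           # contributions per class/sample/feature
```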
The Internet of Things (IoT) enables smart settings that support human pursuits. Although the IoT has increased economic opportunities and made numerous human conveniences possible, it has also made it easier for intruders or attackers to exploit the technology, either by attacking it or by evading it. Security and privacy are therefore the primary concerns for IoT networks. Several intrusion detection systems (IDS) have been developed for IoT networks using various optimization techniques. But as data dimensionality has increased, the search space has grown significantly, posing difficult problems for optimization techniques such as particle swarm optimization (PSO). To overcome these obstacles, this work proposes a feature selection approach called improved dynamic sticky binary particle swarm optimization (IDSBPSO), developed to increase the searchability of sticky binary particle swarm optimization (SBPSO) by introducing a dynamic search-space reduction mechanism and several dynamic parameters. Using this methodology, an IDS was built to identify malicious data flows in IoT networks and assessed on the IoTID20 and UNSW-NB15 IoT network datasets. It was found that, even with fewer features, IDSBPSO typically attained higher or comparable accuracy. Moreover, compared with conventional PSO-based feature selection methods, it dramatically reduced computational cost and prediction time.
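A simplified binary-PSO feature selection sketch (a generic BPSO, not the paper's IDSBPSO variant): each particle is a 0/1 mask over features and fitness is the cross-validated accuracy of a classifier on the selected subset; all constants below are illustrative choices:

```python
# Hedged sketch: generic binary PSO for wrapper-style feature selection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=20, n_informative=5, random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

n_particles, n_feat, iters = 10, X.shape[1], 15
pos = rng.integers(0, 2, (n_particles, n_feat))          # 0/1 feature masks
vel = rng.normal(0, 1, (n_particles, n_feat))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_fit)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_feat))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = (rng.random((n_particles, n_feat)) < 1 / (1 + np.exp(-vel))).astype(int)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[np.argmax(pbest_fit)].copy()

print("selected features:", np.flatnonzero(gbest),
      "| CV accuracy:", round(float(pbest_fit.max()), 3))
```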
The use of network intrusion detection systems is expanding as cloud computing becomes more widespread. Network intrusion detection systems (NIDS) are crucial to network security, since network traffic is increasing and cyberattacks are launched more frequently. Anomaly detection algorithms in intrusion detection rely on either machine learning or pattern matching. Pattern-matching methods frequently produce false positives, while AI/ML-based systems predict possible attacks by identifying relationships between metrics or features, or collections of them. KNN, SVM, and similar models are the most widely used, but they consider only a few features, are not very accurate, and have a higher false positive rate. This work develops a deep learning model that combines the benefits of two-dimensional LSTMs and convolutional neural networks to learn the characteristics of spatial and temporal data. The model was developed and evaluated using the freely available NSL-KDD dataset. The suggested model is very effective, having a low false positive rate and a high detection rate, and its performance compares favourably with sophisticated network intrusion detection systems built on other machine learning and deep learning models.
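An architecture sketch only, not necessarily the paper's exact layer arrangement or hyperparameters: a generic CNN-plus-LSTM model over flattened NSL-KDD-style records, where convolutions extract local feature patterns and the LSTM models sequential structure; the record length is an assumed value:

```python
# Hedged sketch: a generic CNN + LSTM binary intrusion-detection model.
import tensorflow as tf

n_features = 122   # assumed length of a one-hot encoded NSL-KDD record

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_features, 1)),
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Conv1D(128, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # attack vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```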
In this research, we introduce Interval-Valued Temporal Neutrosophic Fuzzy Sets (IVTNFS), define some of their basic operations, and examine some of their properties. Choosing membership and non-membership values in Neutrosophic Fuzzy Sets is not always possible to our satisfaction; IVTNFS plays the more important role here, because incorporating the movement of time over an interval in NFS provides a better basis for decision making, such as deciding on careers in real-life situations.
Supply chains have become an essential part of this globally competitive world as organizations seek to build strategic advantages. To put it differently, most companies aim to adopt new businesses and raise their competitiveness in the global market. To do so, many companies choose strategies such as corporate social responsibility and supply chain management that allow them to strengthen their finances. Moreover, stakeholders and customers have raised their concerns about corporate social responsibility with respect to the global supply chain and expect firms to act in a socially responsible way. The majority of organizations are believed to place high importance on implementing corporate social responsibility (CSR) activities in order to enhance the whole business. Active participation in CSR activities benefits companies in terms of profit, virtues, and other moral standards. However, corporate firms often exhibit their virtues while ignoring moral standards, which reveals a nature of hypocrisy. Corporate hypocrisy arises when companies or organizations publicly commit to doing something in a corporate social responsibility context and end up doing something entirely different. Keeping this in mind, the current study aims to reveal the hidden nature of corporate hypocrisy and its effect on internal stakeholders, explicitly focusing on employees' observations of corporate hypocrisy, and to examine the hypocritical behaviour of reputed firms towards consumers, which in turn affects the global supply chain. For this, data analysis is carried out with consumer firms across countries. Results show that firms are held fully responsible for their behaviour, which eventually damages their reputation.
Cyber-attacks pose a threat to information security, and the importance of cyber awareness is increasing as internet and data usage rates rise. This research concentrates on the connections between general human awareness, knowledge, and behavior in relation to security tools. An investigation into phishing attacks was conducted as part of cyber security simulation exercises for staff members of a sizable financial services company with hundreds of branches across the country. Information about participants' use of social media platforms, used to gauge Cyber Security Awareness (CSA), was gathered through an online survey. The results show that despite having a sufficient understanding of cyber threats, internet users take only a few simple and common precautions, which is insufficient. The results also show a connection between greater cyber experience and understanding and the threshold of cyber awareness, beyond differences in participating member country or gender. Results, implications, and recommendations useful for cyber security training programs are presented and discussed.
This research looks at how data methods and IoT technologies can be used effectively for planning and improving supply chain transparency. It seeks to explain the importance of metrics such as cycle time, lead time, on-time delivery, inventory turns, and fill rate. Analyzing company objectives and requirements against these identified metrics, the study underscores the importance of KPI visualization in helping users comprehend organizational processes and pursue improvement. The study also explores how the effectiveness of the IoT infrastructure is assessed, and how IoT devices are chosen and deployed for strategic purposes and for building real-time data acquisition systems. In addition, the article covers approaches to data acquisition and assimilation, with particular focus on understanding machine performance, environmental conditions, and logistical aspects through visualization of IoT data. The study also emphasizes data quality governance mechanisms to ensure the accuracy and completeness of IoT data and thereby increase confidence in data reliability.
Hardware-based malware detectors (HMDs) are a key emerging technology to build trustworthy systems, especially mobile platforms. Quantifying the efficacy of HMDs against malicious adversaries is thus an important problem. The challenge lies in that real-world malware adapts to defenses, evades being run in experimental settings, and hides behind benign applications. Thus, realizing the potential of HMDs as a small and battery-efficient line of defense requires a rigorous foundation for evaluating HMDs. We introduce Sherlock — a white-box methodology that quantifies an HMD's ability to detect malware and identify the reason why. Sherlock first deconstructs malware into atomic, orthogonal actions to synthesize a diverse malware suite. Sherlock then drives both malware and benign programs with real user-inputs, and compares their executions to determine an HMD's operating range, i.e., the smallest malware actions an HMD can detect. We show three case studies using Sherlock to not only quantify HMDs' operating ranges but design better detectors. First, using information about concrete malware actions, we build a discrete-wavelet transform based unsupervised HMD that outperforms prior work based on power transforms by 24.7% (AUC metric). Second, training a supervised HMD using Sherlock's diverse malware dataset yields 12.5% better HMDs than past approaches that train on ad-hoc subsets of malware. Finally, Sherlock shows why a malware instance is detectable. This yields a surprising new result — obfuscation techniques used by malware to evade static analyses makes them more detectable using HMDs.
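A hedged sketch of the wavelet-feature idea only, not Sherlock's detector: discrete-wavelet-transform energy features of a telemetry trace (for example per-interval power or performance-counter samples), which an unsupervised detector could then threshold. It uses the PyWavelets package on synthetic traces:

```python
# Illustrative sketch: DWT sub-band energies as features for anomaly detection.
import numpy as np
import pywt

def dwt_energy_features(trace, wavelet="db4", level=4):
    coeffs = pywt.wavedec(trace, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])   # energy per sub-band

rng = np.random.default_rng(0)
benign = rng.normal(1.0, 0.05, 4096)                     # stand-in benign trace
suspect = benign + 0.3 * (np.arange(4096) % 64 == 0)     # periodic spikes injected

print("benign  sub-band energies:", np.round(dwt_energy_features(benign), 2))
print("suspect sub-band energies:", np.round(dwt_energy_features(suspect), 2))
```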
Identity and Access Management (IAM) is an access control service employed within cloud platforms. Customers must configure IAM to establish secure access control rules for their cloud organizations. However, IAM misconfigurations can be exploited to conduct Privilege Escalation (PE) attacks, resulting in significant financial losses. Consequently, addressing these PEs is crucial for improving security assurance for cloud customers. Nevertheless, the area of repairing IAM PEs due to IAM mis-configurations is relatively underexplored. To our knowledge, the only existing IAM repair tool called IAM-Deescalate focuses on a limited number of IAM PE patterns, indicating the potential for further enhancements. We propose a novel IAM Privilege Escalation Repair Engine called IAMPERE that efficiently generates an approximately minimal patch for repairing a broader range of IAM PEs. To achieve this, we first formulate the IAM repair problem into a MaxSAT problem. Despite the remarkable success of modern MaxSAT solvers, their scalability for solving complex repair problems remains a challenge due to the state explosion. To improve scalability, we employ deep learning to prune the search space. Specifically, we apply a carefully designed GNN model to generate an intermediate patch that is relatively small, but not necessarily minimal. We then apply a MaxSAT solver to search for a minimum repair within the space defined by the intermediate patch, as the final approximately minimum patch. Experimental results on both synthesized and real-world IAM misconfigurations show that, compared to IAM-Deescalate, IAMPERE repairs a significantly larger number of IAM misconfigurations with markedly smaller patch sizes.
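A toy illustration, not IAMPERE itself, of posing "smallest patch" as a weighted-MaxSAT-style search: hard constraints require every known privilege-escalation path to be broken, soft constraints prefer keeping granted permissions, and the cheapest set of permission removals is sought. The permissions and paths are hypothetical, and exhaustive search stands in for a real MaxSAT solver purely for clarity:

```python
# Hedged sketch: find an approximately minimal set of permission removals
# that breaks every (hypothetical) privilege-escalation path.
from itertools import combinations

permissions = ["iam:PassRole", "lambda:CreateFunction", "iam:CreatePolicyVersion"]
# each PE path is broken if at least one of its permissions is removed (hard clauses)
pe_paths = [{"iam:PassRole", "lambda:CreateFunction"}, {"iam:CreatePolicyVersion"}]

def minimal_patch():
    for size in range(len(permissions) + 1):          # try smallest removal sets first
        for removal in combinations(permissions, size):
            if all(path & set(removal) for path in pe_paths):
                return set(removal)

print("approximately minimal patch:", minimal_patch())
```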
The algebraic structures Group, Ring, Field, and Vector Space are important innovations in Mathematics, and most theoretical concepts of Mathematics are based on theorems related to these structures. Initially, many mathematicians developed theorems for all of these algebraic structures. In the 20th century, many researchers introduced theorems on algebraic structures with Fuzzy and Intuitionistic Fuzzy sets. More recently, in the 21st century, researchers have concentrated on Neutrosophic sets and introduced algebraic structures such as Neutrosophic Groups, Neutrosophic Rings, Neutrosophic Fields, Neutrosophic Vector Spaces, and Neutrosophic Linear Transformations. In the current scenario of relating spaces with structures, we introduce the concept of Neutrosophic topological vector spaces. In this article, the study of Neutrosophic Topological Vector Spaces is initiated. Some basic definitions and properties of classical vector spaces are generalized to the Neutrosophic environment over a Neutrosophic field with continuous functions. Neutrosophic linear transformations and their properties are also included. This article extends earlier work on fuzzy and intuitionistic fuzzy vector spaces introduced in those environments. Even though it is an extension, the Neutrosophic Topological Vector Space will play an important role in Neural Networks, Image Processing, Machine Learning, and Artificial Intelligence algorithms.
IoT networks can be defined as groups of physically connected things and devices that can connect to the Internet and exchange data with one another. As an increasing number of Internet of Things devices connect to their networks, organizations have become more vulnerable to safety issues and attacks. A major drawback of previous research is that it can identify only previously seen device types, while any new device type is treated as anomalous. In this manuscript, IoT device type detection using training deep quantum neural networks optimized with the Chimp optimization algorithm for enhancing IoT security (IOT-DTI-TDQNN-COA-ES) is proposed. The proposed method entails three phases, namely data collection, feature extraction, and detection. In the data collection phase, real network traffic from different IoT device types is collected. In the feature extraction phase, internet traffic features are extracted through the automated building extraction (ABE) method. In the device type identification phase, training deep quantum neural networks (TDQNN) optimized with the Chimp optimization algorithm (COA) is utilized to classify IoT devices as known or unknown. The IoT network is implemented in Python. Simulation results show that the proposed IOT-DTI-TDQNN-COA-ES method attains 26.82% and 23.48% higher accuracy, respectively, when compared with the existing methods.
Heart attacks and anxiety disorders are proving to be two major medical complexities in the present scenario. This study illustrates the role of machine learning and artificial neural networks in successfully predicting these two medical abnormalities. The paper presents the significance of incorporating artificial intelligence-based mechanisms in minimising errors in the diagnostic process. In this context, a brief description of the evolution of AI technology in medical diagnosis is portrayed. The different branches of AI, such as machine learning and artificial neural networks, and their contributions to the analysis and interpretation of heart disease are elaborated. In addition, primary research has been conducted to gain knowledge about human perceptions of this technology and its relevance in the modern healthcare system. Analysis and interpretation are also provided to present a clear description of the experimental results. Findings suggest that Random Forest and Support Vector Machine are the most used algorithms in heart attack risk prediction, while ANN is the least used. However, an ANN developed by integrating an autoencoder and a feature classifier can perform better in predicting anxiety disorder and heart attack.
The increasing need for varied services with different QoS (Quality of Service) requirements motivated the deployment of 5G wireless communication networks. While a single logical network was adequate in theory, practice has given rise to a new technology called network slicing, in which multiple independent logical networks are provided on shared infrastructure, each catering to service-specific requirements. However, a 5G network is far more dynamic and large-scale, so network slicing and resource allocation become significantly harder. This work explores a deep learning approach based on CNNs for better network slicing in 5G networks. Because the CNN algorithm searches for spatial patterns in data, we are able to provision resource allocation and QoS parameters for each slice automatically at the network level. The network slicing framework allows great flexibility in responding to highly dynamic network conditions and service demands, an advantage best leveraged with deep learning. CNN models discover spatial patterns in network data with high accuracy, which can significantly help optimize resource usage and prediction for different network slices. In this way, QoS provisioning is enhanced beyond conventional approaches, resulting in better network performance and higher resource utilization. This paper also discusses the trade-off between the model complexity of CNNs and the corresponding performance improvement in practical 5G deployment scenarios, including computational requirements and scalability. By and large, deep learning can significantly improve the efficiency of 5G networks, making them more accommodating and adaptable to the various services and applications placed at the top of the 5G ecosystem.
Hardware designers need to precisely analyze high-level descriptions for illegal information flows. Language-based information flow analyses can be applied to hardware description languages, but a straight-forward application either conservatively rules out many secure hardware designs, or constrains the designers to work at impractically low levels of abstraction. We demonstrate that choosing the right level of abstraction for the analysis, by working on Finite State Machines instead of the hardware code, allows both precise information flow analysis and high-level programmability.
Information flow is an important security property that must be incorporated from the ground up, including at hardware design time, to provide a formal basis for a system's root of trust. We incorporate insights and techniques from designing information-flow secure programming languages to provide a new perspective on designing secure hardware. We describe a new hardware description language, Caisson, that combines domain-specific abstractions common to hardware design with insights from type-based techniques used in secure programming languages. The proper combination of these elements allows for an expressive, provably-secure HDL that operates at a familiar level of abstraction to the target audience of the language, hardware architects. We have implemented a compiler for Caisson that translates designs into Verilog and then synthesizes the designs using existing tools. As an example of Caisson's usefulness we have addressed an open problem in secure hardware by creating the first-ever provably information-flow secure processor with micro-architectural features including pipelining and cache. We synthesize the secure processor and empirically compare it in terms of chip area, power consumption, and clock frequency with both a standard (insecure) commercial processor and also a processor augmented at the gate level to dynamically track information flow. Our processor is competitive with the insecure processor and significantly better than dynamic tracking.
Rice (Oryza sativa L.) is a major staple food crop globally, but its productivity is severely constrained by insect pests, particularly the yellow stem borer (Scirpophaga incertulas Walker), which causes 20-70% yield losses and threatens rice production. A Kharif 2022 field study at SVPUAT, Meerut, evaluated four botanicals, one biopesticide, and one insecticide against this pest in basmati rice using a randomized block design. Results indicated that cartap hydrochloride was most effective (48.2 q/ha, 1:9.45 cost-benefit ratio), followed by Metarhizium anisopliae (44.4 q/ha) and nimbecidine (42.8 q/ha), which significantly reduced dead hearts and white ears. These bio-rational agents offer sustainable options for integrated pest management, reducing reliance on harmful chemical insecticides.
For many mission-critical tasks, tight guarantees on the flow of information are desirable, for example, when handling important cryptographic keys or sensitive financial data. We present a novel architecture capable of tracking all information flow within the machine, including all explicit data transfers and all implicit flows (those subtly devious flows caused by not performing conditional operations). While the problem is impossible to solve in the general case, we have created a machine that avoids the general-purpose programmability that leads to this impossibility result, yet is still programmable enough to handle a variety of critical operations such as public-key encryption and authentication. Through the application of our novel gate-level information flow tracking method, we show how all flows of information can be precisely tracked. From this foundation, we then describe how a class of architectures can be constructed, from the gates up, to completely capture all information flows and we measure the impact of doing so on the hardware implementation, the ISA, and the programmer.
In 2020, Novartis Pharmaceuticals Corporation and the U.S. Food and Drug Administration (FDA) started a 4-year scientific collaboration to find novel radiogenomics-based prognostic and predictive factors for HR+/HER2- metastatic breast cancer under a Research Collaboration Agreement. This manuscript details the guiding principles and methodology for this study, including a discussion of internal and external clinical, genomics, and imaging datasets, data processing workflows, and machine learning model development strategies. We also prospectively define our success criteria to ensure robust scientific outputs. This publication reflects the views of the authors and should not be construed to represent the FDA's views or policies.
AI can transform healthcare by improving diagnostic accuracy, personalising patient care, and allowing for more efficient operations. As promising as it is, however, current research is limited in many ways, including a lack of large-scale validation, biases in AI, regulatory hurdles, scale, and privacy concerns. We call upon the scientific community to participate in real-world clinical trials to retrain next-generation AI to overcome these challenges, to apply hybrid bias detection algorithms to its outputs, and to build scalable explainable models. This includes implementing AI-driven personalized medicine, predictive analytics, and remote patient monitoring systems to optimize patient outcomes and increase access to care. We enhance data privacy by implementing privacy-preserving methods, including federated learning and homomorphic encryption. In addition, our framework emphasizes regulatory compliance, ensuring that AI healthcare solutions are ethical and legally viable. XAI will promote doctor-AI collaboration by ensuring the transparency of AI models to instill trust in healthcare professionals. This paper proposes an all-in-one advanced solution for scaling AI applications globally in drug discovery, clinical research, and telemedicine. The ultimate goal of this research is to develop new AI-driven systems that are secure, transparent, and personalized, and that will foster a more effective, fair, and scalable healthcare system around the world.
We describe the first hardware implementation of a quantum-secure encryption scheme along with its low-cost power side-channel countermeasures. The encryption uses an implementation-friendly Binary-Ring-Learning-with-Errors (B-RLWE) problem with binary errors that can be efficiently generated in hardware. We demonstrate that a direct implementation of B-RLWE exhibits vulnerability to power side-channel attacks, even to Simple Power Analysis, due to the nature of binary coefficients. We mitigate this vulnerability with a redundant addition and memory update. To further protect against Differential Power Analysis (DPA), we use a B-RLWE-specific opportunity to construct a lightweight yet effective countermeasure based on randomization of intermediate states and masked threshold decoding. On a SAKURA-G FPGA board, we show that our method increases the required number of measurements for DPA attacks by 40× compared to an unprotected design. Our results also quantify the trade-off between side-channel security and the hardware area cost of B-RLWE.
This article examines how Agile methodology and concepts are used in the delivery of artificial intelligence, as well as how Agile has transformed over time. Artificial Intelligence (AI) comprises a wide-ranging set of technologies that promise various benefits for a company in terms of added business value and customer satisfaction. Organizations and companies are increasingly turning to intelligent technology in order to gain more business value following a deluge of data as well as a strong increase in computational capacity. This is encouraging the incorporation of AI into business operations, but the effects of this adoption need to be investigated more thoroughly. The way that enterprises and consumers use information has evolved as a result of the exponential growth in data volume from the internet and smart devices. As a result, companies are starting to use AI technologies to embrace agility. The Agile technique is the capacity to adapt quickly and effectively to external situations in order to prosper in an industry that is constantly developing and unpredictable. This research focuses on the overall effects of AI on enterprises, including future changes in business models as well as research, innovation, and market deployment. Additionally, it examines several ways to incorporate artificial intelligence methods into the Scrum methodology, describes how the iterative development process can be adapted to AI development, and compares Agile technology and artificial intelligence in terms of business management development.
3-D circuit-level integration is a chip fabrication technique in which two or more dies are stacked and combined into a single circuit through the use of vertical electroconductive posts. Since the dies may be manufactured separately, 3-D circuit integration offers the option of enhancing a commodity processor with a variety of security functions. This paper examines the 3-D design approach and provides an analysis concluding that the commodity die system need not be independently trustworthy for the system of joined dies to provide certain trustworthy functions. In addition to describing the range of possible security enhancements (such as cryptographic services), we describe the ways in which multiple-die subsystems can depend on each other, and a set of processing abstractions and general design constraints with examples to address these dependencies.
Healthcare is an essential part of the medical field in the modern digital age. When it comes to illness prediction and other healthcare-related tasks, a healthcare system needs to examine massive amounts of patient data. A smart system would be able to analyse a patient's social life, medical history, and other lifestyle factors to forecast the likelihood of a health problem. The health recommender system (HRS) is rapidly expanding in significance as a healthcare service delivery mechanism. In this setting, health intelligent systems have established themselves as critical components of healthcare delivery decision making. Their primary focus is guaranteeing the high quality, reliability, authenticity, and privacy of information at all times so that it may be used when it is most useful. The health recommender system is crucial for deriving outputs such as proposed diagnoses, health insurance, clinical pathway-based treatment techniques, and alternative medications based on the patient's health profile, as more and more individuals rely on social networks to learn about their health. In order to minimize the time and money spent on healthcare, recent studies have focused on using vast amounts of medical data by merging multimodal data from many sources. When it comes to making decisions about a patient's health, big data analytics with recommender systems play a crucial part in the healthcare industry. This article proposes a LeNet convolutional neural network (CNN) that sheds light on the application of big data analysis to the development of a useful health recommendation system and shows how the healthcare sector can benefit from shifting from a standard model to a more individualized one in the context of telemedicine. The suggested method yields lower error rates than competing methods when both the Root Mean Squared Error (RMSE) and Average Absolute Error (AAE) are taken into account.
Microarchitectural resources such as caches and predictors can be used to leak information across security domains. Significant prior work has demonstrated attacks and defenses for specific types of such microarchitectural side and covert channels. In this paper, we introduce a general mathematical study of microarchitectural channels using information theory. Our conceptual contribution is a simple mathematical abstraction that captures the common characteristics of all microarchitectural channels. We call this the Bucket model and it reveals that microarchitectural channels are fundamentally different from side and covert channels in networking. We then quantify the communication capacity of several microarchitectural covert channels (including channels that rely on performance counters, AES hardware and memory buses) and measure bandwidths across both KVM based heavy-weight virtualization and light-weight operating-system level isolation. We demonstrate channel capacities that are orders of magnitude higher compared to what was previously considered possible. Finally, we introduce a novel way of detecting intelligent adversaries that try to hide while running covert channel eavesdropping attacks. Our method generalizes a prior detection scheme (that modeled static adversaries) by introducing noise that hides the detection process from an intelligent eavesdropper.
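A back-of-the-envelope sketch of one way to quantify a covert channel: model it as a binary symmetric channel with an empirically observed symbol error probability p, compute the capacity C = 1 - H(p) bits per symbol, and scale by the symbol rate. The numbers below are placeholders, not measurements from the paper:

```python
# Hedged sketch: capacity of a covert channel modelled as a binary
# symmetric channel with error probability p.
import math

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def covert_channel_capacity(error_rate, symbols_per_second):
    c_per_symbol = 1.0 - binary_entropy(error_rate)   # bits per symbol
    return c_per_symbol * symbols_per_second           # bits per second

print(f"{covert_channel_capacity(0.05, 10_000):,.0f} bits/s at 5% symbol error")
```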
Essential to adaptive devices is the ability to reconfigure Medium Access Control (MAC) protocols to environment conditions and application requirements. We propose MadMAC, a platform for building reconfigurable MAC protocols on commodity 802.11x hardware. Programming on top of MadWiFi, MadMAC transmits packets at configurable time and frame format. In this paper, we build a TDMA-based MAC protocol using MadMAC, and examine the impact of various design parameters. Experimental results show that MadMAC allows flexible control of protocol settings with small processing overhead. We also observe that the TDMA MAC protocol provides 20% throughput improvement over the CSMA protocol in a simple two-node network.
Web applications with special computation and storage requirements benefit greatly from the cloud computing model. With an extensible and flexible architecture, Wireless Sensor Networks are integrated with the Cloud. It is possible to directly integrate REST-based Web services into other application domains, such as e-health care, smart homes, and even vehicular area networks (VANs). An IP-based WSN testbed has been used to implement a proof-of-concept web service for accessing data from anywhere through a REST API. When monitored data exceeds values or events of interest, users receive notifications by email or tweet.
The internet of things (IoT) concept has recently received a lot of attention in the IT industry. It makes it easier to share data and connect to the internet. Smart surveillance, environmental monitoring, smartphones, and body sensors all make use of the internet of things (IoT). However, serious new threats to consumers' safety and privacy have emerged as a result of the internet of things' rapid growth. Due to the millions of IoT devices that are at risk, an attacker could quickly hack into an application, render it unreliable, and steal valuable user data and information. Both the architecture of the internet of things and the many forms of cyberattacks on these devices are examined in this chapter. It aims to advance IoT research by shedding light on the numerous security issues IoT is currently facing and the available security solutions we can use to make IoT devices safer. To uncover appropriate security solutions for securing IoT devices, the authors analyse them in this research across three categories: secure authentication, secure communications, and application security.
This article describes a new method for constructing and analyzing architectures that can track all information flows within a processor, including explicit, implicit, and timing flows. The key to this approach is a novel gate-level information-flow-tracking method that provides a way to create complex logical structures with well-defined information-flow properties.
Employing Internet of Things (IoT) sensors and modern machine learning algorithms, this study presents an inquiry at the intersection of healthcare and technology. It studies the capability of Long Short-Term Memory (LSTM) and Artificial Neural Network (ANN) models to forecast patient health outcomes by exploiting sensor data such as temperature, blood pressure, ECG, EEG, and pulse rate. The study examines the models' performance using a broad variety of benchmarks. Results demonstrate that, compared to the LSTM model, the ANN model performs better in terms of prediction accuracy, precision, recall, and F1 score. This enhanced accuracy indicates how effectively the ANN model can spot challenging patterns in the dataset. The predicted health probabilities for 15 human participants are also presented in a results table, reaffirming the ANN model's consistent advantage in prediction. This work towards proactive patient care, more intelligent treatment regimens, and improved overall patient outcomes is a significant step towards personalised, data-driven healthcare solutions.
Towards efficient technology management in healthcare sectors, the contribution of cloud computing is essential. In order to reduce human effort, cloud computing and its benefits are used immensely for the growth and development of the entire healthcare sector. By analyzing its various advantages, bringing technological changes and improvements to healthcare becomes easier than before. Today, systemic and smart healthcare approaches promote both cooperation and the exchange of essential medical records and services. Designing proper sensor-based smart healthcare systems with cloud computing technology is an essential concern for physicians. The process includes cost-effective sharing and storing of essential medical records for a safer and more successful generation of sensor-based smart systems. With this innovative use of cloud technology, physicians can easily track any relevant patient information from a large database. In addition, cloud computing techniques also help doctors detect and diagnose medical complications in today's clinical sectors for a sustainable future. A quantitative method has been adopted in this research study through a survey of 60 participants. These survey results are highly useful for analyzing the importance of applying cloud computing as an innovative technology alongside the smart design of healthcare systems. Moreover, a detailed interpretation, analysis, and discussion of all the survey results has been conducted for the research topic.
This chapter explores the evolving landscape of AI-enhanced supply chain management, emphasizing the pivotal role of artificial intelligence (AI) in optimizing supply chain operations. It commences with an introduction that defines AI-enhanced supply chain management and underscores the importance of AI in this context. Subsequently, the chapter delves into the multifaceted role of AI in supply chain management, elucidating its diverse benefits. These advantages encompass enhanced supply chain efficiency, streamlined inventory and transportation management, and improved revenue generation through superior customer service and data-driven demand forecasting.
Hardware resources are abundant; state-of-the-art processors have over one billion transistors. Yet for a variety of reasons, specialized hardware functions for high assurance processing are seldom (i.e., a couple of features per vendor over twenty years) integrated into these commodity processors, despite a small flurry of late (e.g., ARM TrustZone, Intel VT-x/VT-d and AMD-V/AMD-Vi, Intel TXT and AMD SVM, and Intel AES-NI). Furthermore, as chips increase in complexity, trustworthy processing of sensitive information can become increasingly difficult to achieve due to extensive on-chip resource sharing and the lack of corresponding protection mechanisms. In this paper, we introduce a method to enhance the security of commodity integrated circuits, using minor modifications, in conjunction with a separate integrated circuit that can provide monitoring, access control, and other useful security functions. We introduce a new architecture using a separate control plane, stacked using 3D integration, that allows for the function and economics of specialized security mechanisms, not available from a co-processor alone, to be integrated with the underlying commodity computing hardware. We first describe a general methodology to modify the host computation plane by attaching an optional control plane using 3-D integration. In a developed example we show how this approach can increase system trustworthiness, through mitigating the cache-based side channel problem by routing signals from the computation plane through a cache monitor in the 3-D control plane. We show that the overhead of our example application, in terms of area, delay and performance impact, is negligible.
In the ever-evolving field of clinical diagnostics, the early diagnosis of lung cancer remains a key challenge. This study proposes a deep learning-based approach, specifically using Generative Adversarial Networks (GANs), intended to modernise the identification and localisation of pulmonary malignancies in medical imaging. Our model, trained on a varied dataset, demonstrated a promising accuracy of 70% on the test set, suggesting its ability to differentiate between malignant and non-malignant cases in medical images. While the results suggest meaningful progress in lung cancer detection, they also highlight areas that require further refinement. Balancing technological capability against clinical significance, as reflected in criteria such as sensitivity and specificity, remains a focus for future work. The implications of this research are substantial. Beyond the immediate findings, the study emphasises the transformational potential of incorporating sophisticated AI approaches into healthcare. As the clinical community grapples with the difficulties of early cancer identification, tools like the one presented in this study could usher in a new age of diagnostics marked by accuracy, efficiency, and patient-centricity. In conclusion, this study not only adds a fresh diagnostic tool to the field but also paves the way for future innovation at the confluence of AI and healthcare.
Privacy and integrity are important security concerns. These concerns are addressed by controlling information flow, i.e., restricting how information can flow through a system. Most proposed systems that restrict information flow make the implicit assumption that the hardware used by the system is fully "correct" and that the hardware's instruction set accurately describes its behavior in all circumstances. The truth is more complicated: modern hardware designs defy complete verification; many aspects of the timing and ordering of events are left totally unspecified; and implementation bugs present themselves with surprising frequency. In this work we describe Sapper, a novel hardware description language for designing security-critical hardware components. Sapper seeks to address these problems by using static analysis at compile-time to automatically insert dynamic checks in the resulting hardware that provably enforce a given information flow policy at execution time. We present Sapper's design and formal semantics along with a proof sketch of its security. In addition, we have implemented a compiler for Sapper and used it to create a non-trivial secure embedded processor with many modern microarchitectural features. We empirically evaluate the resulting hardware's area and energy overhead and compare them with alternative designs.
This study proposes a fog-based data exchange system for IoT devices that utilizes a blockchain approach. Fog computing is a common paradigm in distributed computing that places processing and storage between the clients and the cloud environment. Here, the IoT devices use a secure method to exchange data. IoT devices and systems, for instance in a fog environment, are one of the significant data sources. The complexity and interconnectedness of such IoT and fog environments, however, can lead to security weaknesses (for example, due to implementation mistakes or flaws in the underlying devices or systems), which can be exploited to undermine the integrity of the data. Because of the trustless nature of blockchain security, its use across a large number of industries is expanding rapidly. Thus, this paper presents a secure blockchain-based scheme to guarantee the authenticity of nodes and data and the security of data transmission in the fog environment.
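The tamper-evidence property that motivates the blockchain layer can be illustrated with a short sketch: each block of IoT readings stored by a fog node carries the hash of the previous block, so modifying any stored reading breaks verification of every later block. The block structure and field names below are illustrative assumptions, not the paper's protocol.

```python
# Minimal sketch of the tamper-evidence idea behind a blockchain-backed fog
# node: each block of IoT readings carries the hash of the previous block,
# so altering any stored reading invalidates every later hash.
# Illustrative only; field names and block contents are assumptions.
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, readings: list) -> dict:
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "timestamp": time.time(),
             "readings": readings, "prev_hash": prev}
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)
    return block

def verify(chain: list) -> bool:
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, [{"device": "sensor-1", "temp": 22.4}])
append_block(chain, [{"device": "sensor-2", "temp": 23.1}])
print(verify(chain))                      # True
chain[0]["readings"][0]["temp"] = 99.9    # tamper with stored data
print(verify(chain))                      # False
```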
This article presents a distinctive framework that utilises blockchain technology to enable smart contracts, revolutionising the administration of electronic medical health records (EMHRs). The framework employs the Ethereum blockchain and contemporary user interfaces to tackle significant concerns in healthcare data management, ensuring enhanced security, transparency, and user engagement. The system uses smart contracts to automate data access and permission management, reducing administrative complexity and enhancing data integrity. Built on MetaMask and IPFS, user-friendly interfaces give physicians and patients quick access to EMHRs, promoting better patient care and medical decision-making. The framework is assessed through a performance evaluation that considers metrics such as transaction processing times, data retrieval times, system uptime, data storage efficiency, and user adoption rates. The results demonstrate the system's responsiveness and robustness, highlighting its capacity to facilitate the secure exchange and management of healthcare data. This study establishes the basis for a future in which medical information is effectively exchanged and controlled, resulting in more accurate diagnoses, more efficient patient care, and adherence to regulatory standards. As the healthcare industry continues to progress, this technology plays an important part in advancing the development of safe and personalised EMHR management.
The sustainability of the electrical industries and their persistent production runs depend on their suppliers. Logistic supplier selection is indispensable for electrical product manufacturers, and identifying feasible logistic suppliers is essential before applying ranking methods to determine the optimal suppliers. This paper proposes a hybrid decision-making approach that integrates the fuzzy c-means clustering (FCM) algorithm with the multi-criteria decision-making method MAIRCA (Multi-Attributive Ideal-Real Comparative Analysis). The hybrid model has two phases: in the first phase, the machine learning algorithm classifies the logistic suppliers of electrical products based on their feasibility; in the second phase, the MAIRCA method ranks the suppliers. The efficacy of the hybrid method is tested by comparing the ranking outcomes for the logistic supplier alternatives with and without fuzzy c-means clustering; the integrated MCDM method with fuzzy c-means clustering proves to be more time- and cost-efficient. The results of the proposed hybrid method are convincing, and the efficacy of the method is measured in terms of time and cost efficiency.
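A minimal sketch of the second-phase MAIRCA ranking is given below, assuming a small illustrative decision matrix of suppliers already screened as feasible (for example by the first-phase fuzzy c-means step); the criteria, weights, and benefit/cost split are invented for illustration rather than taken from the paper.

```python
# Minimal sketch of the MAIRCA ranking step on an illustrative decision matrix
# of logistic suppliers assumed already screened as feasible (e.g., by fuzzy
# c-means in the first phase). Criteria, weights, and the benefit/cost split
# are assumptions, not the paper's data.
import numpy as np

X = np.array([            # rows: suppliers A-D; cols: cost, delivery time, quality, capacity
    [820, 5, 0.92, 1200],
    [760, 7, 0.88, 1500],
    [900, 4, 0.95, 1100],
    [840, 6, 0.90, 1400],
], dtype=float)
weights = np.array([0.35, 0.25, 0.25, 0.15])
benefit = np.array([False, False, True, True])   # cost/time minimised, quality/capacity maximised

m = X.shape[0]
tp = np.tile(weights / m, (m, 1))                # theoretical ratings: equal a-priori preference

norm = np.empty_like(X)
for j in range(X.shape[1]):
    col = X[:, j]
    if benefit[j]:
        norm[:, j] = (col - col.min()) / (col.max() - col.min())
    else:
        norm[:, j] = (col - col.max()) / (col.min() - col.max())

tr = tp * norm                                   # real ratings
gap = tp - tr                                    # gap matrix
total_gap = gap.sum(axis=1)                      # smaller total gap = better supplier

for rank, idx in enumerate(np.argsort(total_gap), start=1):
    print(f"rank {rank}: supplier {chr(65 + int(idx))} (gap {total_gap[idx]:.4f})")
```

Pre-filtering with clustering shrinks the matrix that MAIRCA has to rank, which is the source of the time and cost savings reported for the hybrid method.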
Micro-architecture units like caches are notorious for leaking secrets across security domains. An attacker program can contend for on-chip state or bandwidth, and can even use speculative execution in processors to drive this contention; protecting against all contention-driven attacks is exceptionally challenging. Prior works can mitigate contention channels through caches by partitioning the larger, lower-level caches or by looking for anomalous performance or contention behavior. Neither scales to the large number of fine-grained domains required by browsers and web services that place many domains within the same address space.
Over the past few years, the healthcare industry has seen a dramatic increase in the use of intelligent automation enabled by AI technology. These developments are made to improve the standard of medical decision making and the standard of treatment given to patients. Fuzzy boundaries, shifting sizes, and aberrations like hair or ruler lines all create difficulties for automatic detection of skin lesions in dermoscopic images, slowing down the otherwise efficient process of diagnosing skin cancer. These difficulties may, however, be overcome by employing image processing techniques. To address these issues, the authors of this paper provide a novel IMLT-DL model for intelligent dermoscopic image processing. Multi-level thresholding and deep learning are brought together in this model. Top hat filtering and inpainting have been incorporated into IMLT-DL for use in image preprocessing. In addition, Mayfly Optimization has been used in tandem with multilevel Kapur's thresholding to identify specific regions of interest. For further analysis, it uses an Inception v3-based feature extractor, and for classification, it makes use of gradient boosted trees (GBTs). On the ISIC dataset, this model was shown to outperform state-of-the-art alternatives by a margin of 0.992% over the trial iterations. These advances are a major step forward in the quest for faster and more accurate skin lesion detection.
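The final two stages of the pipeline described above, a frozen Inception v3 feature extractor feeding gradient boosted trees, can be sketched as follows; the preprocessing stages (top hat filtering, inpainting, Mayfly-optimised Kapur thresholding) are assumed to have already produced the lesion images, and the data here are synthetic placeholders.

```python
# Minimal sketch of the final two stages: Inception v3 as a frozen feature
# extractor followed by a gradient-boosted tree classifier. Image shapes,
# labels, and the train/test split are placeholder assumptions.
import numpy as np
import tensorflow as tf
from sklearn.ensemble import GradientBoostingClassifier

# Frozen ImageNet-pretrained Inception v3, global-average-pooled to a vector.
backbone = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(299, 299, 3))
backbone.trainable = False

def extract_features(images: np.ndarray) -> np.ndarray:
    x = tf.keras.applications.inception_v3.preprocess_input(images.astype("float32"))
    return backbone.predict(x, verbose=0)

# Synthetic stand-in for preprocessed dermoscopic images and labels.
rng = np.random.default_rng(1)
images = rng.uniform(0, 255, size=(64, 299, 299, 3))
labels = rng.integers(0, 2, size=64)          # 0 = benign, 1 = malignant

features = extract_features(images)           # shape: (64, 2048)
clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)
clf.fit(features[:48], labels[:48])
print("held-out accuracy:", clf.score(features[48:], labels[48:]))
```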
This study examines the interaction and correlation of Cyber-Physical Systems (CPS) with an emphasis on Smart Grids (SGs), with the aim of enhancing energy utilization and distribution systems. It aligns with the design of adaptive, real-time operations that respond continuously and dynamically to fluctuations in energy demand, increasing grid reliability and decreasing operational costs through optimization. New optimization algorithms, machine learning methods, and scalable remediation measures are applied as novel solutions to generic challenges of the grid, such as communication burdens and cyber-security threats. This work both investigates emerging technologies such as LoRa (Long Range, low power) and autonomous systems, and aims to lead the development of intelligent, self-optimising smart grid infrastructures. Additionally, it addresses societal and regulatory challenges to decarbonization, ensuring that the solution is not only technically feasible but also economically and politically viable. Together, this effort will contribute to green, secure, and renewable-integrated smart grids in the future.
High-assurance systems found in safety-critical infrastructures are facing steadily increasing cyber threats. These critical systems require rigorous guarantees in information flow security to prevent confidential information from leaking to an unclassified domain and the root of trust from being violated by an untrusted party. To enforce bit-tight information flow control, gate-level information flow tracking (GLIFT) has recently been proposed to precisely measure and manage all digital information flows in the underlying hardware, including implicit flows through hardware-specific timing channels. However, existing work in this realm either restricts to two-level security labels or essentially targets two-input primitive gates and several simple multilevel security lattices. This article provides a general way to expand the GLIFT method for multilevel security. Specifically, it formalizes tracking logic for an arbitrary Boolean gate under finite security lattices, presents a precise tracking logic generation method for eliminating false positives in GLIFT logic created in a constructive manner, and illustrates application scenarios of GLIFT for enforcing multilevel information flow security. Experimental results show various trade-offs in precision and performance of GLIFT logic created using different methods. It also reveals the area and performance overheads that should be expected when expanding GLIFT for multilevel security.
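The idea of precise, lattice-aware tracking logic can be illustrated behaviourally for a single two-input AND gate: the output label needs to cover only the inputs that can actually influence the output given the other input's value. The Python model below is an illustrative encoding under a small totally ordered lattice, not the article's hardware tracking logic or generation method.

```python
# Behavioural sketch of precise gate-level label tracking for a two-input AND
# under a small totally ordered security lattice (LOW < MID < HIGH).
# Illustrative Python only; the lattice and encoding are assumptions.
from enum import IntEnum

class Label(IntEnum):
    LOW = 0
    MID = 1
    HIGH = 2

def join(a: Label, b: Label) -> Label:   # least upper bound
    return max(a, b)

def meet(a: Label, b: Label) -> Label:   # greatest lower bound
    return min(a, b)

def and_track(a: int, la: Label, b: int, lb: Label) -> tuple[int, Label]:
    """Precise tracking: the output label covers only the inputs that can
    actually influence the output, given the other input's value."""
    out = a & b
    if a == 1 and b == 1:
        lo = join(la, lb)      # both inputs influence the result
    elif a == 0 and b == 0:
        lo = meet(la, lb)      # either input alone already forces out = 0
    elif a == 0:
        lo = la                # a = 0 forces the output; b cannot affect it
    else:
        lo = lb                # b = 0 forces the output; a cannot affect it
    return out, lo

# A HIGH-labelled input cannot leak through the gate when the LOW input is 0:
print(and_track(0, Label.LOW, 1, Label.HIGH))   # (0, Label.LOW)
print(and_track(1, Label.LOW, 1, Label.HIGH))   # (1, Label.HIGH)
```

A conservative tracker would always join the input labels; the case analysis above is what removes the false positives that precise tracking-logic generation targets.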
An analysis of regulatory submissions of drug and biological products to the US Food and Drug Administration from 2016 to 2021 demonstrated an increasing number of submissions that included artificial intelligence/machine learning (AI/ML). AI/ML was used to perform a variety of tasks, such as informing drug discovery/repurposing, enhancing clinical trial design elements, dose optimization, enhancing adherence to drug regimens, end-point/biomarker assessment, and postmarketing surveillance. AI/ML is being increasingly explored to facilitate drug development. Over the past decade, there has been a rapid expansion of AI/ML applications in biomedical research and therapeutic development. In 2019, Liu et al. provided an overview of how AI/ML was used to support drug development and regulatory submissions to the US Food and Drug Administration (FDA). The authors envisioned that AI/ML would play an increasingly important role in drug development [1]. That prediction has now been confirmed by this landscape analysis, based on drug and biologic regulatory submissions to the FDA from 2016 to 2021. The analysis was performed by searching for submissions containing the key terms "machine learning" or "artificial intelligence" in Center for Drug Evaluation and Research (CDER) internal databases covering Investigational New Drug applications, New Drug Applications, Abbreviated New Drug Applications, and Biologic License Applications, as well as submissions for Critical Path Innovation Meetings and the Drug Development Tools Program. We evaluated all data from 2016 to 2021. Figure 1a demonstrates that submissions with AI/ML components have increased rapidly in the past few years. In 2016 and 2017, we identified only one such submission each year. From 2017 to 2020, the number of submissions increased by approximately twofold to threefold yearly. In 2021, the number of submissions increased sharply to 132 (approximately 10-fold compared with 2020). This trend of increasing submissions with AI/ML components is consistent with our expectation based on the growing collaboration between the pharmaceutical and technology industries. Figure 1b illustrates the distribution of these submissions by therapeutic area: oncology, psychiatry, gastroenterology, and neurology were the disciplines with the most AI/ML-related submissions from 2016 to 2021. Figure 1c summarizes the distribution of these submissions by stage of the therapeutic development life cycle. In these submissions, most AI/ML applications occur at the clinical drug development stage, but they also appear at the drug discovery, preclinical development, and postmarketing stages. It is important to note that the frequency with which AI/ML is mentioned in regulatory submissions to the FDA likely represents only a fraction of its increasingly widespread use in drug discovery. As demonstrated by this analysis, AI/ML is being utilized across many aspects of the drug development life cycle. AI/ML holds great promise to improve both the efficiency of drug development and the understanding of the efficacy and safety of treatments. The trend of AI/ML applications in drug development has been increasing in recent years, and the authors anticipate that it will only grow over time. Both opportunities and challenges lie ahead for the potential uses of AI/ML, and pharmaceutical and technology companies are actively investing in this area.
Moreover, academic researchers are continuing to investigate current and future applications. The FDA has also been preparing to manage and evaluate AI/ML uses by engaging with a broad set of stakeholders on these issues and by building its capacity in these scientific fields, in order to promote responsible innovation in this area. In 2021, the FDA and other regulatory agencies jointly identified 10 guiding principles that can inform the development of Good Machine Learning Practice to help promote safe, effective, and high-quality medical devices that use AI/ML [10]. Although these Good Machine Learning Practice guiding principles were developed for medical device development, many of them (e.g., multi-disciplinary collaboration; data quality assurance, data management, and robust cybersecurity practices; representativeness of study participants and data sets; independence of the training and testing data sets) are also applicable to drug development. Liu et al. discussed some expectations for the application of AI/ML in drug development (e.g., fit-for-purpose and risk-based expectations, proper validation, generalizability, and explainability) [1]. It is important to note that the regulatory considerations for the application of AI/ML in drug development are evolving and will require input from stakeholders across disciplines. Effective communication and active collaboration will serve an increasingly important role in fostering innovation, advancing regulatory science, and promoting and protecting public health in the United States and worldwide.
Data centers in the smart grid automate the business of electricity delivery through the integration of electronic technology with the electrical infrastructure. Because the smart grid relies on the transfer of sensitive data, keeping that data secure from unauthorized access and other cyber threats is extremely difficult. The proposed system therefore explores a new method that combines a CNN and an RNN using LSTM networks for smart grid security and communication. The hybrid CNN, RNN, and LSTM models proposed in this manuscript address data privacy and security across multiple levels. The CNN portion captures the spatial features of the data passed to it, which helps determine whether the relayed information is suspicious. The RNN part of the model is trained to recognize temporal dependencies within the data, providing a system capable of recognizing sequences of actions [6] that may indicate potential threats or breaches. The LSTM module's ability to retain recent information helps it learn effectively from past data before making predictions on future patterns, empowering the system to anticipate and address potential security threats proactively. By using the hybrid of CNN, RNN, and LSTM, the smart grid can learn and adapt over time, making it more effective at detecting and preventing attacks, which in turn reduces the likelihood of data breaches and protects the privacy of data for both consumers and service providers. In addition, the integration of secure message channels with the DNN-based protection system guarantees the security of sensitive data while providing end-to-end encryption for users.
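A minimal sketch of such a hybrid architecture is shown below, combining a 1-D convolutional stage for local patterns with an LSTM stage for temporal dependencies over windows of grid traffic features; the window length, feature count, and layer sizes are assumptions rather than the manuscript's configuration.

```python
# Minimal sketch of a hybrid CNN + LSTM classifier over windows of smart-grid
# traffic features, in the spirit of the architecture described above.
# Window length, feature count, and layer sizes are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

timesteps, n_features = 60, 8            # e.g., 60 samples of 8 measurements
model = models.Sequential([
    layers.Input(shape=(timesteps, n_features)),
    # CNN stage: local/spatial patterns within each window
    layers.Conv1D(32, kernel_size=3, activation="relu", padding="same"),
    layers.MaxPooling1D(pool_size=2),
    # Recurrent stage: temporal dependencies across the window
    layers.LSTM(64),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # 1 = suspicious traffic
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Synthetic stand-in data just to exercise the pipeline.
rng = np.random.default_rng(2)
X = rng.normal(size=(256, timesteps, n_features)).astype("float32")
y = rng.integers(0, 2, size=256)
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.predict(X[:4], verbose=0).ravel())
```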
Kidney stone disease is a common urological illness that affects millions of people worldwide. The early and accurate identification of kidney stones is critical for timely intervention and effective management of this illness. Deep learning approaches have shown promising results in a variety of medical image processing tasks in recent years. This paper describes a novel deep learning-based approach for automatic kidney stone diagnosis utilising medical imaging data. A convolutional neural network (CNN) architecture is used in the suggested method to identify and classify kidney stones in medical images. A large collection of kidney stone images is first collected and preprocessed to ensure homogeneity and improve feature extraction. To optimise its performance, the CNN model is trained on this dataset using a large number of annotated samples. The trained CNN model distinguishes kidney stone presence from healthy regions in medical images with good accuracy and robustness. It detects kidney stones of various sizes and shapes while overcoming hurdles posed by different stone compositions and variations in human anatomy. Furthermore, the deep learning model has fast processing speeds, making it suited for real-time clinical applications. Extensive validation and testing on an independent dataset are performed to evaluate the model's performance. The results show that the proposed deep learning method is effective in autonomous kidney stone identification, with sensitivity, specificity, and accuracy metrics comparable to or exceeding those of existing classical methods.
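A compact sketch of the kind of CNN classifier described above is given below, trained on synthetic placeholder slices; the image size, network depth, and labels are assumptions and do not reproduce the paper's architecture or dataset.

```python
# Minimal sketch of a small 2-D CNN for binary kidney-stone classification on
# preprocessed grayscale slices. Image size, depth, and labels are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),     # 1 = stone present
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])

# Synthetic placeholder images; in practice these would be normalised scans.
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(128, 128, 128, 1)).astype("float32")
y = rng.integers(0, 2, size=128)
model.fit(X, y, epochs=3, batch_size=16, validation_split=0.2, verbose=0)
```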
Multiple virtual machines (VMs) are typically co-scheduled on cloud servers. Each VM experiences different latencies when accessing shared resources, based on contention from other VMs. This introduces timing channels between VMs that can be exploited to launch attacks by an untrusted VM. This paper focuses on trying to eliminate the timing channel in the shared memory system. Unlike prior work that implements temporal partitioning, this paper proposes and evaluates bandwidth reservation. We show that while temporal partitioning can degrade performance by 61% in an 8-core platform, bandwidth reservation only degrades performance by under 1% on average.
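The intuition behind closing the memory timing channel can be shown with a toy discrete-time arbiter model: under an unprotected shared first-come-first-served policy, VM A's queueing delay varies with VM B's load, whereas under a simple per-cycle bandwidth reservation it does not. The queue model, slot policy, and workloads below are simplified assumptions for illustration only, not the paper's evaluation setup.

```python
# Toy discrete-time sketch of why per-cycle bandwidth reservation removes the
# memory-system timing channel: VM A's observed queueing delay stops depending
# on VM B's load. Queue model, slot policy, and workloads are assumptions.
from collections import deque

def simulate(scheme: str, load_b: int, cycles: int = 2000) -> float:
    """Average queueing delay (cycles) seen by VM A's memory requests."""
    qa, qb, delays = deque(), deque(), []
    for t in range(cycles):
        if t % 2 == 0:
            qa.append(t)                 # VM A issues a request every 2 cycles
        for _ in range(load_b):
            qb.append(t)                 # VM B's (attacker-controlled) load
        if scheme == "shared":           # unprotected FCFS over both queues
            if qa and (not qb or qa[0] <= qb[0]):
                delays.append(t - qa.popleft())
            elif qb:
                qb.popleft()
        else:                            # bandwidth reservation: A owns even cycles
            if t % 2 == 0:
                if qa:
                    delays.append(t - qa.popleft())
            elif qb:
                qb.popleft()
    return sum(delays) / max(len(delays), 1)

for load_b in (0, 1, 4):
    print(f"VM B load {load_b}: shared avg delay "
          f"{simulate('shared', load_b):6.1f}, "
          f"reserved avg delay {simulate('reserved', load_b):4.1f}")
```

In this toy model the reserved scheme's delay is identical across VM B loads, which is the property that prevents an untrusted co-scheduled VM from modulating or observing latency as a covert channel.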
Seeking collaborators for developing and validating an AI-driven Cyber Maturity Index (CMI) framework tailored to higher education and industry.