
Dr. Pankaj Dadheech is currently working as a Professor in the Department of Computer Science & Engineering (NBA Accredited), Swami Keshvanand Institute of Technology, Management & Gramothan (SKIT), Jaipur, Rajasthan, India (Accredited by NAAC A++ Grade). He has more than 20 years of teaching experience. He has published 120+ papers in reputed, highly indexed journals and holds 8 international patents from Australia, Germany, South Africa and the USA, 26 patents & 2 copyrights from India, 5 authored books, 4 edited books & several book chapters. He has received best paper awards and several other recognitions. He is a Senior Member of IEEE (#99561311) and a Member of professional organizations including the IEEE Computer Society, ACM, CSI, IAENG & ISTE. He is supervising/co-supervising a number of PhD scholars and research associates. He has chaired technical sessions at various international conferences and contributed as a resource person in various FDPs, workshops, STTPs, and conferences. He also serves as a Guest Editor for various reputed journals and conference proceedings and as a Bentham Ambassador for Bentham Science Publishers.
High Performance Computing, Cloud Computing, Information Security, Big Data Analytics, Artificial Intelligence, Intellectual Property Rights, and Internet of Things.
Image processing has become increasingly crucial in medical applications due to its ability to collect and evaluate data from medical images. This book chapter provides an overview of various image processing techniques used for medical applications, including deep learning algorithms, segmentation techniques, and combinations of both. Additionally, the authors discuss other studies that analysed X-rays and used image processing to identify cancer, brain tumours, and other disorders. The results of the study demonstrate how image processing techniques have the potential to significantly improve the speed and accuracy of disease detection, facilitating early diagnosis and treatment. The planning of therapies and the accuracy of diagnoses can both be enhanced by the use of image processing tools. Healthcare workers' ability to recognise and manage a variety of medical conditions will undoubtedly increase.
Objective: Breast cancer remains a leading cause of mortality among women worldwide. Early detection and accurate prognosis are crucial for improving patient outcomes. This study presents a novel approach that integrates feature elimination techniques with machine learning to enhance the accuracy of breast cancer prognosis. The approach addresses class imbalance in the dataset to improve sensitivity, particularly in minimizing false negatives. Additionally, it emphasizes the use of machine learning algorithms, which are considered more transparent and computationally efficient compared to deep learning methods. Method: The Wisconsin Breast Cancer (WBC) dataset was used to develop an interpretable machine learning model. Recursive Feature Elimination (RFE) identified key features, while Principal Component Analysis (PCA) reduced dimensionality. The optimized feature set was trained using XGBoost. To address class imbalance, class weighting and decision threshold adjustments were applied to improve sensitivity and minimize false negatives. Results: The model achieved high performance: accuracy of 99.12%, precision of 100%, recall of 97.69%, and an F1 score of 98.9%. Feature selection and class imbalance handling enhanced sensitivity and computational efficiency. The model's interpretable results highlight its suitability for clinical applications. Conclusions: This study presents an interpretable machine learning model integrating RFE, PCA, and XGBoost to enhance breast cancer prognosis. High accuracy and sensitivity, coupled with explainability, make it a promising tool for clinical decision-making in early detection and treatment planning.
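A minimal sketch of the pipeline described above (RFE, then PCA, then XGBoost with class weighting and a lowered decision threshold). It uses the scikit-learn breast cancer dataset as a stand-in for the WBC data; the feature counts, threshold, and hyperparameters are illustrative assumptions, not the study's actual settings.

```python
# Sketch: RFE feature selection -> PCA -> class-weighted XGBoost with a
# tuned decision threshold to favour sensitivity. Dataset is a stand-in.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import RFE
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

# Recursive Feature Elimination keeps the most informative raw features.
rfe = RFE(LogisticRegression(max_iter=5000), n_features_to_select=15).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = X_tr[:, rfe.support_], X_te[:, rfe.support_]

# PCA compresses the selected features further.
pca = PCA(n_components=10).fit(X_tr_sel)
X_tr_pca, X_te_pca = pca.transform(X_tr_sel), pca.transform(X_te_sel)

# Class weighting via scale_pos_weight counters class imbalance.
pos_weight = (y_tr == 0).sum() / (y_tr == 1).sum()
clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                    scale_pos_weight=pos_weight, eval_metric="logloss")
clf.fit(X_tr_pca, y_tr)

# Lowering the decision threshold trades precision for fewer false negatives.
threshold = 0.4
y_pred = (clf.predict_proba(X_te_pca)[:, 1] >= threshold).astype(int)
print(accuracy_score(y_te, y_pred), precision_score(y_te, y_pred),
      recall_score(y_te, y_pred), f1_score(y_te, y_pred))
```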
Abstract Watermarking has frequently been used to authenticate the accuracy and security of image and video files. In the world of computer technology, several watermarking strategies have been developed during the past 20 years. One application embeds an identifying image, not necessarily concealed, in such a way that it cannot be removed without degrading the content. A monitoring code can also be used to deter unauthorized recording equipment. Another application is watermark-based copyright control, which aims to stop images being copied from their creator unlawfully. A watermark is a promising option for the copyright protection of multimedia files, as the embedded message remains part of the content. Due to fidelity constraints, a watermark can only be embedded in a small portion of the multimedia data. There is no guarantee that automated watermarking technologies will fulfil every purpose that copyright security operations demand of the data they obtain, but relevant scenarios can be handled more predictably with automated copyright marking technologies. No ideal scheme can embed a digital watermark without limits, and no watermark can carry full details of the entire object. In this work, a modern watermarking technique injects two or more messages or photographs into a single picture for protection purposes and repeats a similar procedure over N frames for authentication in video.
The field of brain-computer interface (BCI) technology is developing quickly, and its fast growth could transform e-learning by bringing more engaging and intuitive educational experiences to the platform. To enhance educational outcomes, the integration of BCI systems into virtual learning environments is examined in this chapter. Brain-computer interfaces (BCIs) can monitor engagement levels, adjust learning materials dynamically to maximize comprehension and retention, and customize educational content to each individual's cognitive state by using real-time neural data. The study includes a thorough analysis of the impact that BCI technology has on students' academic performance, motivation, and engagement in a range of learning situations. The goal of this research is to provide effective methods for integrating BCI into e-learning platforms, resolve any issues, and assess the general effectiveness of this strategy. User input and experimental trials will be used to achieve this. The results suggest that BCI-enhanced e-learning can improve engagement, motivation, and learning outcomes.
The discipline of computer vision is becoming more popular as a research subject. In a surveillance-based computer vision application, object identification and tracking are the core procedures. They consist of segmenting and tracking an object of interest from a sequence of video frames, and both are performed using computer vision algorithms. In situations where the camera is fixed and the backdrop remains constant, it is possible to detect objects against the background using more straightforward methods. Aerial surveillance, on the other hand, is characterized by the fact that the target, the background, and the video camera are all constantly moving. It is feasible to recognize targets in the video data captured by an unmanned aerial vehicle (UAV) using the mean shift tracking technique in combination with a deep convolutional neural network (DCNN). It is critical that the target detection algorithm maintains its accuracy even in the presence of changing lighting conditions, dynamic clutter, and changes in the scene environment. Even though there are several approaches for identifying moving objects in video, background subtraction is the one most often used. A mean shift tracking technique built on an adaptive background model is presented and implemented in this work. In this situation, the background model is provided and updated frame by frame, and therefore the problem of occlusion is fully eliminated. The target tracking algorithm is fed the same video stream used by the target identification algorithm. The methods are simulated in MATLAB, and their performance is evaluated using image-based and video-based metrics to establish how well they operate in the real world.
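A minimal OpenCV sketch of the two stages named above, under the assumption that a simple foreground blob from an adaptive background model stands in for the DCNN detector; the video filename is hypothetical and the chapter's MATLAB implementation is not reproduced here.

```python
# Sketch: adaptive background subtraction for detection, then mean-shift
# tracking of the detected window. The DCNN detector is replaced by the
# largest foreground contour for brevity.
import cv2

cap = cv2.VideoCapture("uav_clip.mp4")            # hypothetical input video
backsub = cv2.createBackgroundSubtractorMOG2()    # adaptive background model
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window, roi_hist = None, None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    if track_window is None:
        # Detection: largest foreground blob initialises the track window.
        mask = backsub.apply(frame)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
            track_window = (x, y, w, h)
            roi = hsv[y:y + h, x:x + w]
            roi_hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
            cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
    else:
        # Tracking: back-project the target histogram and run mean shift.
        dst = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        _, track_window = cv2.meanShift(dst, track_window, term_crit)
        x, y, w, h = track_window
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```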
This paper presents a secure and lightweight image encryption framework combining Hyperelliptic Curve Cryptography (HECC) and Discrete Wavelet Transform (DWT) for optical images. The proposed method begins with HECC-based key generation, ensuring efficient encryption using compact key sizes. The image undergoes ASCII conversion, followed by encryption using the recipient’s public key. To enhance security, DWT is applied, embedding encrypted data in the frequency domain, followed by inverse DWT reconstruction. SHA-2 hashing ensures integrity verification, preventing unauthorized modifications. Decryption is performed using HECC, restoring the original image upon hash verification. A Boolean search mechanism also enables efficient retrieval of encrypted images, while proxy re-encryption facilitates dynamic key updates. The proposed system ensures lightweight, secure, and efficient optical image encryption.
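Because hyperelliptic-curve primitives are not available in standard Python libraries, the sketch below covers only the DWT-embedding and SHA-2 integrity stages of the scheme described above; the HECC ciphertext is a placeholder byte string and the embedding amplitude is an assumption.

```python
# Sketch of the DWT embedding and SHA-2 integrity check only; HECC
# encryption/decryption is represented by a placeholder.
import hashlib
import numpy as np
import pywt

cipher_bytes = b"...HECC ciphertext of the ASCII-converted image..."  # placeholder
digest = hashlib.sha256(cipher_bytes).hexdigest()    # SHA-2 integrity tag

cover = np.random.rand(256, 256)                     # stand-in optical image
LL, (LH, HL, HH) = pywt.dwt2(cover, "haar")          # forward DWT

# Embed the ciphertext bits (low amplitude) into the high-frequency HH sub-band.
bits = np.unpackbits(np.frombuffer(cipher_bytes, dtype=np.uint8))
flat = HH.flatten()
flat[: bits.size] += 0.01 * bits
HH_marked = flat.reshape(HH.shape)

stego = pywt.idwt2((LL, (LH, HL, HH_marked)), "haar")  # inverse DWT reconstruction

# On the receiver side, the extracted ciphertext would be re-hashed and
# compared with `digest` before HECC decryption is attempted.
```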
Initiating the study of digital twin technology, this chapter notes that the planning and implementation of the 6G network necessitate real-time interaction and alignment between physical systems and their virtual representations. From simple parts to intricate systems, the digital twin's flexibility and agility improve design and operational procedure efficiency in a predictable manner. It can validate policies, give a virtual representation of a physical entity, or evaluate how a system or entity behaves in a real-time setting. It evaluates the effectiveness and suitability of QoS regulations in 6G communication, in addition to the creation and management of novel services. Physical system maintenance costs and security threats can also be reduced, but doing so requires standardization efforts that open the door to previously unheard-of difficulties with fault tolerance, efficiency, accuracy, and security. The fundamental needs of a digital twin focused on 6G communication are covered in this chapter, including decoupling, scalable intelligent analytics, and blockchain-based data management.
This chapter describes the different newly adopted 6G technologies, along with any security risks and potential fixes. The primary 6G technologies that will open up a whole new universe of possibilities are AI/ML, DLT, quantum computing, VLC, and THz communication. The emergence of new generation information and communication technologies, including blockchain technology, virtual reality/augmented reality/extended reality, internet of things, and artificial intelligence, gave rise to the 6G communication network. The intelligence process of communication development, which includes holographic, pervasive, deep, and intelligent connectivity, is significantly impacted by the development of 6G.
As the number of connected devices grows, the internet of things (IoT) poses new security challenges for network activity monitoring. Due to a lack of security understanding on the side of device producers and end users, the majority of internet of things devices are vulnerable. As a result, virus writers have found them to be great targets for converting them into bots and using them to perform large-scale attacks against a variety of targets. The authors provide deep learning models based on deep reinforcement learning for cyber security in IoT networks in this chapter. The IoT is a potential network that connects both living and nonliving things all around the world. As the popularity of IoT grows, cyber security remains a shortcoming, rendering it exposed to a variety of cyber-attacks. It should be emphasized, however, that while there are numerous DL algorithms presently, the scientific literature does not yet include a comprehensive catalogue of them. This chapter provides a complete list of DL algorithms as well as their many application areas.
Image segmentation is a broad research area, and in many applications segmentation is applied to find distinct groups in the feature space. It separates the data into different regions or clusters, each of which is homogeneous. The proposed noise-reduction algorithm eliminates most of the noise from the input image, including noise along the boundaries of a noisy image. The results show that it is very efficient in segmenting the image and reduces the time complexity. The proposed algorithm can be used for object detection or object analysis in image processing. The segmentation of the image proceeds by combining different segmentation approaches in the 3-D RGB colour space. The clarity of the output segmented image is better in comparison with other image segmentation techniques, and the clarity of the output image depends on the number of clusters used.
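A minimal sketch of clustering-based segmentation in the 3-D RGB colour space, where each pixel is treated as a point and replaced by its cluster centroid; the input filename and the number of clusters are illustrative assumptions, and output clarity varies with the chosen cluster count as noted above.

```python
# Sketch: K-means segmentation of an RGB image in 3-D colour space.
import numpy as np
from sklearn.cluster import KMeans
from PIL import Image

img = np.asarray(Image.open("input.jpg").convert("RGB"))   # hypothetical image
pixels = img.reshape(-1, 3).astype(float)                   # each pixel is an RGB point

k = 5                                                        # number of clusters
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)

# Replace every pixel by its cluster centroid to obtain the segmented image.
segmented = km.cluster_centers_[km.labels_].reshape(img.shape).astype(np.uint8)
Image.fromarray(segmented).save("segmented.jpg")
```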
Next-generation telemedicine networks transform healthcare delivery by enabling real-time diagnostics, monitoring, and treatment. However, they face significant cybersecurity challenges, especially in safeguarding sensitive biomedical images. Traditional cryptographic techniques are vulnerable to emerging quantum computing threats, necessitating advanced solutions. Quantum cryptography, leveraging principles like Quantum Key Distribution (QKD), offers unparalleled security through physical law-based encryption. This chapter explores the integration of quantum cryptography in telemedicine, addressing implementation challenges such as cost, scalability, and infrastructure compatibility. It highlights emerging protocols, like MDI-QKD, and their role in securing biomedical data in applications such as remote surgeries and teleradiology.
In Wireless Local Area Networks (WLAN, IEEE 802.11), a four-way handshake approach is used for authentication during connection establishment. Though the 4-way handshake has been studied by many researchers, it still has inadequacies such as Denial of Service (DoS), Memory Exhaustion (ME), Distributed Denial of Service (DDoS) and flooding attacks. A solution for the aforementioned vulnerabilities is proposed in this work. The proposed work is an enhancement of the 4-way handshake process for a more robust authentication process. This is done by encrypting message-1 using effective encryption techniques; message-2 and message-3 are secured by a cookie packet encrypted with a secret key. The proposed 4-way handshake process is an improvement over the existing 4-way handshake used in IEEE 802.11i. To show its effectiveness and correctness, various simulations are performed and compared with the existing 4-way handshake technique. The enhanced 4-way handshake shows improvement in terms of average delay time, success ratio, packet drop and throughput. Based on the simulation results, it has been observed that the proposed approach is a more secure and effective authentication mechanism for IEEE 802.11i and is largely unaffected by the presence of a hacker.
Recently, cloud computing has progressed quickly as an innovative trend in Information Technology. Against a background of distributed networks, the extensive sprawl of internet resources on the Web and the increasing number of service providers helped cloud computing grow into a large-scale Information Technology service model. The cloud computing environment abstracts the execution details of services and systems from end-users and developers. Additionally, through the virtualization of systems accomplished using resource pooling, cloud computing resources become more accessible. This paper describes the attempt to design and develop a solution that assures reliable and protected authentication and authorization services in such cloud environments. With the help of multi-agents, we represent an Open-Identity (ID) design to find a solution that offers trustworthy and secured authentication and authorization services to cloud-based software services. This research aims to determine how authentication and authorization services can be provided in an agreeable and preventive manner. The evaluation is based on an attack-oriented threat model. By considering security for both authentication and authorization systems, possible security threats are analyzed against the proposed security systems.
Abstract In today's era, when data-handling objectives are growing ever larger in volume, learning and inferring knowledge from complex data becomes the central problem. Research in Knowledge Discovery in Databases has been primarily directed at attribute-value learning, in which each instance is described by a fixed tuple of attributes and their values. The database or dataset is seen as a table relation in which every row corresponds to an instance and every column represents an attribute. This paper introduces a more sophisticated framework, the Hybrid Multi-Relational Decision Tree Learning Algorithm, which overcomes the drawbacks and other anomalies of existing techniques. Results show that the Hybrid Multi-Relational Decision Tree Learning Algorithm provides methods that reduce its execution time. Experimental results on different datasets give a clear indication that the Hybrid Multi-Relational Decision Tree Learning Algorithm is comprehensively the better approach.
Abstract: Lung cancer remains a significant global health challenge, demanding precise and timely diagnostic interventions for improved patient outcomes. This research proposes an innovative approach, the Optimized-ANN (Artificial Neural Network) method, to improve the precision of lung cancer diagnosis through the integration of machine learning techniques. By optimizing the architecture and parameters of the ANN, we aim to achieve superior diagnostic precision, aiding clinicians in early detection and tailored treatment planning. The Optimized-ANN methodology involves a multi-step process encompassing preprocessing of medical imaging data, Principal Component Analysis (PCA) for dimensionality reduction and feature extraction, hyperparameter optimization, and construction of a customized ANN. The resulting model is trained and validated using a diverse dataset, with a focus on robustness and generalization to various patient profiles. Our research adds to the corpus of knowledge by providing a thorough and refined method of diagnosing lung cancer. Evaluation metrics such as F1-score, recall, accuracy, and precision provide a detailed understanding of the design's performance. Furthermore, cross-validation ensures the reliability of the Optimized-ANN across distinct subsets of the dataset. The anticipated outcomes of this research include heightened diagnostic accuracy, efficient feature representation, and adaptability to diverse imaging conditions. As lung cancer diagnosis relies heavily on medical imaging, the Optimized-ANN approach holds the potential to significantly impact clinical decision-making, facilitating earlier interventions and ultimately improving patient prognosis. This paper sets the stage for a detailed exploration of the Optimized-ANN approach, underscoring its potential as a valuable tool in the realm of lung cancer diagnosis and contributing to the broader landscape of machine learning applications in healthcare.
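A hedged sketch of the pipeline shape described above: PCA for feature reduction followed by a hyperparameter search over a neural network, with cross-validation of the tuned model. The feature matrix, labels, PCA size, and parameter grid are all placeholders, since the study's medical imaging data and exact optimizer are not reproduced here.

```python
# Sketch: PCA + grid-searched MLP, mirroring the "Optimized-ANN" steps.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, cross_val_score

X = np.random.rand(200, 64)            # placeholder image-derived features
y = np.random.randint(0, 2, 200)       # placeholder labels (0 = benign, 1 = malignant)

pipe = Pipeline([
    ("pca", PCA(n_components=20)),
    ("ann", MLPClassifier(max_iter=2000, random_state=0)),
])
grid = GridSearchCV(
    pipe,
    {"ann__hidden_layer_sizes": [(32,), (64, 32)],
     "ann__alpha": [1e-4, 1e-3],
     "ann__learning_rate_init": [1e-3, 1e-2]},
    scoring="f1", cv=5)
grid.fit(X, y)

# Cross-validation of the tuned model mirrors the reliability check in the text.
scores = cross_val_score(grid.best_estimator_, X, y, cv=5, scoring="accuracy")
print(grid.best_params_, scores.mean())
```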
The growing role we play in influencing and being influenced by those around us, for better or worse, underscores the importance of having proven ways to analyze and model the propagation of influence. Traditional graph-theoretic approaches alone do not capture the complex, non-linear nature of many real-world social networks. In response to this problem, this paper presents a new framework combining graph and geometric approaches for a full analysis of social network influence. We mix traditional centrality statistics such as degree, betweenness, and clustering coefficients with more sophisticated lower-level geometric features, such as geodesics and curvature analysis obtained using Riemannian geometry. This hybrid model also allows us to infer the key players, who provide further detailed information about the structural strength and interaction of a network. Drawing on geometric characteristics, our approach successfully captures non-linear correlation effects, resulting in more accurate influence-spread estimation. We use real-world social network data to conduct a case study and show that the proposed framework not only enhances the effectiveness of critical node-oriented attacks in target identification but also leads to more accurate discovery of bottlenecks and weakly connected regions. The presented method alleviates the difficulties of established techniques by providing a mathematically structured yet computationally efficient framework that combines the interpretability offered by graph theory with the level of detail from geometric analysis. This work can inform targeted marketing, misinformation control and community behavior dynamics, and lends novel insights to the study of influence dynamics in social networks more broadly.
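A small sketch of the hybrid idea: classical centralities computed with networkx, combined with a simple geodesic-based feature into a single influence score. The example graph, the eccentricity feature, and the weights are assumptions; the paper's Riemannian curvature analysis is beyond this short illustration.

```python
# Sketch: mix graph centralities with a geodesic-based feature to rank nodes.
import networkx as nx

G = nx.karate_club_graph()                      # stand-in social network

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
clustering = nx.clustering(G)
geodesic_ecc = nx.eccentricity(G)               # longest shortest path per node

# Combine the features into a single influence score (weights are illustrative).
score = {v: 0.4 * degree[v] + 0.4 * betweenness[v]
            + 0.1 * clustering[v] + 0.1 / geodesic_ecc[v]
         for v in G.nodes}
top_influencers = sorted(score, key=score.get, reverse=True)[:5]
print(top_influencers)
```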
Abstract Migrations are complex and depend on the project and its requirements, so security is essential to ensure a safe migration to the cloud. There are many things to consider, such as compliance with industry regulations, performance, or internal security policies. When migrating to a cloud environment, security is a shared responsibility: under the shared responsibility model, the cloud service provider is responsible for security of the cloud and the customer is responsible for security in the cloud. In this paper we propose a high-security scheme for transferring large-scale data files to a cloud database management system and an inter-cloud knowledge migration process that provides quick intervals. To measure the effectiveness of the proposed work, we compared its performance with existing cloud migration services. In our proposed work, communication plays an essential role in its success. It is necessary to ensure clear and efficient communication between the parties involved in the migration, from decision makers to IT experts, legal teams and security officials. By sharing the needs, goals, and threats among the parties involved, our proposed work creates a successful migration strategy for the business, thereby avoiding disruptions, data loss, and other potential risks.
The Zika virus presents an extraordinary public health hazard after spreading from Brazil to the Americas. In the absence of credible forecasts of the outbreak's geographic scope and infection frequency, international public health agencies were unable to plan and allocate surveillance resources efficiently. An RNA test is performed on subjects if they are found to be infected with the Zika virus. By training on the specified characteristics, the suggested hybrid optimization algorithm, a multilayer perceptron with a probabilistic optimization strategy, yields a higher accuracy rate. The MATLAB program incorporates numerous machine learning algorithms and artificial intelligence methodologies. It reduces forecast time while retaining excellent accuracy. The predicted classes are encrypted and sent to patients. The Advanced Encryption Standard (AES) and the Triple Data Encryption Standard (3DES) are combined to make this possible. The experimental outcomes improve the accuracy of patient results communication. Cryptosystem processing requires a minimal time of 0.15 s with 91.25 percent accuracy.
The implementation of novel intelligent applications has resulted in security enhancement in the way information is managed. The processing, storing, and functional aspects of computer systems have all been impacted by this revolution. Cloud computing has become a widely adopted and extensively utilized concept. Unfortunately, there are still some obstacles that prevent the widespread use of cloud computing. Edge computing refers to a decentralized kind of computing that allows the processing of data at its source, extending beyond traditional centralized systems. This research study introduces an innovative and efficient approach for identifying human faces for access control and security monitoring using sophisticated deep-learning algorithms. The proposed approach employs the results of two models trained using VGG face and ResNet architectures, respectively. Integrating these models with the deep face model enhances the accuracy and reliability of facial recognition. The process involves extracting distinguishing characteristics from facial photos using pre-trained VGG face and ResNet models. Subsequently, the deep face model is used to merge the characteristics, hence permitting improved portrayal and recognition of facial features. The proposed methodology has been verified by empirical evaluations, which have shown its effectiveness in improving precision and robustness in tasks associated with human facial recognition. The work significantly enhances the advancement of deep learning techniques in the field of face recognition applications for security enhancement.
Biofuels have been the most suitable and sufficient substitutes for conventional fuels such as coal and oil in the energy sector for the last few years. For decades, entire populations have suffered from deforestation, global warming, and carbon monoxide issues. Biofuel production is an eco-friendly process that still has health hazards for manufacturing-unit workers, owing to flammable and combustible raw materials and their chemical reactions. Bioethanol is a flammable biofuel prepared from cellulosic material and the fermentation of grains. Biodiesel is another combustible liquid biofuel produced from alcohol and the glycerides in vegetable oil, catalyzed by a strong alkali such as caustic soda. Biofuel plants and trees are among the biggest producers of isoprene, which interacts with ozone in the earth's atmosphere. This combination with ozone causes many health hazards for manufacturing-unit employees despite several safety measures. One of the significant consequences is that some factory workers often develop asthma, allergies, and lung disorders due to ozone exposure.
The investigation of precision in Stock Market Forecasts (SMF) has led financial professionals to explore numerous modeling approaches. This paper investigates a new technique aimed at advancing the precision of SMF by combining Random Forest (RF) methods with Stochastic Differential Equations (SDEs). Market deviations can be modeled using the Geometric Brownian Motion (GBM) scheme, but this approach has problems because it assumes that market deviations stay identical over time. To address these challenges and make the GBM framework correspond more precisely to unpredictable stock market factors, Random Forest (RF) is used for variable parameter prediction. The changes in the drift and deviation variables of the GBM procedure are defined using RF approaches in the present study, using historical data from the stock market. It examines the performance of the RF-enhanced GBM method by comparing it to GBM with static variables, the Heston Model, and standard GBM while taking unpredictable chances into account. Quantitative metrics such as the Sharpe Ratio (SR), Cumulative Returns (CR), and Maximum Drawdown (MD) were computed for a total of five stock markets in the research. When compared to the other models, the RF-enhanced GBM generally displayed a competitive edge in terms of risk-adjusted returns and was highly accurate in coming up with precise projections.
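A hedged sketch of the RF-enhanced GBM idea: a Random Forest is trained on lagged returns to predict time-varying drift and volatility proxies, which then drive one GBM simulation step. The price series, window length, and feature construction are illustrative assumptions, not the study's actual data or calibration.

```python
# Sketch: RF predicts non-constant GBM parameters from past returns,
# then one GBM step is simulated with the predicted parameters.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0005, 0.01, 1000)))   # synthetic prices
returns = np.diff(np.log(prices))

# Features: window of past returns; targets: next-window mean (drift proxy)
# and standard deviation (volatility proxy).
win = 20
X = np.array([returns[i - win:i] for i in range(win, len(returns) - win)])
mu_y = np.array([returns[i:i + win].mean() for i in range(win, len(returns) - win)])
sigma_y = np.array([returns[i:i + win].std() for i in range(win, len(returns) - win)])

rf_mu = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, mu_y)
rf_sigma = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, sigma_y)

# One GBM step ahead with RF-predicted, time-varying parameters.
last_window = returns[-win:].reshape(1, -1)
mu_hat = rf_mu.predict(last_window)[0]
sigma_hat = rf_sigma.predict(last_window)[0]
dt = 1.0
S_next = prices[-1] * np.exp((mu_hat - 0.5 * sigma_hat**2) * dt
                             + sigma_hat * np.sqrt(dt) * rng.standard_normal())
print(S_next)
```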
This research paper introduces a novel mathematical model that integrates fractional calculus with topological data analysis (TDA) for image segmentation. Fractional calculus, which extends the concept of derivatives and integrals to arbitrary orders, effectively addresses the non-local and hereditary properties of images, capturing textures and patterns across various scales and complexities. TDA, utilizing robust tools for identifying shape and connectivity features, complements this by capturing intricate topological characteristics often overlooked by traditional segmentation methods. Leveraging persistent homology, a core concept in TDA, the model classifies and segments image regions based on their topological features, which remain invariant under deformations such as bending and stretching. This integrated approach aims to enhance the accuracy and reliability of image segmentation, offering a significant advancement over existing methods.
This chapter describes a prototype of a system that automatically inspects the surface of precision-ground metallic surfaces, focusing specifically on bearing rolls. The authors study and test how light bounces off the surface using optical experiments. The aim is to work out how to make the surface appear sharp and bright, so that the contrast between damaged and undamaged surfaces is easy to see. A new method for choosing a threshold value to separate different parts of an image is presented. In addition, the chapter explores various sequential search methods that have been made available to the public. These algorithms are used to find the most suitable group of features for classification, and the chapter also looks at how much computing power these algorithms need. Finally, the authors present the results of classifying 540 blemished images.
Abstract AI Solution for Farmers is an agriculture-based project created to help farmers increase their productivity. Since technology is running the world, why should agriculture be deprived of it? Agriculture is one of the most important areas with major impacts on the economy and society of a country. Technological developments serve as tools to share knowledge and practices about agricultural products and make life more satisfactory for farmers, traders, policymakers, and society overall. It is evident that knowledge has become a crucial component in production, society, food security, health, poverty, and other millennium development goals. The use of technology may provide a better approach to solving the problems that arise from sowing the seed to harvesting the crop. New technologies like Machine Learning and Data Science may be of great help in keeping an eye on the factors that decide the growth of a crop.
Nowadays, cardiovascular ailments are one of the major causes of death globally, taking an estimated 610,000 lives each year. Among the biggest causes of heart disease are high blood pressure, fasting blood sugar, diabetes, high cholesterol, high BMI, and high heart rate. Heart disease diagnosis is more expansive nowadays; it requires high accuracy and involves uncertainty due to the massive amount of data, and decisions made by doctors may fail in some cases. Data mining in healthcare is an intelligent diagnostic tool. Thus, it is necessary to predict the health risks of every human being, depending on age, sex, blood pressure, and diabetes, based on symptoms, along with what can be done as a precaution, by diagnosing disease and providing an appropriate cure at the right instant. This chapter aims to build a classification model and to identify which attributes take part in ailment forecasting by using the Cleveland and Statlog heart datasets. Prediction is implemented with six techniques, naïve Bayes, k-NN, random forest, logistic regression, SVM, and decision tree, on different datasets, followed by a comparative analysis of the classification models for better accuracy and results. Random forest outperformed the other classifiers with an accuracy rate of 99.80%, followed by logistic regression.
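A minimal sketch of the comparative study shape described above: the six listed classifiers evaluated with cross-validated accuracy. The synthetic feature matrix and labels are placeholders for the Cleveland/Statlog attributes, and the cross-validation setup is an assumption.

```python
# Sketch: compare the six classifiers named above with 10-fold cross-validation.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(300, 13)            # placeholder for 13 clinical attributes
y = np.random.randint(0, 2, 300)       # placeholder disease labels

models = {
    "Naive Bayes": GaussianNB(),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=2000),
    "SVM": SVC(kernel="rbf"),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()
    print(f"{name}: {acc:.4f}")
```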
By providing new and unprecedented insights into consumer behavior and preferences, Brain-Computer Interface (BCI) devices are revolutionizing customer experience management. Brain-Computer Interfaces (BCIs) can accurately anticipate consumer demands, expedite service delivery, and customize interactions with clients by utilizing brain data in real-time. This abstract investigates the potential to enhance customer satisfaction, loyalty, and overall engagement through the integration of Brain-Computer Interfaces (BCIs) into customer experience management systems. BCIs facilitate the creation of customized experiences that can dynamically alter to accommodate individual preferences by conducting a more thorough examination of consumer mood and behaviors. This investigation not only examines the technological challenges and ethical dilemmas associated with the integration of BCI into customer experience management, but also examines the potential future opportunities and current advancements.
The integration of Brain-Computer Interface (BCI) technology with mobile devices is a novel approach to enhance organisational administration. This study examines the potential of brain-computer interfaces (BCIs) to enhance task management, communication, and decision-making through smartphone control. BCI technology enables administrators and managers to remotely supervise organisational activities, monitor performance indicators, and access data through a hands-free, real-time interface with mobile applications. The paper examines the technical framework, implementation issues, and implications for organisational agility and productivity in the context of BCI-smartphone integration. Additionally, it investigates the adoption rate of BCI, security concerns, and the potential applications of BCI in mobile organisational tools in the future. This research demonstrates that modern management practices in enterprises could be transformed by brain-computer interface (BCI)-enabled mobile phones, which provide enhanced cognitive, real-time control over diverse processes.
The transmission of medical records over indiscrete and open networks has caused an increase in fraud involving stealing patients' information, owing to a lack of security over these links. An individual's medical documents represent confidential information that demands strict protocols and security, chiefly to protect the individual's identity. Medical image protection is a technology intended to transmit digital data and medical images securely over public networks. This paper presents some background on the different methods used to provide authentication and protection in medical information security. This work develops a secure cryptography-based medical image reclamation algorithm based on a combination of techniques: discrete cosine transform, steganography, and watermarking. The novel algorithm takes patients' information in the form of images and uses a discrete cosine transform method with artificial intelligence and watermarking to calculate peak signal-to-noise ratio values for the images. The proposed framework uses the underlying algorithms to perform encryption and decryption of images while retaining a high peak signal-to-noise ratio value. This value is hidden using a scrambling algorithm; therefore, a unique patient password is required to access the real image. The proposed technique is demonstrated to be robust and thus able to prevent stealing of data. The results of simulation experiments are presented, and the accuracy of the new method is demonstrated by comparisons with various previously validated algorithms.
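A sketch of two building blocks named above, shown under stated assumptions: a 2-D discrete cosine transform of a stand-in patient image and the peak signal-to-noise ratio used to compare original and reconstructed images. The steganographic scrambling, watermarking, and password steps of the proposed framework are not reproduced here.

```python
# Sketch: 2-D DCT round trip plus PSNR measurement on a stand-in image.
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block.T, norm="ortho").T, norm="ortho")

def idct2(block):
    return idct(idct(block.T, norm="ortho").T, norm="ortho")

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak**2 / mse)

image = np.random.randint(0, 256, (128, 128)).astype(float)   # stand-in medical image
coeffs = dct2(image)
coeffs[64:, 64:] = 0            # crude high-frequency suppression (illustrative)
restored = idct2(coeffs)
print("PSNR:", psnr(image, restored))
```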
This work presents Fixed Point Theorems (FPT) on Digital Images (DI) based on the Banach contraction principle. The study aims to develop the application of Banach Contraction Mapping Theory (BCMT) to DI. The finding generates a single Fixed Point (FP) for DI, exploring the core idea of DI and implementing the FPT in the context of Digital Image Compression (DIC). Fractal Image Compression (FIC) is a standard method for DIC. It is founded on a search for parts of the image that accurately resemble other parts of the same image. However, a significant problem with DIC is its computational weight. The study proposes a method to reduce data transmission time by applying Image Compression (IC). It highlights the challenges of maximizing Image Quality (IQ) or reducing transmission time for a given IQ. The classic FIC method, which uses a non-linear contractive mapping with a constant contractive factor, can attain this.
An autonomous, power-assisted Turtlebot is presented in this paper in order to enhance human mobility. The Turtlebot moves from its initial position to its final position at a predetermined speed and acceleration. We propose an intelligent navigation system that relies solely on individual instructions. When no individual is present, the Turtlebot remains stationary. The Turtlebot utilizes a rotating Kinect sensor to perceive its path. Various angles were examined to demonstrate the effectiveness of the system in experiments conducted on a U-shaped experimental pathway, with the Turtlebot used as the experimental device. Based on the U-shaped path, deviations from different angles were measured to evaluate its performance. SLAM (Simultaneous Localization and Mapping) experiments were also explored. We divided the SLAM problem into components and implemented the Kalman filter on the experimental path to address it. The Kalman filter focused on localization and mapping challenges, utilizing mathematical processes that consider both the system's knowledge and the measurement tool. This approach allowed us to achieve the most accurate system state estimation possible. The significance of this work extends beyond the immediate application, as it lays the groundwork for advancements in wheelchair navigation research through dynamic control. The experiments conducted on the U-shaped pathway not only validate the efficacy of our algorithm but also provide valuable insights into the intricacies of navigating in both forward and reverse directions. These insights are pivotal for refining the navigation algorithm, ultimately contributing to the development of more robust and user-friendly systems for individuals with mobility challenges. The data used for this purpose included actuator input, vehicle location, robot movement sensors, and sensor readings representing the world state. Consequently, we found that navigating the Turtlebot in the reverse direction resulted in a 5%-6% increase in deviation compared to forward navigation, providing valuable insight for further improvement of the navigation algorithm.
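A minimal one-dimensional Kalman filter sketch for the localization step described above: a constant-velocity motion model corrected by noisy position measurements. The state layout, noise covariances, and measurement values are illustrative assumptions, not the Turtlebot's actual parameters.

```python
# Sketch: predict-update Kalman filter with a constant-velocity model.
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])          # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # we measure position only
Q = 0.01 * np.eye(2)                     # process noise (actuator uncertainty)
R = np.array([[0.25]])                   # measurement noise (sensor uncertainty)

x = np.array([[0.0], [0.0]])             # initial state estimate
P = np.eye(2)                            # initial covariance

measurements = [0.11, 0.23, 0.29, 0.42, 0.50]   # hypothetical sensor readings
for z in measurements:
    # Predict with the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measurement.
    y = np.array([[z]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
print("estimated position, velocity:", x.ravel())
```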
Diabetic Retinopathy (DR) affects people who have had diabetes mellitus for a long period (20 years). It is one of the most common causes of preventable blindness in the world. If not detected early, it may cause irreversible damage to the patient's vision. Exudates are one of the signs and serious anomalies of DR, so these lesions must be properly detected and treated as soon as possible. To address this problem, the authors propose a novel method that focuses on the detection and classification of exudates as hard and soft in retinal fundus images using deep learning. Initially, the authors collected the retinal fundus images from the IDRiD dataset, and after labelling the exudates with an annotation tool, YOLOv3 is trained with specific parameters according to the classes. The custom detector then detects the exudates and classifies them into hard and soft exudates.
The study is aimed at Machine Learning (ML) researchers working in image processing and biomedical analysis, who play an extensive role in comprehending and acting on complex medical data, eventually improving patient care. Developing a novel ML algorithm specific to Diabetic Retinopathy (DR) is a challenge and the need of the hour. Biomedical images present several challenges, including relevant feature selection, class variations, and robust classification. Although current research in DR has yielded favourable results, several research issues need to be explored. There is a requirement to look at novel pre-processing methods to discard irrelevant features, balance the obtained relevant features, and obtain a robust classification. This is performed using the Steerable Kernalized Partial Derivative and Platt Scale Classifier (SKPD-PSC) method. The novelty of this method relies on the appropriate non-linear classification of exclusive image processing models in harmony with the Platt Scale Classifier (PSC) to improve the accuracy of DR detection. First, a Steerable Filter Kernel Pre-processing (SFKP) model is applied to the Retinal Images (RI) to remove irrelevant and redundant features and extract more meaningful pathological features through Directional Derivatives of Gaussians (DDG). Next, the Partial Derivative Image Localization (PDIL) model is applied to the extracted features to localize candidate features and suppress the background noise. Finally, a Platt Scale Classifier (PSC) is applied to the localized features for robust classification. For the experiments, we used the publicly available DR detection database provided by Standard Diabetic Retinopathy (SDR), called DIARETDB0. A database of 130 image samples has been collected to train and test the ML-based classifiers. Experimental results show that the proposed method, which combines image processing and ML models, can attain good detection performance with a high DR detection accuracy rate and minimum time and complexity compared to the state-of-the-art methods. The accuracy and speed of DR detection for numerous types of images are tested through experimental evaluation. Compared to state-of-the-art methods, the method increases DR detection accuracy by 24% and improves DR detection time by 37%.
Information processing requires handwritten digit recognition, but variations in writing style and image defects like brightness changes, blurring, and noise make image recognition challenging. This paper presents a strategy for categorizing offline handwritten digits in both Devanagari script and Roman script (English numerals) using Deep Learning (DL) algorithms, a branch of Machine Learning (ML) that uses Neural Networks (NN) with multiple layers to acquire hierarchical representations of input autonomously. The research study develops classification algorithms for recognising handwritten digits in numerical characters (0–9), analyses classifier combination approaches, and determines their accuracy. The study aims to optimize recognition results when working with multiple scripts simultaneously. It proposes a simple profiling technique, a Linear Discriminant Analysis (LDA) implementation, and an NN structure for numerical character classification. However, testing shows inconsistent outcomes from the LDA classifier. The approach, which combines profile-based Feature Extraction (FE) with advanced classification algorithms, can significantly improve handwritten numerical character recognition, as evidenced by the diverse outcomes it produces. The model achieved 98.98% accuracy on the MNIST dataset. On the CPAR database, a cross-dataset evaluation achieved 98.19% accuracy.
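A small sketch comparing the two classifier types discussed above, LDA and a neural network, on digit data. The scikit-learn digits set is used here as a lightweight stand-in for MNIST/CPAR, and the network size is an assumption rather than the paper's architecture.

```python
# Sketch: LDA vs. a small neural network on handwritten-digit features.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X / 16.0, y, test_size=0.2, random_state=0)

lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
nn = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500,
                   random_state=0).fit(X_tr, y_tr)

print("LDA accuracy:", accuracy_score(y_te, lda.predict(X_te)))
print("NN accuracy:", accuracy_score(y_te, nn.predict(X_te)))
```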
Artificial neural networks take an altered approach to problem solving compared with conventional computers. They are comparatively crude electronic designs based on the neural structure of the brain. The chapter describes two different types of approaches to training, supervised and unsupervised, as well as real-time applications of artificial neural networks. Based on the character of the application and the strength of the internal data patterns, we can normally expect a network to train quite well. ANNs offer an analytical alternative to conventional techniques, which are often restricted by severe presumptions of normality, linearity, variable independence, etc. The chapter also describes the items required for pest management through pheromones, explains the different types of pests, and focuses on the use of pest-control pheromones.
Agriculture contributes a significant amount to the economy of India due to human beings' dependence on it for their survival. The main obstacle to food security is population expansion leading to rising demand for food. Farmers must produce more on the same land to boost the supply. Through crop yield prediction, technology can assist farmers in producing more. This paper's primary goal is to predict crop yield utilizing the variables of rainfall, crop, meteorological conditions, area, production, and yield, factors that have posed a serious threat to the long-term viability of agriculture. Crop yield prediction is a decision-support tool that uses machine learning and deep learning to decide which crops to produce and what to do during the crop's growing season. Regardless of the distracting environment, machine learning and deep learning algorithms are utilized in crop selection to reduce agricultural yield output losses. To estimate the agricultural yield, machine learning techniques (decision tree, random forest, and XGBoost regression) and deep learning techniques (convolutional neural network and long short-term memory network) have been used. Accuracy, root mean square error, mean square error, mean absolute error, standard deviation, and losses are compared. Other machine learning and deep learning methods fall short compared to the random forest and the convolutional neural network. The random forest has a maximum accuracy of 98.96%, mean absolute error of 1.97, root mean square error of 2.45, and standard deviation of 1.23. The convolutional neural network has been evaluated with a minimum loss of 0.00060. Consequently, a model is developed that, compared to other algorithms, predicts the yield quite well. The findings are then analyzed using the root mean square error metric to better understand how the model's errors compare to those of the other methods.
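A hedged sketch of the regression comparison described above, shown for the random forest only; the synthetic columns (rainfall, area, temperature) and the target formula are placeholder assumptions standing in for the study's rainfall/crop/area/production features.

```python
# Sketch: random-forest yield regression evaluated with MAE, MSE, and RMSE.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "rainfall": rng.uniform(200, 1500, 500),
    "area": rng.uniform(1, 100, 500),
    "temperature": rng.uniform(15, 40, 500),
})
# Synthetic yield target; real studies would use recorded production data.
df["yield"] = (0.01 * df["rainfall"] + 0.5 * df["area"]
               - 0.2 * df["temperature"] + rng.normal(0, 2, 500))

X_tr, X_te, y_tr, y_te = train_test_split(df.drop(columns="yield"), df["yield"],
                                          test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)

mse = mean_squared_error(y_te, pred)
print("MAE:", mean_absolute_error(y_te, pred),
      "MSE:", mse, "RMSE:", np.sqrt(mse))
```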
Post-to-Facebook data have been excluded from text and image analysis investigations of Social Media (SM) participation, which have tested techniques for predicting activity. SM has fundamentally revolutionised the marketing division by presenting a direct link to users' inboxes. This research investigates Natural Language Processing (NLP) and Deep Convolutional Neural Networks (DeepCNN) to determine whether these technologies can improve SMA. Advertisers can support their SMA approaches by employing such methods to recognise consumer demands, behaviours, and preferences. A novel technique that integrates Deep Learning and Natural Language Processing in order to improve SM awareness has the potential to help revolutionise online advertising techniques, opening the door for additional studies and setting the foundations for a Decision-Making System (DMS) that includes advertising data analytics and Artificial Intelligence (AI). A distinctive framework that forecasts how users behave using like count, post count, and sentiment was built utilising 500k Facebook posts as the basis for the research investigation's approach. Image and text data together outperformed the baseline methods, demonstrating that data fusion is essential when predicting user behaviour.
Ensuring data privacy while applying Deep Learning (DL) to distributed datasets represents an essential task in the current period of critical data security. Data privacy and model accuracy are typically in tension under traditional methods. Data privacy is of tremendous significance in distributed data settings, and the current research presents a novel model for DL based on Secure Multi-Party Computation (SMPC). In conventional methods, the accuracy of the mathematical models and the confidentiality of the data are frequently forced to trade off against each other. In order to enable collaborative DL without compromising private information, the recommended system uses the Paillier Homomorphic Encryption Scheme (PHES). By using innovative cryptographic methods, this decentralized method secures the confidentiality of data without utilizing a Trusted Authority (TA). Through a thorough assessment on the CIFAR-10 and IMDB datasets, the present study demonstrates that the system is simple and scalable and that it offers accuracy on par with conventional methods. By presenting an approach for achieving a balance between the two competing demands of data security and computational performance, this method signifies a vast step forward for confidentiality-preserving DL.
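A sketch of the Paillier building block only, using the python-paillier library (`pip install phe`): parties encrypt local gradient values, an untrusted aggregator sums the ciphertexts homomorphically, and only the key holder decrypts the aggregate. The gradient values and key length are illustrative; the full SMPC protocol and model training described above are out of scope here.

```python
# Sketch: additively homomorphic aggregation of gradients with Paillier.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Two parties encrypt their local gradient values for one parameter.
grad_party_a = 0.125
grad_party_b = -0.050
enc_a = public_key.encrypt(grad_party_a)
enc_b = public_key.encrypt(grad_party_b)

# The aggregator adds ciphertexts without ever seeing the plaintexts.
enc_sum = enc_a + enc_b

# Only the private-key holder recovers the aggregated gradient.
aggregated = private_key.decrypt(enc_sum)
print(aggregated)   # approximately 0.075
```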
Insect monitoring involves collecting information about insect activity with the help of traps and lures. Many different types of traps are used, and they can be divided into the following types: light traps, sticky traps and pheromone traps. After trapping the insect, the next step involves monitoring tools to monitor the further behavior of insects. Monitoring includes checking crop fields for early detection of pests and identification of pests. Identification helps in finding which are the best naturally occurring control agents and in assessing the efficiency of pest control actions that have already been taken. The main purpose of this paper is to design an insect monitoring system that assesses insect activity and obtains population estimates so that a solution can be deployed that will be most effective at protecting crops. The system involves the use of traps and lures to gather information on insect activity. Traps are strategically placed throughout the crop and include natural semiochemical attractants to draw insects into the traps.
Abstract There is a tremendous upturn in data repositories because of the large amounts of data generated by various organizations such as government, corporates, and healthcare. A large amount of data is being produced, processed, collected, and analysed online, so there arises a requirement to transform this data into valuable information. This process of extracting knowledge from large amounts of data is referred to as data mining. The proposed hybrid approach can be checked with different classifiers such as Naïve Bayes and Random Forest. In the proposed methodology we find that the SMOTE algorithm, which uses the K-nearest neighbour algorithm, is limited to some minority class instances for creating synthetic samples, which sometimes leads to overfitting, so an effective oversampling approach can be developed.
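A minimal sketch of the oversampling step discussed above: SMOTE (via imbalanced-learn) balancing a skewed dataset before training a classifier such as Random Forest. The synthetic data, class weights, and neighbour count are illustrative assumptions, not the paper's hybrid method.

```python
# Sketch: SMOTE oversampling of the minority class before classification.
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
print("before:", Counter(y))

# k_neighbors controls how many minority-class neighbours SMOTE interpolates
# between; too few minority samples can lead to the overfitting noted above.
X_res, y_res = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)
print("after:", Counter(y_res))

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_res, y_res)
```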
Edge detection plays a vital role in medical imaging applications such as MRI segmentation. Magnetic resonance imaging (MRI) is an imaging technique used in medical science to diagnose tumours of the brain by producing high-quality images of the inside of the human body, using various edge detectors. Many edge detectors exist, but there is still a need for research to enhance their performance. A very common problem faced by most edge detectors is the choice of threshold values. This paper presents fuzzy-based edge detection using the K-means clustering method. The K-means clustering approach is used to generate the various groups, which are then input to a Mamdani fuzzy inference system. This process results in the generation of a threshold parameter, which is then fed to the classical Sobel edge detector, enhancing its edge detection capability using fuzzy logic. The whole setup is applied to MR images of the human brain. The results show that fuzzy-based K-means clustering enhances the performance of the classical Sobel edge detector while retaining much of the relevant information about the tumours of the brain.
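A sketch of the described flow under simplifying assumptions: K-means groups the MR-image intensities, the cluster statistics feed a threshold (here a crude stand-in for the Mamdani fuzzy inference step), and the threshold gates the classical Sobel gradient magnitude. The image is random placeholder data.

```python
# Sketch: intensity clustering -> derived threshold -> gated Sobel edges.
import numpy as np
from sklearn.cluster import KMeans
from scipy import ndimage

mri = np.random.rand(128, 128)                     # placeholder MR slice

# Cluster pixel intensities into three groups (e.g. background, tissue, lesion-like).
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(mri.reshape(-1, 1))
centers = np.sort(km.cluster_centers_.ravel())

# A Mamdani fuzzy inference system would map cluster centres to a threshold;
# here we approximate that with the midpoint of the two brightest centres.
threshold = (centers[1] + centers[2]) / 2.0

# Classical Sobel gradient magnitude, gated by the derived threshold.
gx = ndimage.sobel(mri, axis=0)
gy = ndimage.sobel(mri, axis=1)
magnitude = np.hypot(gx, gy)
edges = magnitude > threshold * magnitude.max()
print("edge pixels:", int(edges.sum()))
```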
This study emphasizes the importance of adopting a consumer-centric approach to supply chain management, highlighting the role of data-driven analytics, including artificial intelligence and machine learning (AI/ML), in extracting actionable insights from consumer data. The chapter looks into the developing landscape of supply chain management, examines the role of data-driven analytics in extracting actionable insights from consumer data, and discusses how such insights can enhance demand forecasting, personalization strategies, supply chain efficiency, customer satisfaction, and risk mitigation.
Quantum cryptography is a revolutionary discovery in the field of network security. Quantum cryptography promises to provide sophisticated functionality for security issues, but it also leads to an unbelievable increase in computational parallelism, which could aid potential cryptanalytic attacks. Some of the associated properties of the Quantum Key Distribution protocol provide the security that would otherwise be deficient for the shared key to be transmitted securely. The identity verification process attempts to maximize success in interpreting the EPR protocol for quantum key distribution.
Most cars of the future might be intelligent and self-driving, and 6G will be the main technology used in automobiles. This study presents an overview of 6G-enabling technologies and their future applications, with a focus on the primary role 6G will play in automotive technology. The new 6G wireless connection is an emerging technology with the potential to completely transform automotive technology. It is anticipated that more devices will be connected in the future via the new 6G technology; therefore, it must outperform more contemporary, cutting-edge communication technologies like 5G in terms of performance. Heterogeneous object connectivity, ultra-low latency, high throughput, low power consumption, and high data reliability are the main characteristics of 6G technology. Additionally, it will use artificial intelligence to facilitate secure and intelligent communication.
This chapter employs a structured secondary research approach to comprehensively investigate the evolution of artificial intelligence (AI) in the context of business. It encompasses a rigorous literature review, data collection from reputable sources, and meticulous analysis. The study conducts comparative analyses across diverse industries, supplemented by real-world case studies, to illuminate the practical applications of AI. Additionally, the chapter explores ethical considerations and regulatory frameworks, synthesizing findings to address gaps in the existing literature. The research adheres to ethical guidelines and presents its insights in a clear and organized manner.
Federated Learning (FL), a significant novel approach in distributed ML, enables multiple parties to work simultaneously on developing models while securing the confidentiality of their unique datasets. There are privacy issues with FL, particularly for models being trained, because private information can be recovered from shared gradients or model updates. This investigation proposes SecureHE-Fed, a novel system that improves FL's defense against privacy attacks through the use of Homomorphic Encryption (HE) and Zero-Knowledge Proofs (ZKP). Before client data becomes involved in the learning procedure, SecureHE-Fed encrypts it. This allows computations over encrypted values without revealing the underlying data. As an additional security check, ZKP is employed to verify that model updates are valid without revealing the underlying information. By evaluating SecureHE-Fed against different FL techniques, we demonstrate that it enhances confidentiality while maintaining model precision. The results obtained validate SecureHE-Fed as a secure and scalable FL approach, and we recommend its use in applications where user confidentiality is essential.
Roads are the primary form of transportation, but the high volume of traffic and other environmental conditions mean they require regular maintenance. This maintenance is frequently neglected, either because it is impossible to monitor every location or simply through oversight. Potholes are created as a result, which worsens congestion and increases the likelihood of accidents. Many methods and systems exist for detecting potholes using various image processing techniques, but their accuracy degrades considerably in rainy weather. In this chapter, a system is designed to detect potholes effectively during the rainy season. The system also records the locations of potholes, which can be forwarded to the authorities for maintenance work. The proposed system can also be used in driverless cars.
The growing threat of cyberattacks has made research on secure online systems a pressing concern. Elliptic Curve Cryptography (ECC) is an established security benchmark that provides high security levels with small key sizes. This work presents an Algebraic Cryptanalysis (AC) technique for evaluating and strengthening ECC security systems using Gröbner Bases (GB). The paper investigates ECC vulnerabilities and recommends a GB-based technique for addressing the Elliptic Curve Discrete Logarithm Problem (ECDLP). The approach is evaluated against conventional techniques such as Baby-Step Giant-Step (BS-GS), Pollard's Rho (PR), and Pohlig-Hellman (PH). The results show that the proposed approach achieves a higher Success Rate (SR), averaging 96.33% across all tests.
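For context on the baseline techniques named above, the following minimal sketch implements Baby-Step Giant-Step for a discrete logarithm in a multiplicative group modulo a prime; it is a generic-group illustration with toy parameters, not the elliptic-curve or Gröbner-basis method proposed in the paper.

```python
# Baby-Step Giant-Step for g^x = h (mod p): O(sqrt(order)) time and memory.
from math import isqrt

def bsgs(g, h, p, order):
    m = isqrt(order) + 1
    # Baby steps: table of g^j for j = 0..m-1.
    table = {pow(g, j, p): j for j in range(m)}
    # Giant-step factor g^(-m) mod p (Python 3.8+: modular inverse via pow).
    factor = pow(g, -m, p)
    gamma = h
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = (gamma * factor) % p
    return None  # no solution for the given order

# Toy example: solve 2^x = 63 (mod 101); 2 generates the group of order 100.
x = bsgs(2, 63, 101, 100)
print(x, pow(2, x, 101))  # x = 47, and 2^47 mod 101 == 63
```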
The world is still battling the coronavirus disease (COVID-19) pandemic, and even after two years, technology is not yet sufficiently advanced to fight the virus with maximum efficiency. The total number of COVID-19 cases worldwide has surpassed 420 million, with 5.8 million deaths. COVID-19 presents with various symptoms, one of which is pneumonia, and a person suffering from pneumonia may or may not be carrying COVID-19. This research article analyses chest X-ray (X-radiation) images of patients with pneumonia and identifies which of those patients also have the coronavirus. The system is based on Deep Learning (DL) and applies the Convolutional Neural Network (CNN) method, implemented with the TensorFlow and Keras frameworks. First, the images are loaded, cleaned, and preprocessed accordingly. Next, the Neural Network (NN) layers are set up: the CNN layers are formed and the Rectified Linear Unit (ReLU) Activation Function (AF) is applied. Finally, the model is trained and a classification is performed to determine whether a patient's X-ray indicates pneumonia alone or pneumonia plus COVID-19. The reported accuracy ranges from 96% to 98%.
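The following minimal Keras sketch illustrates the kind of CNN pipeline described above (Conv2D layers with ReLU followed by a binary pneumonia-only vs. pneumonia-plus-COVID-19 head); the layer sizes, image shape, and dataset objects are illustrative assumptions, not the authors' exact architecture.

```python
# Illustrative CNN for binary X-ray classification with TensorFlow/Keras.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 1)),            # grayscale X-ray, assumed size
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),        # pneumonia vs pneumonia + COVID-19
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# train_ds / val_ds would be tf.data.Dataset objects yielding (image, label)
# batches produced by the cleaning/preprocessing step.
# model.fit(train_ds, validation_data=val_ds, epochs=20)
model.summary()
```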
The development of AI has made it possible to envision "thinking robots" that are capable of learning and taking over human roles. Since the late 1970s, AI has shown considerable promise in improving human decision-making processes and, by extension, productivity across a wide range of business endeavours, thanks to its ability to recognise business patterns, understand business phenomena, seek information, and intelligently analyse data. While artificial intelligence has many practical applications, it is seldom applied in supply chain management (SCM). In order to realise AI's numerous potential benefits, this chapter explores the areas of AI best suited to tackling real-world SCM difficulties, analysing the history of AI's use in SCM applications and identifying its most promising future uses.
On one hand, modern symmetric-key cryptography protects everything from online banking to critical infrastructure, and the enormous key space of AES-128 alone, around 3.4×10³⁸ possibilities, makes brute-force attack impractical, motivating the development of more sophisticated search techniques. Traditional combinatorial attacks eliminate keys based on algebraic relationships, but they have exponential complexity even for low-degree equations and do not generalize across cipher designs. At the same time, recent advances in machine learning demonstrate that reinforcement learning (RL) agents can learn to control search heuristics, cutting solution times by as much as a factor of ten in closely related optimization problems and greatly improving success rates in side-channel key recovery. In this paper, we bridge this gap by incorporating an RL policy into a combinatorial key-search engine, with partial-key candidates and their statistical properties as states, to guide the ordering of candidates towards favourable branches. Results on several cipher instances demonstrate a 14–19 percentage-point improvement in recovery rates and close to a two-times speed improvement over standard and pure-RL baselines, suggesting a promising path towards more intelligent, automated cryptanalysis.
Agriculture sustains the survival needs of millions of people. AI, in the form of machine learning and deep learning, can provide a number of strategies that assist in the creation of healthier seeds. This paper discusses the significance of machine learning and deep learning in giving growers access to increasingly sophisticated data and analytical tools, allowing them to make better decisions, improve efficiency, and reduce waste in food and bio-fuel production while minimizing negative environmental impacts. On the basis of critical parameters such as temperature, rainfall, humidity, soil type, and soil characteristics, ML and DL operate as recommenders and advise farmers on the right action to take. Numerous AI applications in agriculture are addressed, with an emphasis on yield prediction. The article offers a comprehensive review of a variety of ML, DL, and hybrid methodologies for correctly forecasting agricultural outputs that will promote the nation's economic growth.
Current pronunciation scoring based on Goodness of Pronunciation (GOP) uses the posterior probabilities of acoustic models. Such algorithms generalize poorly because they produce a score for each phoneme rather than assessing completeness or comparing against the ideal utterance of the words. In this paper, a novel method is proposed that computes a combined score from prosody, fluency, completeness, and accuracy. This is achieved using context-aware GOP in conjunction with dynamic time warping (DTW) matching of the pitch contours against a weighted average of the context tokens found in audio that is rich in mispronounced phonemes. The proposed approach offers flexibility in tuning the results towards different speech aspects through a single hyperparameter. The results are encouraging and have been validated on the speechocean762 dataset, with the Automatic Speech Recognition (ASR) model trained on the LibriSpeech dataset. The proposed approach attains a mean error of 3.38% and a correlation coefficient of 0.652.
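As background for the DTW matching step mentioned above, here is a minimal NumPy sketch of dynamic time warping between two pitch contours; the contours and the absolute-difference local cost are illustrative assumptions, not the paper's exact weighting scheme.

```python
# Minimal dynamic time warping (DTW) between two 1-D pitch contours.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    # Accumulated-cost matrix with a border of infinity.
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])       # local cost: |f0_i - f0_j|
            # Extend the cheapest of the three allowed predecessor moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical pitch contours (Hz) for a reference and a learner utterance.
reference = np.array([120, 125, 130, 128, 122, 118], dtype=float)
learner = np.array([118, 121, 129, 131, 126, 120, 117], dtype=float)
print("DTW cost:", dtw_distance(reference, learner))
```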
In today's digital era, text is often embedded in images. This research addresses the problem of recognizing such text using the support vector machine (SVM). A great deal of work has been done on handwritten character recognition for English, but very little for the under-resourced Hindi language. A method is developed for identifying Hindi characters using morphology, edge detection, histograms of oriented gradients (HOG), and SVM classification, and the recognized text is then used for summary creation. SVM-rank builds the summary by extracting essential phrases based on features such as paragraph position, phrase position, numerical data, inverted commas, sentence length, and keywords. The primary goal of the SVM optimization function is to reduce the number of features by eliminating unnecessary and redundant ones; the second goal is to maintain or improve the classification system's performance. The experiments included news articles from various genres, such as Bollywood, politics, and sports. The proposed method's accuracy for Hindi character recognition is 96.97%, which compares well with baseline approaches, and the system-generated summaries are compared with human summaries. The evaluation shows a precision of 72% at a compression ratio of 50% and a precision of 60% at a compression ratio of 25%, which is a decent result compared with state-of-the-art methods.
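A minimal sketch of the HOG-plus-SVM recognition stage described above, assuming the scikit-image and scikit-learn libraries; the 32×32 glyph size, HOG parameters, and placeholder training arrays are hypothetical, not the authors' configuration.

```python
# HOG feature extraction + SVM classification for segmented character images.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(glyph):
    """glyph: 2-D grayscale array (e.g. 32x32) of a segmented character."""
    return hog(glyph,
               orientations=9,
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2),
               block_norm="L2-Hys")

# Placeholder data: lists of segmented character images and their class labels.
X_train = [np.random.rand(32, 32) for _ in range(100)]
y_train = np.random.randint(0, 10, size=100)

features = np.array([hog_features(img) for img in X_train])
classifier = LinearSVC(C=1.0, max_iter=5000)
classifier.fit(features, y_train)

# Predicting the class of a new glyph.
new_glyph = np.random.rand(32, 32)
print("predicted class:", classifier.predict([hog_features(new_glyph)])[0])
```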
Background and Objective: VANET (Vehicular Ad-hoc Network) is an application of the intelligent transportation system that improves traffic safety as well as efficiency. We have reviewed the patents related to vehicular ad-hoc networks and their issues. A great deal of information is needed in advance to avoid road accidents. This paper develops a framework that minimizes the possibility of a black hole attack in VANET. In our view, there are two possible solutions for this purpose. The first is to look for alternative routes to the same destination. The second consists of exploiting the packet sequence number that is always included in each packet header. The second procedure is able to verify 72% to 96% of the discovered routes, depending on the pause time t, the minimum delay in packet transition in the network when the AODV routing protocol is used for packet transmission. Methods: In this approach we used twenty-five nodes, of which two are source nodes, two are destination nodes, and four are invaders. We analysed the effects of these invaders on the network and studied their behaviour over different time periods to determine whether an invader is a black hole or a grey hole invader. An AWK script is used to calculate sent packets, received packets, packet drops, packet drop fraction, and end-to-end delay. Results: Through this work we simulated the result in a time frame of 100 ms manually; since the time frame is not available on the graph, it is processed by the trace graph accordingly. In the simulation we initially took 25 nodes and started the procedure of sending packets over the nodes. At first, packets are broadcast to every node to find the location of the nodes; these broadcast packets are dropped once the path is established, and the packets are then transferred along the path established over the network. Conclusion: VANET is seen as the future of networking, and securing it against various attacks is crucial. A secured VANET is essential for its future, and securing the network now will also increase the likelihood that VANET will develop and reduce the time needed for its implementation in real-world scenarios. In this work, we have designed a framework and analysed it for possible black hole and grey hole attacks, and the effects of these attacks are recorded and studied in practice. The analysis concludes that the attacks can be both implemented and detected over the network.
With e-commerce constantly changing, the introduction of 6G technology presents both possibilities and difficulties. This chapter examines how e-commerce and 6G intersect, emphasizing how important it is to have strong security measures in place to counter new threats. The chapter starts with a summary of the 6G era's e-commerce environment and looks at the security issues that come with it before going into detail about the particular risks that 6G technology poses to e-commerce platforms. After that, the chapter turns its attention to methods for making e-commerce platforms more resilient to these attacks. It looks at how important it is to strengthen access controls and authentication, encrypt data for better security, and create effective disaster recovery plans.
Wireless Sensor Networks (WSNs) are spatially distributed networks of independent sensor nodes that can sense physical characteristics such as temperature, sound, pressure, and energy. WSNs require a secure link to the physical environment and robustness. Data Aggregation (DA) plays a key role in WSNs and helps to minimize Energy Consumption (EC). To achieve trustworthy DA with a high aggregation rate in WSNs, existing research has focused on Data Routing for In-Network Aggregation (DRINA). Yet an effective balance between overhead and routing has not been achieved, and the EC required during DA remains unresolved. The detection of objects belonging to the same event in specific regions by the Bayes Node is distributed through the Sensor Nodes (SNs). A Multi-Sensor Data Synchronization Scheduling (MSDSS) framework is proposed for efficient DA at the sink in a heterogeneous sensor network. Secure and Energy-Efficient based In-Network Aggregation Sensor Data Routing (SEE-INASDR) is developed based on a Dynamic Routing (DR) structure with reliable data transmission in WSNs. Theoretical analysis and experimental results showed that the proposed Bayes Node Energy Polynomial Distribution (BNEPD) technique reduced the Energy Drain Rate (EDR) by 39% and Communication Overhead (CO) by 33% using the polynomial distribution algorithm. Similarly, the proposed MSDSS framework increased Network Lifetime (NL) by 15% and Data Aggregation Routing (DAR) by 10.5%. Finally, the SEE-INASDR framework significantly reduced EC by 51% using a Secure and Energy-Efficient Routing Protocol (SEERP).
In recent times, text summarization has gained enormous attention from the research community. Among the many applications of natural language processing, text summarization has emerged as a critical component of information retrieval. Over the past two decades in particular, researchers have made many attempts to produce robust, useful summaries. Text summarization may be described as automatically constructing a summary of a given document while keeping the most important information within the content itself. The technique also helps users quickly grasp the fundamental notions of information sources. The current trend in text summarization, however, is increasingly focused on news summaries. Early work in summarization started from single-document summarization, which generates a summary of a single paper. As research advanced, mainly due to the vast quantity of information available on the internet, the concept of multidocument summarization evolved; it generates summaries from a large number of source documents that are all about the same subject or event. Because of content duplication, however, news summarization systems do not cope well with multidocument news summarization. Using a Naive Bayes classifier, news websites were distinguished from non-news web pages by extracting content, structure, and URL characteristics. The Naive Bayes classifier is also compared with the SMO and J48 classifiers on the same dataset, and the findings show that it performs much better than the other two. After classification, the important content is extracted from the correctly classified news web pages and used for keyphrase extraction from the news articles. A keyphrase can be a single word or a combination of words representing a significant concept in the article. Our proposed approach to keyphrase extraction identifies candidate phrases from the news articles and chooses the highest-weight candidates using a weight formula that includes features such as TF-IDF, phrase position, and the construction of a lexical chain representing the semantic relations between words using WordNet. The proposed approach shows promising results compared with other existing techniques.
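A minimal scikit-learn sketch of the news vs. non-news classification step described above, using TF-IDF text features and a Naive Bayes classifier; the tiny in-line page snippets and labels are hypothetical, and the real system also uses structure and URL features not shown here.

```python
# News vs. non-news page classification with TF-IDF features + Naive Bayes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical page texts and labels (1 = news page, 0 = non-news page).
pages = [
    "Prime minister announces new budget for infrastructure projects",
    "Buy the best running shoes online with free shipping",
    "Cricket team wins the series after a thrilling final over",
    "Login to manage your account settings and preferences",
]
labels = [1, 0, 1, 0]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           MultinomialNB())
classifier.fit(pages, labels)

test_page = "Election results to be declared tomorrow, officials say"
print("news page" if classifier.predict([test_page])[0] == 1 else "non-news page")
```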
Agriculture is the foremost factor in the survival of human beings. Farming contributes a very large share of GDP, yet several areas still require improvement; one of them is crop recommendation. Accurate crop prediction boosts crop productivity. As crop production has already started to suffer from climate change, improving crop output is desirable, because agronomists are unable to select the appropriate crop(s) based on environmental and soil parameters, and the manual process of forecasting the selection of appropriate crops has failed. Factors such as soil characteristics, soil type, climate characteristics, temperature, rainfall, area, humidity, and geographic location affect crop forecasting. This chapter focuses mainly on building a recommendation system, i.e., suggesting the kind of crop to grow by applying various machine learning and deep learning techniques to these parameters. The system would help farmers take appropriate decisions regarding crop type.
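To make the idea concrete, here is a minimal scikit-learn sketch of a crop recommender trained on soil and climate parameters; the feature columns, sample values, and the choice of a random forest are illustrative assumptions, not the chapter's specific models.

```python
# Toy crop recommendation: predict a crop label from soil/climate features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical samples: [temperature (C), rainfall (mm), humidity (%), soil pH]
X = np.array([
    [26.0, 210.0, 80.0, 6.5],
    [22.0,  60.0, 45.0, 7.2],
    [30.0, 180.0, 75.0, 6.0],
    [18.0,  40.0, 35.0, 7.8],
])
y = np.array(["rice", "wheat", "rice", "barley"])  # hypothetical labels

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Recommend a crop for new field conditions.
new_field = np.array([[27.5, 195.0, 78.0, 6.3]])
print("recommended crop:", model.predict(new_field)[0])
```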
Consumer expectations and demands for quality of service (QoS) from network service providers have risen as a result of the proliferation of devices, applications, and services. Considerable research is being conducted by network design and optimization experts, yet the constantly changing network environment continues to present new issues that today's networks must deal with effectively. Increased capacity and coverage are achieved by joining existing networks, and researchers are now investigating mobility management to make the previous paradigm more flexible, user-centred, and service-centric. 5G networks provide higher availability, extremely high capacity, increased stability, and improved connectivity, in addition to faster speeds and lower latency. Besides fulfilling stringent application requirements, the network infrastructure must be more dynamic and adaptive than ever before, and network slicing, done correctly, may be able to meet these requirements. The current study uses sophisticated fuzzy logic to create algorithms for mobility and traffic management that are as flexible as possible while maintaining high performance. Ultimately, the purpose of this research is to improve the quality of service provided by current mobility management systems while optimizing the use of available network resources. Building on SDN (Software-Defined Networking) and NFV (Network Function Virtualization) technologies is essential. Network slicing is an architectural framework for 5G networks intended to accommodate a variety of different networks, and it is becoming more important due to the increasing demand for data rates, bandwidth capacity, and low latency across diverse use cases.
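The sketch below illustrates the flavour of fuzzy-logic decision making used for mobility and traffic management: triangular membership functions over signal quality and cell load feed two hand-written rules whose aggregate gives a handover score. The membership breakpoints and rules are invented for illustration and are not the study's actual controller.

```python
# Tiny fuzzy-logic sketch: decide a handover score from signal quality and load.

def tri(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def handover_score(signal_dbm, cell_load):
    # Fuzzify inputs (breakpoints are illustrative assumptions).
    weak_signal = tri(signal_dbm, -120, -105, -90)
    strong_signal = tri(signal_dbm, -95, -75, -55)
    high_load = tri(cell_load, 0.5, 0.8, 1.01)
    low_load = tri(cell_load, -0.01, 0.2, 0.5)

    # Rules: weak signal OR high load -> hand over; strong signal AND low load -> stay.
    fire_handover = max(weak_signal, high_load)
    fire_stay = min(strong_signal, low_load)

    # Defuzzify with a weighted average of rule outputs (1 = hand over, 0 = stay).
    total = fire_handover + fire_stay
    return 0.5 if total == 0 else fire_handover / total

print(handover_score(-110, 0.9))   # weak signal, loaded cell  -> close to 1
print(handover_score(-70, 0.1))    # strong signal, idle cell  -> close to 0
```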
The liver is one of the most significant and essential organs in the human body. It is divided into two granular lobes, one on the right and one on the left, connected by a bile duct. The liver is essential in removing waste products from food consumption, creating bile, regulating metabolic activities, cleansing the blood by supporting digestive management, and storing vitamins and minerals. To classify liver illnesses using computed tomography (CT) scans, two critical phases must first be completed: liver segmentation and categorization. The most difficult challenge in categorizing liver disease is distinguishing the liver from the other organs near it. Methodology: Liver biopsy is an invasive diagnostic procedure, widely regarded as the gold standard for accurately estimating the severity of liver disease. Noninvasive approaches for examining liver illness, such as blood serum markers and medical imaging (ultrasound, magnetic resonance (MR), and CT), have also been developed. This approach uses the Partial Differential Technique (PDT) to separate the liver from the other organs and the Level Set Methodology (LSM) to separate the cancer location from the surrounding tissue in the projected images used as input. With the help of an Improved Convolutional Classifier, the categorization of the different phases is accomplished. Accuracy, sensitivity, and specificity measurements are produced to assess the categorization of the LSM with the Improved Convolutional classifier. Approximately 97.5% classification accuracy is achieved, with a 94.5% confidence interval (CI) of [0.6775, 1.0000] and an error rate of 2.1%. The suggested method's performance is compared with that of two existing algorithms; the sensitivity and specificity average 96% and 93%, respectively, with 95% confidence intervals of [0.7513, 1.0000] and [0.7126, 1.0000].
Doctors use handwritten prescriptions and radiological reports to prescribe drugs to patients who have illnesses, injuries, or other problems. Clinical text data, such as physician prescription images and radiology reports, should be labelled with specific information such as disease type, features, and anatomical location for more effective use. The semantic annotation of vast collections of biological and biomedical texts, such as scientific papers, medical reports, and general practitioner observations, has lately been examined by doctors and scientists. By identifying and disambiguating references to biomedical concepts in texts, medical semantic annotators can generate such annotations automatically. For Medical Images (MedIMG), we provide a methodology for learning an effective holistic representation of handwritten word pictures as well as radiology reports. Deep Learning (DL) methods have recently gained much interest for their capacity to achieve expert-level accuracy in automated MedIMG analysis. We found that tasks requiring large receptive fields are well suited to downscaled input images, which we verified qualitatively by examining functional receptive areas and class activation maps of the trained models. This article focuses on the following contributions: (a) information extraction from narrative MedIMG, (b) automatic categorisation of image resolution and its impact on MedIMG, and (c) a hybrid model for Named Entity Recognition prediction using RNN + LSTM + GRM that performs admirably across training runs and input purposes. At the same time, supplying interpretable scale weights implies that such multi-scale structures are also crucial for extracting information from high-resolution MedIMG. A portion of the reports (30%) were manually evaluated by trained physicians, while the rest were automatically categorised using deep supervised training models based on attention mechanisms and supplied with test reports. MetaMapLite provided recall and precision, as well as an equivalent F1-score, for primary biomedical text search and medical text examination on many MedIMG databases. In addition to implementing and gathering the requirements for MedIMG, the article explores the quality of medical data obtained by using DL techniques to reach large-scale labelled clinical data, and the significance of real-time efforts in biomedical research that have played an instrumental role in its extramural diffusion and global appeal.
In modern Nigerian supply chain management, the adjudication of intellectual property rights has gained paramount importance. With the rapid advancement of technology, the integration of AI-powered analytics has emerged as a promising resolution for settling disputes and safeguarding intellectual property rights. This chapter highlights the multifaceted role of AI-driven analytics in the adjudication process, exploring its impact on improving efficiency, precision, and impartiality in resolving disputes related to intellectual property within the Nigerian supply chain. The authors suggest that comprehensive ethical standards, data privacy regulations, and transparency protocols should be established by the government, stakeholders, and law enforcement agencies to mitigate potential biases, ensure data integrity, and ensure adherence to ethical and legal norms in AI technologies.
Wireless sensor networking is used extensively in agricultural activities to increase productivity and reduce losses in various ways. The greenhouse simplifies the concept of planting, which has several benefits in agriculture. In agricultural models, soil pH sensors and gas sensors are commonly used, and these sensors are applicable to various Internet of Things (IoT)-integrated agricultural activities. The paper discusses the hardware design and working of the proposed model. In addition, various agricultural models used for evapotranspiration are explained, and key factors such as congestion control are evaluated using the Penman-Monteith equation. This paper focuses on implementing more than two reference parameters, such as evapotranspiration and humidity, under different conditions, which aids in splitting the relationship evenly by the number of sources. Furthermore, the paper presents the implementation in MATLAB, with values adjusted in code. The paper achieves similar variations with the same source value, validating the proposed model's efficiency and fairness. In the optimal region, these schemes also demonstrate higher throughput and lower delay rates. The improved packet propagation through the IoT network is demonstrated using visualization tools, and the feedback is computed to determine the overall access amount (A1+A2) obtained. The experimental results show that the propagation rate is 1.24, which is greater than the link capacity value. The claims are verified by showing improved congestion control, as it outperforms different parameters, considering an additive increase condition of 0.3% and a multiplicative decrease condition of 1.2%.
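For reference, the following Python sketch evaluates the standard FAO-56 form of the Penman-Monteith reference evapotranspiration equation from daily weather inputs; the sample input values are hypothetical, and the paper's MATLAB implementation may differ in detail.

```python
# FAO-56 Penman-Monteith reference evapotranspiration (ET0, mm/day).
import math

def et0_penman_monteith(t_mean, rn, g, u2, rh_mean, altitude_m):
    """t_mean: mean air temperature (C), rn: net radiation (MJ/m2/day),
    g: soil heat flux (MJ/m2/day), u2: wind speed at 2 m (m/s),
    rh_mean: mean relative humidity (%), altitude_m: station elevation (m)."""
    # Saturation vapour pressure (kPa) and its slope at t_mean.
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))
    delta = 4098.0 * es / (t_mean + 237.3) ** 2
    # Actual vapour pressure from mean relative humidity.
    ea = es * rh_mean / 100.0
    # Atmospheric pressure (kPa) and psychrometric constant (kPa/C).
    p = 101.3 * ((293.0 - 0.0065 * altitude_m) / 293.0) ** 5.26
    gamma = 0.000665 * p
    # FAO-56 reference ET for a hypothetical grass surface.
    numerator = (0.408 * delta * (rn - g)
                 + gamma * (900.0 / (t_mean + 273.0)) * u2 * (es - ea))
    return numerator / (delta + gamma * (1.0 + 0.34 * u2))

# Hypothetical greenhouse-site inputs.
print(round(et0_penman_monteith(t_mean=28.0, rn=14.5, g=0.0,
                                u2=2.1, rh_mean=65.0, altitude_m=300.0), 2), "mm/day")
```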
Artificial intelligence and machine learning applications in image processing are examined in this chapter. It covers AI approaches including supervised, unsupervised, reinforcement, and deep learning, as well as genetic algorithms, rule-based systems, expert systems, and fuzzy logic. Machine learning methods such as SVM, decision trees, random forests, K-means clustering, and PCA are reviewed, while CNNs, RNNs, and GANs are utilised for object recognition, classification, and segmentation. The chapter discusses how artificial intelligence and machine learning affect accuracy, efficiency, and decision-making, and stresses the need to choose proper metrics and procedures for assessment and performance analysis. Ethical concerns such as justice, privacy, transparency, and human-AI cooperation are also covered.
Radio frequency identification (RFID) technology and time series forecasting are used to create a dependable IoT-based traffic system that regulates city traffic. The suggested method estimates junction traffic volume over time using LSTM neural networks, while RFID technology improves data collection accuracy and reliability. Data preparation includes outlier identification to remove anomalies. Training the LSTM model on the preprocessed data reveals traffic volume trends, and the trained model then predicts traffic volume from historical data. Prediction performance is quantified by MAE, MAPE, and R². The proposed approach is tested using traffic data from four intersections, and the results indicate that LSTM-based traffic volume estimation works. The optimal design is determined by evaluating system performance for 12 to 168 time steps. The experimental findings suggest that the proposed method can accurately anticipate traffic volume, helping traffic managers enhance flow, while RFID and time series projections bolster the traffic system's reliability.
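A minimal Keras sketch of the LSTM forecasting step, assuming a sliding window of past counts predicts the next interval's volume; the 24-step window, synthetic counts, and layer sizes are illustrative, not the study's tuned configuration (which explores 12 to 168 time steps).

```python
# LSTM sketch: predict the next traffic-volume value from the previous 24 steps.
import numpy as np
from tensorflow.keras import layers, models

def make_windows(series, window=24):
    """Turn a 1-D series into (samples, window, 1) inputs and next-step targets."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., np.newaxis], np.array(y)

# Hypothetical hourly vehicle counts at one junction.
counts = np.abs(np.sin(np.arange(500) * 0.26) * 300 + np.random.randn(500) * 20)
X, y = make_windows(counts, window=24)

model = models.Sequential([
    layers.Input(shape=(24, 1)),
    layers.LSTM(64),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Forecast the next interval from the most recent window.
forecast = model.predict(counts[-24:].reshape(1, 24, 1), verbose=0)[0, 0]
print("next-step forecast:", forecast)
```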
A Lucky Edge Geometric Mean Labeling (LEGML) is a function μ that assigns integers to the vertices of a graph G such that for every edge uv ∈ E(G) the induced function μ*: E(G) → N defined by μ*(uv) = ⌈√(μ(u)μ(v))⌉ or ⌊√(μ(u)μ(v))⌋ satisfies μ*(uv) ≠ μ*(vw) whenever the edges uv and vw share the common vertex v. The smallest number k for which the labels can be drawn from {1, 2, 3, …, k} is the LEGML number of G, denoted hGM(G). A graph that admits an LEGML is a Lucky Edge Geometric Mean Graph (LEGMG). The Graph Theory (GT) concept is fused into the Electric Circuit (EC), and the related complicated problems are transformed into graph model representation problems; as a result, the evaluation method is simplified and optimized. Network analysis means finding the current or voltage in each branch, and one application of GT is the representation of an EC: an edge in the diagram can signify any element of the EC, and each part of the diagram can represent a port or a terminal. In this article, we investigate the Middle Graph (MG), the Total Graph (TG), and the Central Graph (CG) of the star, which admit an LEGML and are therefore LEGMGs. The method also computes the power consumed by all the resistors of the EC via the LEGML.
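The short sketch below makes the definition concrete: given hypothetical vertex labels on a small star graph, it computes the induced edge labels ⌈√(μ(u)μ(v))⌉ and checks the lucky-edge condition that edges sharing a vertex receive different labels. The labels chosen are illustrative and do not reproduce the chapter's results.

```python
# Check the lucky edge geometric mean condition on a labelled graph.
import math
from itertools import combinations

def edge_label(mu_u, mu_v):
    """Induced edge label: ceiling of the geometric mean of the end labels."""
    return math.ceil(math.sqrt(mu_u * mu_v))

def is_lucky_edge_gm(vertex_labels, edges):
    labels = {e: edge_label(vertex_labels[e[0]], vertex_labels[e[1]]) for e in edges}
    for e1, e2 in combinations(edges, 2):
        if set(e1) & set(e2) and labels[e1] == labels[e2]:
            return False, labels       # adjacent edges received the same label
    return True, labels

# Star K_{1,4}: centre c joined to leaves v1..v4, with hypothetical labels.
vertex_labels = {"c": 1, "v1": 2, "v2": 5, "v3": 10, "v4": 17}
edges = [("c", "v1"), ("c", "v2"), ("c", "v3"), ("c", "v4")]

ok, labels = is_lucky_edge_gm(vertex_labels, edges)
print("lucky edge GM labeling:", ok)     # True: edge labels 2, 3, 4, 5 are distinct
print("edge labels:", labels)
```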
The exploration of non-Euclidean geometries in the context of neural network architectures presents a novel avenue for enhancing the processing of complex data structures. This paper introduces the concept of Elliptic Neural Networks (ENN), a framework that integrates the intrinsic properties of elliptic geometry into the fabric of neural computation. Characterized by constant positive curvature and closed geodesic paths, elliptic geometry provides a unique perspective for representing data, especially advantageous for configurations exhibiting cyclic and dense hierarchical interconnections. We begin by delineating the fundamental aspects of elliptic geometry, focusing on its impact on conventional notions of distance and global structural interpretation within data sets. This foundational understanding paves the way for our proposed mathematical schema for ENNs. Within this framework, we articulate the methodologies for embedding data in elliptic spaces and adapt neural network operations by encompassing neuron activation, signal propagation, and learning paradigms to conform to the topological idiosyncrasies of elliptic geometry. In this paper addressing the implementation of ENNs, we discuss the computational challenges and the development of novel algorithms requisite for navigating the elliptic geometric landscape. The prospective applications of ENNs are extensive and diverse, ranging from the enhancement of cyclic pattern recognition in time-series analysis to more sophisticated representations in graph-based data and natural language processing tasks. This theoretical exposition aims to set the groundwork for subsequent empirical research to validate the proposed model and assess its practical performance relative to conventional neural network architectures. By harnessing the distinctive characteristics of elliptic geometry, ENNs mark a significant stride towards enriching the arsenal of machine learning methodologies, potentially leading to more versatile and efficacious computational tools in data analysis and artificial intelligence.
This chapter presents novel techniques in which, by removing nodes that are either overloaded or underloaded and reassigning the total load across the system's nodes, it is possible to maximise resource usage and minimise the time it takes for tasks to be completed. The approaches used for dynamic load balancing are based on the current behaviour of the system rather than its past behaviour. When constructing an algorithm of this kind, the most essential considerations are the estimation and comparison of load, the stability and performance of the system, the interaction between nodes, the amount of work that needs to be transferred, and the choice of nodes.
Blockchain secures data and emulates computerised trading markets. Bitcoin, the 2008 cryptocurrency and transaction network, leads blockchain technology: when buyers or sellers declare payments, the blockchain transfers bitcoin. Blockchain can change finance in the way the Internet did, and the technology may also affect medicine; this fast-growing topic concerns many healthcare providers. Cryptocurrency favours open, irreversible, stable, and collaborative technology over regulated, hidden, proprietary, and changeable systems. Blockchain is used in medical records, security, database management, and biotechnology regulation. It helps biopharmaceutical companies track illicit medications and monitor medicines and counterfeiters. This book chapter presents a comprehensive view of blockchain technology for health-sector decision makers and explores its difficulties and constraints using a suitable markup, along with current and potential medical blockchain applications.
The transition from a physical economy to a digital economy is happening across the globe, and since the worldwide pandemic, this process has accelerated substantially. People are spending more time online than offline, which means that work and daily life are becoming more and more dependent on the internet. Today's internet is frequently the primary means through which millions of people access information and services, engage in social interaction and commerce, and find entertainment. COVID-19 has also altered corporate practices, accelerated the growth of e-commerce, and changed workplace culture. As more people work remotely, businesses are beginning to prioritize virtual environments. The pandemic has thus demonstrated that technology is essential to keeping many jobs in operation. As COVID-19 has become more widespread, the demand for virtual reality has risen and the Metaverse sector is expanding. Blockchain technology, artificial intelligence, and the virtual environment known as the Metaverse all converge here.
Interconnection networks are used in multiprocessor, multicomputer, and distributed shared memory architectures; essentially, they connect many networks simultaneously in each time interval. The objective of this paper is to compare simulation results for certain functionalities of Network on Chip (NoC), a wide area of research covering routing and topological structure. The NoC architecture is a scalable network architecture. Point-to-point interconnection of links, switch functionality, a variety of routing algorithms, and different topologies provide enhanced performance in terms of efficiency, throughput, latency, and channel allocation, compared with previous chip methodologies. This paper focuses on fault-tolerant adaptive routing on an HPC mesh and compares its results with an already implemented 2D mesh topology on parameters such as path diversity, total power consumed, latency, throughput, and fault tolerance.
The propagation of counterfeit technology and deepfake systems threatens authenticity, trust, and data integrity. This research article proposes a novel technique that leverages Blockchain Technology (BT) integrated with improved chameleon hashing to combat these threats. BT integration assures transparency and immutability, while the improved Chameleon Hash Function (CHF) gives secure and efficient authentication. The combined technique focuses on enhancing the prevention and detection of illegal content changes, providing a robust solution for preserving the authenticity and integrity of digital content. The proposed Blockchain-Assisted improved Chameleon Hashing (BAiCH) promises substantial progress in combating deception and fake systems via a cryptographically secure, CHF-based system. The performance of the proposed BAiCH framework is compared with existing state-of-the-art techniques. BAiCH gives a minimal Bit Error Rate (BER) of 0.098 bits and a higher Signal-to-Noise Ratio (SNR) of 30 dB, which is efficient compared with existing techniques.
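For background on the chameleon-hash primitive mentioned above, the following toy sketch implements a classic discrete-log chameleon hash (in the style of Krawczyk-Rabin): the trapdoor holder can compute a collision by adjusting the randomness, which is the property blockchain-assisted schemes exploit for controlled redaction. The tiny parameters and values are for illustration only and are not the paper's improved CHF.

```python
# Toy discrete-log chameleon hash: CH(m, r) = g^m * y^r mod p, with y = g^x.
# The trapdoor x allows finding (m', r') that collide with (m, r).

p, q, g = 23, 11, 4          # toy group: g has prime order q in Z_p*
x = 7                        # trapdoor (secret key)
y = pow(g, x, p)             # public key

def chameleon_hash(m, r):
    return (pow(g, m % q, p) * pow(y, r % q, p)) % p

def find_collision(m, r, m_new):
    """With the trapdoor, pick r_new so that CH(m_new, r_new) == CH(m, r)."""
    x_inv = pow(x, -1, q)                      # modular inverse of x mod q
    return (r + (m - m_new) * x_inv) % q

m, r = 9, 3
m_new = 5
r_new = find_collision(m, r, m_new)

print(chameleon_hash(m, r), chameleon_hash(m_new, r_new))   # identical digests
assert chameleon_hash(m, r) == chameleon_hash(m_new, r_new)
```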
This paper introduces a unified, data-driven framework for the end-to-end synthesis and security verification of lattice-based cryptosystems. Beginning with a curated benchmark of real-world implementations (e.g., Kyber, NewHope, FrodoKEM), we extract normalized performance and security features to train a machine-learning model that predicts key-generation and encapsulation latencies with high fidelity (MAE < 5 μs, R² ≈ 0.91). We then enumerate parameter candidates within NIST's 128-bit security envelope, rank them by a composite score of latency, key size, and failure rate, and select the top proposals. Each candidate undergoes rigorous statistical testing (Kolmogorov-Smirnov distribution checks and Test Vector Leakage Assessment) and interactive-theorem-prover proofs of correctness and IND-CPA security, completing in under two minutes per template. Finally, the framework auto-generates PQClean-compliant C stubs with embedded provenance and CI scripts. Experimental results demonstrate that our pipeline yields deployment-ready schemes matching or exceeding manual baselines in performance while carrying machine-verified security guarantees.
Cloud computing is a service whose growth in the IT industry has increased rapidly in recent years. Privacy and security are the main challenges for cloud users as well as providers. The principal security issues in cloud computing include virtualization security, access control, authentication, application security, and availability of services. The most important issue raised in this era of the cloud is authentication, in which the identity of a user requesting services is checked. Hence, this paper discusses methods of user authentication and the challenges faced by this technique.
This research introduces a novel mathematical model designed to integrate Natural Language Processing (NLP), functional Magnetic Resonance Imaging (fMRI), and Electroencephalography (EEG) data, aiming to decode the complex neural mechanisms of semantic processing in the human brain. By leveraging the complementary strengths of each modality—NLP’s linguistic analysis, fMRI’s spatial resolution, and EEG’s temporal precision—the model provides a groundbreaking approach to understanding how semantic information is processed across different brain regions and over time. The core of the proposed model is a dynamic, multi-layered framework that utilizes advanced statistical methods and machine learning algorithms. At its foundation, the model employs vector space representations from NLP to quantify semantic similarity and contextuality in language. These representations are then mapped onto neural activation patterns captured by fMRI and EEG, using a series of transformation matrices that are optimized through machine learning techniques. The model uniquely incorporates time-series analysis to account for the temporal dynamics of EEG data, while spatial patterns from fMRI data are analyzed through convolutional neural networks, ensuring a comprehensive integration of multimodal neuroimaging data. Key to proposed approach is the application of Bayesian inference methods to fuse these diverse data sources, allowing for the probabilistic modeling of semantic processing pathways in the brain. This enables the prediction of neural responses to linguistic stimuli with unprecedented accuracy and detail. Theoretical implications of our model suggest significant advances in understanding the neural basis of language comprehension, offering new insights into the dynamic interplay between linguistic structures and neural processes.
The web is being used more and more by users of mobile devices. In addition, it is increasingly possible to track a user's location, which provides immense opportunities in geospatial data and its management. Because location information is used in services for each mobile device, the large volume of spatial data makes it difficult to process spatial queries efficiently; therefore, we need a lightweight and scalable approach for processing large amounts of data stored in distributed file systems. For the most part, social network services (SNSs) focus on connecting the user account with location information, such as check-in services, which helps them collect information about user activities and location ratings but also increases the data load on their servers. In this article we propose an indexing technique combined with efficient processing of Boolean top-k spatial queries, where location data is compressed to save space and the Boolean query filters results so that unrelated data is not processed, which saves space and speeds up query processing.
Significant implications for business are brought about by the advent of the 6G era, which is characterized by increased connectivity and extremely low latency. This chapter investigates the transformative potential of 6G technology and highlights the impact that it has on the operations of businesses. Nevertheless, the promises of 6G are accompanied by an increase in the number of security challenges, which calls for the incorporation of stringent security measures into business strategies. In the era of 6G, it is absolutely necessary to recognize new dangers that are specific to 6G networks to comprehend the significance of data protection and to reduce the risks that are associated with internet of things devices and edge computing.
Irretrievable loss of vision is the predominant result of glaucoma in the retina. Recently, multiple approaches have paid attention to the automatic detection of glaucoma in fundus images. Because of the interlacing of blood vessels and the herculean task involved in glaucoma detection, precisely locating the affected site of the optic disc, whether the cup is small or large, is deemed challenging. A Spatially Based Ellipse Fitting Curve Model (SBEFCM) classification is suggested, based on an ensemble, for reliable diagnosis of glaucoma from the Optic Cup (OC) and Optic Disc (OD) boundaries. This research deploys Ensemble Convolutional Neural Network (CNN) classification for classifying glaucoma or Diabetic Retinopathy (DR). The boundary between the OC and the OD is detected by the SBEFCM, a new weighted ellipse fitting model that enhances and extends the multi-ellipse fitting technique. The input fundus image is pre-processed and the blood vessels are segmented to avoid interference from surrounding tissues and vessels. The determination of the OC and OD boundary, which characterizes many output factors for glaucoma detection, is developed with Ensemble CNN classification, which reports sensitivity, specificity, precision, and Area Under the receiver operating characteristic Curve (AUC) values accurately via the innovative SBEFCM. By comparison, the proposed Ensemble CNN significantly outperforms the current methods.
Edge-Based Online Learning (EBOL), a technique that combines a practical, hands-on approach with the convenience of Online Learning (OL), is growing in popularity. However, accurately monitoring student engagement to enhance teaching methodologies and learning outcomes is one of the difficulties of OL. To address this challenge, this paper puts forth an Edge-Based Student Attentiveness Analysis System (EBSAAS), which uses a Face Detection (FD) algorithm and a Deep Learning (DL) model known as DLIP to extract eye and mouth landmark features. Images of the eye and mouth are used to extract landmarks with DLIP (Deep Learning Image Processing); pre-trained Facial Landmark Localization (FLL) models are a popular DL approach for facial landmark recognition. The Visual Geometry Group-19 (VGG-19) learning model then uses these features to classify the student's level of attentiveness as fatigued or focused. Compared with a server-based model, the proposed model is designed to execute on an Edge Device (ED), enabling a swifter and more effective analysis. The proposed system achieves 95.29% accuracy, which is 2.11% higher than existing model 1 and 4.41% higher than existing model 2. The findings show how successful the proposed method is at helping teachers adapt their teaching methodologies to engage students better and enhance learning outcomes.
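As one concrete example of how eye landmarks can be turned into an attentiveness cue, the sketch below computes the widely used eye aspect ratio (EAR) from six eye landmarks; a low EAR over consecutive frames suggests closed or drowsy eyes. This is a common landmark-based heuristic offered for illustration, not necessarily the paper's DLIP/VGG-19 pipeline.

```python
# Eye aspect ratio (EAR) from six eye landmarks p1..p6 given as (x, y) pixels.
# EAR = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|); it drops sharply when the eye closes.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of shape (6, 2) ordered p1..p6 around the eye contour."""
    eye = np.asarray(eye, dtype=float)
    vertical_1 = np.linalg.norm(eye[1] - eye[5])   # |p2 - p6|
    vertical_2 = np.linalg.norm(eye[2] - eye[4])   # |p3 - p5|
    horizontal = np.linalg.norm(eye[0] - eye[3])   # |p1 - p4|
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

# Hypothetical landmark coordinates for an open and a nearly closed eye.
open_eye = [(10, 20), (14, 16), (18, 16), (22, 20), (18, 24), (14, 24)]
closed_eye = [(10, 20), (14, 19), (18, 19), (22, 20), (18, 21), (14, 21)]

print("open eye EAR:  ", round(eye_aspect_ratio(open_eye), 3))    # ~0.67
print("closed eye EAR:", round(eye_aspect_ratio(closed_eye), 3))  # ~0.17
```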
Breast cancer is a serious disease, and early detection is therefore crucial for successful treatment and patient management. Unfortunately, the number of breast cancer cases is increasing globally due to various multifaceted factors, and it is currently one of the leading causes of cancer deaths in women worldwide. Cancerous cells in the breast can form lumps that impact the patient's health, and even seemingly harmless tumours can be fatal if not diagnosed early enough. Fortunately, artificial intelligence techniques have proven effective in detecting diseases, and doctors can therefore use them to diagnose breast cancer early, effectively, and accurately. This paper explores the use of genetic algorithms, ant colony optimization, and Hybrid Hopfield Neural Network-E2SAT (HHNN-E2SAT) models for breast cancer prediction. The HHNN-E2SAT models outperform standard algorithms such as Random Forests and Support Vector Machines, achieving over 98% on all performance metrics (accuracy, F1-score, sensitivity, specificity, and precision).
The dawn of the quantum era marks a pivotal moment in the evolution of digital communications, bringing forth both unparalleled potential and unprecedented challenges. This paper delves into the realm of Quantum Graph Theoretic Encryption and Dynamic Key Evolution, a frontier in the quest for impenetrable communication networks in the quantum computing age. Quantum computing, with its ability to perform complex calculations at speeds unattainable by traditional computers, offers a beacon of hope and a source of peril. The very principles that endow quantum computers with exceptional power also render traditional cryptographic methods vulnerable. This dichotomy underscores the urgency for innovative encryption techniques that can withstand the onslaught of quantum computational capabilities. When merged with the principles of quantum mechanics, graph theory opens a new dimension of cryptographic possibilities. However, the dynamic nature of digital communications necessitates more than just robust encryption; it requires agility. This is where the concept of Dynamic Key Evolution comes into play. This paper is structured to guide the reader through this novel integration of quantum mechanics, graph theory, and dynamic cryptography. We begin by laying a foundational understanding of quantum computing, followed by an exploration of graph theory in the context of encryption. We then introduce our proposed method of Quantum Graph Theoretic Encryption, highlighting its advantages and potential applications. Subsequently, we delve into the concept of Dynamic Key Evolution, illustrating its significance in maintaining the integrity of quantum encryption in a rapidly changing digital landscape.
Background and Objective: Spatial queries are frequently used in Hadoop for processing large volumes of data. However, the vast size of spatial information makes it difficult to process spatial queries efficiently, so the Hadoop system is used to process the big data. Boolean queries and geometry Boolean spatial data are used for query optimization with the Hadoop system. In this paper, a lightweight and adaptable spatial data index for big data is discussed, which can be processed in Hadoop frameworks. Results demonstrate the proficiency and adequacy of the spatial indexing system for various spatial queries. Methods: Several approaches are used to develop an efficient system, including an efficient and scalable method for processing top-k spatial Boolean queries and efficient query processing in geographic web search engines, where geographic search engine query processing combines text and spatial data processing techniques with top-k spatial preference queries. In this work, all of these methods are implemented for comparative analysis. Results and Discussion: The execution of the algorithms yields results that show the difference in performance over different data types. Three different graphs are presented based on different data inputs, indexing, and data types. The results show that as the number of rows to be executed increases, the performance of geohash decreases, while the crucial point of change in execution performance is not visible due to a sudden spike in the number of rows returned. Conclusion: Query processing in geographic web search engines is discussed. This work presents a general framework for ranking search results based on a combination of textual and spatial criteria, and proposes several algorithms for efficiently executing ranked queries on very large collections. The proposed algorithms are integrated into an existing high-performance search engine query processor and evaluated on a large data set with realistic geographic queries. The results show that in many cases geographic query processing can be performed at about the same level of efficiency as text-only queries.
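Since the evaluation above compares against geohash-based indexing, the following self-contained sketch shows how a geohash cell identifier is derived by interleaving longitude and latitude bisection bits and encoding them in base-32; it is a textbook version of the algorithm, not the paper's index.

```python
# Standard geohash encoding: interleave lon/lat bisection bits, emit base-32 chars.
_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, precision=8):
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    bits, use_lon = [], True          # geohash starts with a longitude bit
    while len(bits) < precision * 5:
        if use_lon:
            mid = (lon_lo + lon_hi) / 2
            bits.append(1 if lon >= mid else 0)
            lon_lo, lon_hi = (mid, lon_hi) if lon >= mid else (lon_lo, mid)
        else:
            mid = (lat_lo + lat_hi) / 2
            bits.append(1 if lat >= mid else 0)
            lat_lo, lat_hi = (mid, lat_hi) if lat >= mid else (lat_lo, mid)
        use_lon = not use_lon
    chars = []
    for i in range(0, len(bits), 5):
        value = 0
        for b in bits[i:i + 5]:
            value = (value << 1) | b
        chars.append(_BASE32[value])
    return "".join(chars)

# Commonly cited example: (57.64911, 10.40744) encodes to "u4pruydqqvj".
print(geohash_encode(57.64911, 10.40744, precision=11))
```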
Remote areas benefit from lower costs for transporting enormous amounts of data between a source node and multiple networking devices. Noise and other constraints necessitated a 64 kb/s broadcaster, while bit defect rates, abnormalities, and frequent retransmission prevented data movement; this could greatly reduce channel usage. The maximum transmission unit distance was 64 B, and data packet size limits exacerbated the issues. Massive data traffic, connection imprecision, and changing topology affect the network structure, making data dissemination from source nodes to all devices difficult. This book chapter suggests using advanced throughput-optimal broadcast in point-to-multipoint wireless networks to improve wireless link precision, applying the Mayfly optimization method, a recent swarm-intelligence soft computing technique, to improve the geometrical configuration of interfering terminals and the forecasted per-flow throughput.
The forthcoming transition to 6G brings with it the promise of holographic connectivity, which has the potential to revolutionize communication in a variety of fields, including marketing, education, medicine, business, and entertainment, among others. Users are able to interact with one another through the use of high-quality 3D representations while overcoming geographical barriers thanks to this technology. On the other hand, in order to successfully navigate this transition, effective leadership is required. This leadership must be able to anticipate technological advancements, encourage innovation, and strike a balance between risks and opportunities within the 6G ecosystem.
Cloud computing involves virtualization, distributed computing, networking, software, and web services. A cloud has clients, datacenters, and servers, and offers fault tolerance, high availability, scalability, flexibility, little user overhead, low cost of ownership, on-demand services, and more. These requirements call for a powerful load balancing algorithm that considers memory, CPU, latency, or network load. Load balancing distributes demand to avoid overloading distributed system nodes and optimises resource use and job response time; load balancers ensure that processors and network nodes carry similar workloads. Methods may be initiated by the sender, by the receiver, or symmetrically. Using the divisible load scheduling theorem, a load balancing method is developed to optimise throughput and latency for clouds of different sizes.
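To illustrate the divisible load scheduling idea mentioned above in its simplest form, the sketch below splits a divisible job among workers so that all of them finish at the same time when communication cost is ignored: each fraction is proportional to the worker's speed. This is a simplified special case of the theorem, with hypothetical worker speeds.

```python
# Simplest divisible-load split: equal finish times with negligible communication.
# If worker i processes one unit of load in w_i seconds, give it a fraction
# alpha_i proportional to 1/w_i so that alpha_i * w_i is the same for all i.

def divisible_load_fractions(unit_times):
    inverse_speeds = [1.0 / w for w in unit_times]
    total = sum(inverse_speeds)
    return [s / total for s in inverse_speeds]

# Hypothetical per-unit processing times (seconds) of four cloud nodes.
unit_times = [1.0, 2.0, 4.0, 4.0]
fractions = divisible_load_fractions(unit_times)

for i, (w, a) in enumerate(zip(unit_times, fractions)):
    print(f"node {i}: fraction={a:.3f}, finish time={a * w:.3f}")
# All finish times are equal, which is the optimality condition of the theorem.
```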
Against the background of the contemporary Supply Chain (SC) environment, the motivation is the cyber security, openness, and optimization of Cyber-Physical Systems (CPS). This paper aims to develop a Secured Supply Chain Management System (SSMS) that combines blockchain with Chaotic Secure Hash Algorithm-256 encryption (CPS + BC + Chaotic SHA-256). BC is a distributed and secure database that can be used to track goods in real time with digital certification. Chaotic SHA-256 enhances security by integrating chaotic systems into the SHA-256 Hash Function (HF), adding layers of unpredictability and resistance against attacks. Besides improving the quality of transaction records, this method also provides a strong defence against data theft and fraud. The developed CPS + BC + Chaotic SHA-256 has the potential to overcome most of the problems that affect the SC, such as counterfeits and organizational inefficiencies, by utilizing Smart Contracts. The combination of BC and the Chaotic SHA-256 (C-SHA-256) algorithm generally enhances security and SC stability.
The increasing connectivity between devices and systems is creating information at an exponential rate, making it increasingly difficult to find a workable solution for processing it. This calls for a platform for advanced data processing, including upgraded hardware and software for big data. To improve the efficiency of the Hadoop cluster in large-scale data collection and analysis, we propose an algorithmic system that meets the need to protect discriminated data in Hadoop clusters while improving performance and efficiency. The paper aims to establish the effectiveness of the new algorithm, compare it with alternatives, and identify the best solution for improving the big data scenario through a competitive approach. Hadoop's MapReduce techniques help to maintain a close watch on the underlying or discriminatory Hadoop clusters, with insights into the results as expected.
A set of mobile devices that employs wireless transmission for communication is termed a Mobile Ad hoc Network (MANET). Offering better communication services among users without a centralized organization is the primary objective of the MANET. Due to the features of MANETs, End-to-End Delay (EED) can directly affect the Quality of Service (QoS); hence, the implementation of resource management becomes an essential issue in MANETs. This paper focuses on efficient Resource Allocation (RA) for many types of Traffic Flows (TF) in MANETs. In MANET environments, the main objective of RA is to distribute the consistently available resources among the terminals so as to address the service requirements of the users. The three traffic categories improve performance metrics under varying transmission rates and simulation times. To solve this problem, the proposed work is divided into Queue Management (QM), Admission Control (AC), and RA. For effective QM, this paper develops a QM model for elastic (EL) and inelastic (IEL) traffic flows; for effective AC, it presents an AC mechanism for multiple TF; and for effective RA, it presents Resource Allocation Using Tokens (RAUT) for various priority TF. Here, nodes have three cycles: Non-Critical Section (NCS), Entry Section (ES), and Critical Section (CS). When a node requires any resources, it sends a Resource Request Message (RRM) to the ES. Elastic and inelastic TF priority is determined using Fuzzy Logic (FL), and the token holder selects the node with high priority from the inelastic queue when allocating resources. Simulations using Network Simulator-2 (NS-2) demonstrate that the proposed design increases the Packet Delivery Ratio (PDR), decreases the Packet Loss Ratio (PLR), improves fairness, and reduces the EED.
An image plays a vital role in today's environment. An image is a visual representation of a scene that can later be used for recollection, created by recording the scene through an optical device such as a camera or mobile phone. The image fusion process integrates the relevant data of several images into a single image. Image fusion applications are wide-ranging, and so are the fusion techniques; in general, picture fusion techniques are characterised as pixel-, feature-, and decision-based. The main thrust of this study is the application and comparison of two approaches to image fusion: PCA (principal component analysis) and CNN (convolutional neural network). The study implemented a practical approach in MATLAB. The result of the study is that CNN is much more favourable in terms of image quality and clarity, but less favourable in terms of time and cost.
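For reference, the sketch below shows the standard PCA pixel-level fusion rule in Python/NumPy (the study itself used MATLAB): the weights for the two source images come from the principal eigenvector of their joint covariance, and the fused image is the weighted sum. The random source images are placeholders.

```python
# PCA-based pixel-level fusion of two registered grayscale images.
import numpy as np

def pca_fuse(img1, img2):
    # Stack the two images as columns of a (pixels x 2) data matrix.
    data = np.stack([img1.ravel(), img2.ravel()], axis=1).astype(float)
    cov = np.cov(data, rowvar=False)                 # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    principal = eigvecs[:, np.argmax(eigvals)]       # eigenvector of largest eigenvalue
    weights = principal / principal.sum()            # normalise so w1 + w2 = 1
    fused = weights[0] * img1 + weights[1] * img2
    return fused, weights

# Placeholder source images (e.g. a multi-focus pair of the same scene).
img_a = np.random.rand(128, 128)
img_b = np.random.rand(128, 128)

fused, w = pca_fuse(img_a, img_b)
print("fusion weights:", np.round(w, 3), "fused shape:", fused.shape)
```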
Mental well-being is often seen as a fragile state, akin to a tightrope walk of balance. In our paper, “Cognitive Equilibrium and Instability: Lyapunov Stability Analysis in Mental Health Research,” we explore the nuanced dance between consistency and change in the realm of cognitive function. Drawing on Lyapunov Stability principles from the study of dynamic systems, we offer new perspectives on mental health mechanics. We propose that the mind is in a state of perpetual flux, constantly adjusting to a spectrum of influences to maintain cognitive balance. Our work involves identifying these points of balance and assessing their robustness using Lyapunov Stability Analysis. We examine how various factors, such as stress, environmental changes, and biological variations, can disrupt this balance, possibly leading to states of well-being or illness. Through our models, we illuminate the evolution and path of mental health conditions. Our methodology is a synthesis of theoretical models and real-world data, including neuroimaging, clinical, and psychological evaluations. Case studies in our paper demonstrate the application of our models to conditions like anxiety, depression, and bipolar disorder, revealing the fluid nature of these ailments. This work goes on to discuss the practical implications of our findings in the clinical setting. By identifying pivotal points of potential instability, our model serves as a tool for early identification of mental health concerns, guiding the creation of specific therapeutic interventions. Additionally, our work supports a tailored approach to mental health care, appreciating the individual cognitive patterns unique to each person. Our paper contributes to the burgeoning field of computational psychiatry, blending mathematical analysis with a perspective centered on the human experience. It sets the stage for future interdisciplinary research aimed at decoding the intricacies of the human psyche.
High-performance computing (HPC) has created a new approach to science: modeling is now a viable and respected alternative to the more traditional experimental and theoretical approaches. High performance is also a key issue in data mining and image rendering. Traditional high-performance clusters have proved their worth in a variety of uses, from predicting the weather to industrial design, from molecular dynamics to astronomical modeling. A multicomputer configuration, or cluster, is a group of computers that work together. A cluster has three basic elements: a collection of individual computers, a network connecting those computers, and software that enables a computer to share work among the other computers via the network. Clusters are also playing a greater role in business. Advances in clustering technology have led to high-availability and load-balancing clusters, and clustering is now used for mission-critical applications such as web and FTP servers. The problem with sorting huge volumes of data is that it takes a great deal of time and, consequently, a large amount of money; this money is wasted on the sorting itself rather than on any useful work, so it is highly desirable to make this time as short as possible. We therefore develop sorting algorithms that are faster and more accurate than before, and the approach we follow is high-performance computing. For permanent clusters there are, for lack of a better name, cluster kits: software packages that automate the installation process. A cluster kit provides all the software you are likely to need in a single distribution, and cluster kits tend to be very complete. For example, Open Source Cluster Application Resources (OSCAR) is a software package designed to simplify cluster installation. A collection of open-source cluster software, OSCAR includes everything you are likely to need for a dedicated, high-performance cluster and takes you completely through the installation of your cluster. In this paper, with the help of the OSCAR cluster kit, we attempt to set up a high-performance computational cluster with special attention to applications such as integration and sorting. Ease of use is achieved by globally and transparently managing cluster resources. The cluster computing approach is nowadays an ordinary configuration found in many organizations to meet the requirements of high-performance computing.
Intrusion Detection Systems (IDS) are essential for securing computer networks against malicious activities. However, the rise of adversarial attacks seriously threatens the robustness and efficacy of IDS models. With the increasing prevalence of adversarial attacks on IDS, it has become crucial to develop robust defence mechanisms that ensure the integrity and reliability of these systems. This paper presents a novel approach that combines Particle Swarm Optimization (PSO), Gradient Boosting Machines (GBM), genetic operators, and Deep Neural Networks (DNN) with defence mechanisms to improve the resilience of IDS against adversarial attacks. The proposed approach starts with a feature engineering stage, where PSO and GBM are utilised to select and optimise the most informative features from the input dataset. Genetic operators are then employed to refine the feature selection process further, ensuring the creation of robust and discriminative feature subsets. In the subsequent stage, a deep neural network model is constructed with defence mechanisms, including adversarial training, input perturbation, and ensemble learning. These defence mechanisms work synergistically to monitor and improve the IDS's capacity to detect and classify normal and adversarial network traffic accurately. The well-known NSL-KDD dataset is utilised to assess how successful the suggested method is. Experimental findings show that the integrated framework outperforms current techniques. Additionally, the system shows increased resistance to various adversarial techniques, such as evasion, poisoning, and adversarial samples. Overall, this study bridges the gap between adversarial attacks and intrusion detection, offering a powerful defence framework that can be integrated into existing IDS architectures to mitigate the impact of adversarial threats and ensure the integrity and reliability of network security systems.
The integration of Blockchain (BC) technology with the Merkle-Damgård Structure (M-DS) presents a practical model for securing Electronic Health Records (EHR). This research study examines how these cryptographic mechanisms can be applied to address confidentiality, integrity, and availability concerns when working with sensitive clinical data. The secure EHR is built on the M-DS, which is notable for being capable of generating Collision-Resistant Hash Functions (CRHF). Introducing BC-based technology into the EHR environment enhances access control, provides immutability, and simplifies decentralized access. In the proposed method, a decentralized BC system stores the hashed values of patient EHR generated using the M-DS. Because smart contracts manage access, only authorized persons can analyze and alter clinical information. The model's cryptographic strategy emphasizes openness and accountability by securing data confidentiality through encryption and by generating a verification record of every transaction. This work presents an exhaustive method to secure EHR in a clinical system that is becoming more computer-aided and data-driven, supported by a complete assessment of the mathematical models that enable the M-DS and BC technology within the overall EHR security system.
Potential risks and vulnerabilities in Social Networks (SN) that may threaten the confidentiality, accuracy, and accessibility of data provided by users have been identified as safety risks. These include unauthorized use of the system, data breaches, privacy violations, cyber-attacks, and other malicious behaviors that negatively impact the SN's reliability and security. The study focuses on innovative techniques that use scientifically enhanced metaheuristic algorithms to address evolving security and privacy issues in SNs. To improve the security of online social networking data encryption, the researchers recommend the Mathematically Modified Cuckoo Search Optimization Algorithm (MMCSOA). Adapting the Cuckoo Search method enables enhanced tuning of elliptic curve parameters, which in turn provides an encrypted environment for multi-party computing. Users' confidentiality preferences are considered when applying a fuzzy logic (FL) decision matrix to select individuals. Data security is improved, and users receive additional authority over data shared with MMCSOA. Research findings show that MMCSOA performs better than other methods in securing users' privacy.
Watermarking is an effective way of transferring hidden data from one place to another or of proving ownership of digital content. The hidden data can be text, audio, images, GIFs, etc., and is embedded in a cover object, usually an image or a video sequence. Watermarking systems usually rely on their hiddenness as the primary security measure; once it is established that the cover object contains some hidden data, it is generally possible to recover the hidden information. The author proposes an ingenious technique for DICOM color image watermarking that combines Multi-Quadrant LSB with truly random mixed-key cryptography. The system provides a high level of security through the watermarking technique itself, as it breaks the cover image into up to four quadrants and performs LSB replacement of two bytes in each quadrant. Both the bit sequence and the quadrant sequence can be randomized to increase randomness, and the use of truly random mixed-key cryptography, with a pre-shared, variable-length, truly random private key, turns the hidden data into noise that can only be recovered with the private key. Thus, the proposed technique greatly diminishes the probability of recovering the hidden data, even if it is detected that something is hidden in the cover object.
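The following is a simplified illustration of the quadrant-wise LSB embedding idea, assuming a NumPy image array and a key-seeded quadrant order; it omits the paper's mixed-key cryptography stage and the exact two-byte-per-quadrant layout.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, payload: bytes, key: int) -> np.ndarray:
    """Hide payload bits in pixel LSBs, visiting the four quadrants in a key-shuffled order."""
    img = cover.copy()
    h, w = img.shape[:2]
    quadrants = [img[:h//2, :w//2], img[:h//2, w//2:],
                 img[h//2:, :w//2], img[h//2:, w//2:]]
    order = np.random.default_rng(key).permutation(4)        # key-driven quadrant order
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    i = 0
    for q in order:
        block = quadrants[q]                                  # view into img
        flat = block.copy().reshape(-1)                       # edit a copy, write back below
        n = min(len(bits) - i, flat.size)
        if n <= 0:
            break
        flat[:n] = (flat[:n] & 0xFE) | bits[i:i + n]          # overwrite least significant bits
        block[...] = flat.reshape(block.shape)                # write modified pixels back
        i += n
    if i < len(bits):
        raise ValueError("cover image too small for payload")
    return img

if __name__ == "__main__":
    cover = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
    stego = embed_lsb(cover, b"patient-id:0001", key=42)
    print("pixels changed:", int(np.sum(cover != stego)))
```

Extraction would repeat the same key-seeded quadrant walk and read the LSBs back out, then decrypt with the pre-shared key in the full scheme.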
Over the past decade, person re-identification has been a hot research area and an active component of automated video surveillance. Besides monitoring persons of interest and human-machine interaction, person re-identification is also used in broadcast media and in forensic examination. Images and videos captured from different cameras suffer from low quality and resolution issues, which makes it difficult to extract useful information. Deep learning plays a vital role in person re-identification by retrieving and determining similarity among the features extracted from data. With advancements in deep learning techniques, favorable performance has been obtained while handling challenges arising from diverse viewpoints, blur, image resolutions, camera settings, occlusion of the object of interest, and irregular backgrounds across camera views. This paper presents the current challenges that are overcome by the use of deep learning architectures.
Applications based on Wireless Sensor Networks (WSN) have proved quite useful in monitoring a particular geographic area of interest. Relevant geometries of the surrounding environment are essential to establish a successful WSN topology, but constructing a localization algorithm that tracks the exact location of Sensor Nodes (SN) in a WSN is always a challenging task. In this research paper, a Distance Matrix and Markov Chain (DM-MC) model is presented as a node localization technique in which a Distance Matrix and an Estimation Matrix are used to identify the position of each node. The method further employs a Markov Chain Model (MCM) for energy optimization and interference reduction, using transition probabilities to sustain higher-energy nodes. Experiments were performed against two proven models, the Distance Vector-Hop Algorithm (DV-HopA) and the Crow Search Algorithm (CSA), and the results demonstrate that the proposed algorithm improves performance while using fewer network resources than the existing models. The proposed DM-MC model decreases energy use by 31% and 25% compared to the existing DV-Hop and CSA methods, respectively, and outperforms both existing models regarding localization accuracy and Energy Consumption (EC). These results add to the credibility of the proposed DM-MC model as a better choice for node localization when establishing a WSN framework.
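As an illustration of the energy-aware Markov step (not the paper's exact DM-MC formulation), the sketch below builds a row-stochastic transition matrix whose probabilities favour neighbours with higher residual energy; the connectivity and energy values are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(5)
n_nodes = 6
residual_energy = rng.uniform(0.2, 1.0, n_nodes)          # assumed residual energy per node
adjacency = rng.random((n_nodes, n_nodes)) < 0.5           # assumed connectivity graph
np.fill_diagonal(adjacency, False)

# Transition probabilities proportional to neighbour energy: forwarding decisions
# statistically spare low-energy nodes, which is the intuition behind the MCM step.
transition = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    neighbours = np.where(adjacency[i])[0]
    if neighbours.size:
        weights = residual_energy[neighbours]
        transition[i, neighbours] = weights / weights.sum()   # row sums to 1

next_hop = rng.choice(n_nodes, p=transition[0]) if transition[0].sum() > 0 else None
print("transition row of node 0:", np.round(transition[0], 2), "-> next hop:", next_hop)
```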
Deep Learning (DL) is used in the current research to forecast Customer Lifetime Value (CLV) and optimise CRM. ML models can be adapted and used alongside CRM methods to recognise anomalies in customer behaviour amid numerous customer relationships, heterogeneous statistics, and time-sensitive data. This technique allows companies to retain customers and improve profit, advertising, and confidence across income segments. First, the study recommends a multi-output Deep Neural Network (DNN) model for predicting CLV. The suggested framework was measured against multi-output Decision Tree (DT) and multi-output Random Forest (RF) techniques on the same dataset. The study presents a multilayer supervised DL-based CLV prediction technique that enhances features on limited data and outperforms the baselines in marketing effectiveness and client lifetime value prediction. The research explores using CLV prediction in personalized customer experiences, highlighting its potential to enhance CRM strategies by incorporating dynamic variables and current data for improved accuracy. The Deep Neural Network model has an acceptable error rate, with a MAPE of 10.3%, an MSE of 11.6%, and an RMSE of 12.29%, demonstrating reasonable overall error rates.
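A minimal sketch of this kind of multi-output comparison, using scikit-learn stand-ins (an MLP in place of the DNN, plus multi-output Decision Tree and Random Forest) and the reported error metrics; the data here is synthetic, whereas the study uses real customer records and a deeper network.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor

# Synthetic stand-in for customer features and two CLV-related targets.
X, y = make_regression(n_samples=2000, n_features=12, n_targets=2, noise=5.0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "multi-output DNN (MLP)": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=42),
    "multi-output Decision Tree": DecisionTreeRegressor(random_state=42),
    "multi-output Random Forest": RandomForestRegressor(n_estimators=200, random_state=42),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    mse = mean_squared_error(y_test, pred)
    rmse = np.sqrt(mse)
    # Note: MAPE is unstable when targets are near zero, as they can be in synthetic data.
    mape = mean_absolute_percentage_error(y_test, pred)
    print(f"{name:28s}  MAPE={mape:.3f}  MSE={mse:.2f}  RMSE={rmse:.2f}")
```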
Data exchange within a Wireless Sensor Network (WSN) needs to adhere to high security protocols. Dynamic routing algorithms select the shortest possible route for the transfer of data between two interconnected packets, which increases privacy without requiring additional data to be transmitted. Applications involving remote sensing of environmental conditions, such as WSN, are becoming more prevalent. Highly dense WSNs are evolving into a vital sensor framework enabling the Internet of Things (IoT). WSNs have the potential to produce a great deal of raw data, and in order to guarantee that data communication is performed successfully, they need network architectures with reliable links between Sensor Nodes (SN). Data aggregation is the term WSN uses for a data fusion/compression technique that minimizes the volume of data sent over the network's bandwidth. The iteration of Fixed Points (FP) of contractions is examined in the present paper in a Modular Metric Space (M-MS) context. The main objective of this research is to arrive at an algorithm for secure dynamic data transmission routing in WSN by integrating the demonstrated results of the Functional Equation (FE) with randomization. The implementation is a tool for evaluating and modelling a network's architecture in which sensors can be aggregated with a high level of security and least risk in a more secure area (Sensor Field), thereby improving the accuracy of the findings collected by the entire network.
Smart cities are novel and challenging to study. Fires can kill people and destroy resources in cities near forests, farms, and open spaces. Sensor networks and UAVs are used to construct an early fire detection system to reduce fires. The suggested method uses sensors and IoT applications to monitor the surroundings. The suggested fire detection system includes UAVs, wireless sensors, and cloud computing. Image processing improves fire detection in the proposed system, and rule-based checks further improve genuine detection. The simulation findings of the suggested system are compared with many current fire detection technologies. The approach improves forest fire detection accuracy from 89% to 97%.
In the rapidly evolving landscape of digital communication, the security of image data is of paramount importance. This paper introduces a novel encryption framework that integrates the principles of quantum chaos and nonlinear dynamics to fortify image encryption processes. Recognizing the vulnerabilities inherent in conventional encryption methods, our research seeks to harness the unpredictable nature of quantum chaos alongside the complex behavior of nonlinear dynamical systems to generate robust encryption keys that are highly sensitive to initial conditions. Utilizing a mathematical model that combines the unpredictability of quantum chaotic systems with the intricate patterns of nonlinear dynamics, we develop an encryption algorithm that demonstrates superior resistance to both brute-force attacks and statistical analysis. The algorithm’s efficacy is evaluated through a series of rigorous tests, including entropy analysis, correlation coefficient assessment, and resistance to known-plaintext and chosen-plaintext attacks.
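As a classical stand-in for the chaos-based keystream idea (the paper's construction is quantum-chaos based and far more elaborate), the sketch below XORs an image with a logistic-map keystream and shows how a key perturbed at the tenth decimal place fails to decrypt, illustrating sensitivity to initial conditions.

```python
import numpy as np

def logistic_keystream(length: int, x0: float, r: float = 3.99) -> np.ndarray:
    """Classical logistic-map keystream (NOT the paper's quantum-chaos construction);
    used only to illustrate key sensitivity to initial conditions."""
    x = x0
    out = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)          # chaotic iteration in (0, 1)
        out[i] = int(x * 256) % 256
    return out

def xor_image(img: np.ndarray, x0: float) -> np.ndarray:
    ks = logistic_keystream(img.size, x0).reshape(img.shape)
    return img ^ ks                     # XOR: the same call encrypts and decrypts

rng = np.random.default_rng(9)
plain = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cipher = xor_image(plain, x0=0.3141592653)
assert np.array_equal(xor_image(cipher, 0.3141592653), plain)       # correct key recovers the image
wrong = xor_image(cipher, 0.3141592654)                              # key off by 1e-10
print("fraction of pixels recovered with wrong key:", np.mean(wrong == plain))   # ~1/256, chance level
```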
At a grocery store, product supply management is critical to employees' ability to operate productively. To find the right time for updating an item in terms of design or replenishment, real-time data on item availability is required, so that the item is consistently available on the rack when the client requires it. This study focuses on product display management at a grocery store to determine a particular product and its quantity on the shelves. Deep Learning (DL) is used to detect and identify every item, and the store's supervisor compares all identified items with an item plan he preconfigured earlier. The approach comprises two phases: product detection followed by product recognition. For product detection, we use You Only Look Once version 5 (YOLOv5), and for product recognition, we use both shape and size features along with a color feature to reduce false product detection. Experimental results were obtained using the SKU-110K dataset. The analyses show that the proposed approach improves accuracy, precision, and recall. For product recognition, the inclusion of the color feature reduces the error rate and helps to distinguish between identical logos that differ only in color. We achieve an accuracy of 75% at the feature level and 81% at the score level.
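A hedged sketch of the two-phase idea, using a pretrained YOLOv5 model from torch.hub for detection and an HSV colour histogram as the colour feature for recognition; the image file names are placeholders, the model download requires network access, and the shape/size features and SKU-110K training used in the paper are omitted.

```python
import cv2
import torch

# Illustrative only: generic pretrained detector + simple colour matching.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def hsv_hist(bgr_img):
    """Normalised 2-D hue/saturation histogram used as a simple colour feature."""
    hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

shelf = cv2.imread("shelf.jpg")                                   # placeholder shelf photo
reference = hsv_hist(cv2.imread("product_reference.jpg"))         # placeholder reference product

detections = model(cv2.cvtColor(shelf, cv2.COLOR_BGR2RGB)).xyxy[0]  # rows: x1, y1, x2, y2, conf, cls
for x1, y1, x2, y2, conf, cls in detections.tolist():
    crop = shelf[int(y1):int(y2), int(x1):int(x2)]
    if crop.size == 0:
        continue
    # Correlation between colour histograms: high value = likely the same product colourway.
    similarity = cv2.compareHist(hsv_hist(crop), reference, cv2.HISTCMP_CORREL)
    print(f"box=({int(x1)},{int(y1)},{int(x2)},{int(y2)}) conf={conf:.2f} colour_sim={similarity:.2f}")
```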
6G networks are the next frontier in wireless communication, promising unprecedented speeds, lower latencies, and advanced capabilities. However, with greater connectivity comes increased security risk. This chapter provides a systematic review of security issues in 6G networks and communication systems. A comprehensive search of academic databases was conducted for studies published between 2020 and 2023 that examined security-related issues of 6G. The studies indicate that authentication, privacy, and trust will be major concerns due to new technologies like reconfigurable intelligent surfaces and cell-free architectures. Specific vulnerabilities include impersonation attacks, side-channel leaks, and insider threats from rogue base stations. End-to-end encryption, blockchain, and machine learning are emerging security mechanisms to address these issues. This review synthesizes the current work on 6G security, highlights critical challenges and opportunities, and provides a framework for the future.
A Mobile Ad hoc NETwork (MANET) is a self-configuring network that is not reliant on infrastructure. This paper introduces a new multipath routing method based on the Multi-Hop Routing (MHR) technique. MHR is the consecutive selection of suitable relay nodes to send information across nodes that are not within direct range of each other. Failing to ensure good MHR leads to several negative consequences, ultimately causing unsuccessful data transmission in a MANET. This research work consists of three parts. The first attempt to propose an efficient MHR protocol is the design of Priority Based Dynamic Routing (PBDR), which adapts to the dynamic MANET environment by reducing Node Link Failures (NLF) in the network. This is achieved by dynamically considering a node's mobility parameters, such as relative velocity and link duration, which guide next-hop selection. This method works more efficiently than the traditional protocols. The second stage is Improved Multi-Path Dynamic Routing (IMPDR). The enhancement mainly focuses on further improving the Quality of Service (QoS) in MANETs by introducing a QoS timer at every node to help in the QoS routing of MANETs. Since QoS is the most vital metric for assessing a protocol, its dynamic estimation improves network performance considerably. This method uses distance, linkability, trust, and QoS as the four parameters for next-hop selection. IMPDR is compared against traditional routing protocols. The Network Simulator-2 (NS2) is used to conduct a simulation analysis of the protocols under consideration. The proposed tests are assessed for the Packet Delivery Ratio (PDR), Packet Loss Rate (PLR), End-to-End Delay (EED), and Network Throughput (NT).
Integrating quantum computing and artificial intelligence (AI) in automotive safety presents a paradigm shift, addressing complex challenges such as real-time traffic management, accident prevention, and system optimization. Quantum computing's principles of superposition, entanglement, and parallelism enhance AI's ability to process and analyze vast datasets, enabling precise and efficient decision-making in dynamic environments. Industry leaders like BMW, Volkswagen, and Waymo demonstrate the transformative potential of quantum-enhanced systems with applications in traffic optimization, autonomous navigation, and predictive maintenance. However, challenges such as hardware scalability, ethical concerns, and regulatory gaps persist.
This paper introduces a Cyber-Physical System (CPS) that can be used to improve privacy protection in enterprise Blockchain (BC) systems, particularly in Hyperledger Fabric (HF). The proposed CPS employs advanced methods to facilitate a privacy-preserving authorization mechanism. Using data masking, Homomorphic Encryption (HE), and complex digital signatures, the model ensures the confidentiality of transaction data during the authorization process. Additionally, the implementation of Secure Multiparty Computation (SMC) and Zero-Knowledge Proofs (ZKP) enhances the security of the data by preventing unauthorized personnel from accessing it and ensuring that the approving peers remain anonymous. The solution proposed in the paper addresses the problems of conventional centralized access control, which can be manipulated and has data leakage problems. Experimental results have proved the design’s practical applicability and security trade-off, providing a robust foundation for enterprises to adopt privacy-preserving BC. Thus, the HF platform integration demonstrates the model’s real-world applicability, which developed a secure, scalable, and efficient solution for handling sensitive transactions in distributed networks.
Gastric cancer (GC) is the fifth most common type of cancer worldwide and the third leading cause of cancer-related death. The Cox model and accelerated failure time models are widely used in the modeling of survival data for various diseases. The goal of this study was to compare the performance of the Cox proportional hazard (PH) model and accelerated failure time (AFT) models in determining the factors that influence gastric cancer death. The data for this study was obtained from gastric cancer patients admitted to the Tikur Anbesa specialized hospital, between January 1, 2015, and February 29, 2020. A total of 409 gastric cancer patients were studied retrospectively. Cox proportional hazard and accelerated-failure-time (AFT) models were compared to identify an appropriate survival model that determines factors that affect the time to death of gastric cancer patients. To compare the performance of all models, the AIC, BIC, and Likelihood criteria were used. The analysis was carried out using the R statistical software.
In today's era of fast online information processing, it is mandatory to deal with security issues in computer networks. WiFi Protected Access (WPA), IEEE 802.11i, the Data Encryption Standard (DES), and the Advanced Encryption Standard (AES) are used to achieve better security. This paper explains and mitigates two types of attacks, Denial of Service (DoS) and Memory Exhaustion (ME), generated during the 4-way handshake process used for connection establishment over IEEE 802.11i. Some amendments to the 4-way handshake process are made to reduce these types of attacks. An enhanced 4-way handshake process over IEEE 802.11i with cookie implementation is proposed and discussed. Finally, a conclusion and future work are provided with an exhaustive discussion and analysis of the results.
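One common way to realise the cookie idea is a stateless HMAC cookie that the authenticator verifies before allocating any per-session handshake state; the sketch below is a simplified illustration of that mechanism under assumed message contents, not the paper's exact amendment to IEEE 802.11i.

```python
import hashlib
import hmac
import os
import secrets

# The authenticator issues a stateless HMAC cookie alongside Message 1 and only
# allocates session state once the supplicant echoes a valid cookie in Message 2,
# so forged Message 2 floods cannot exhaust memory.

SERVER_SECRET = os.urandom(32)          # rotated periodically in practice

def make_cookie(supplicant_mac: str, anonce: bytes) -> bytes:
    msg = supplicant_mac.encode() + anonce
    return hmac.new(SERVER_SECRET, msg, hashlib.sha256).digest()

def verify_cookie(supplicant_mac: str, anonce: bytes, cookie: bytes) -> bool:
    expected = make_cookie(supplicant_mac, anonce)
    return hmac.compare_digest(expected, cookie)   # constant-time comparison

# --- toy exchange ---
anonce = secrets.token_bytes(32)                    # ANonce carried in Message 1
cookie = make_cookie("aa:bb:cc:dd:ee:ff", anonce)   # cookie piggybacked on Message 1

# A legitimate Message 2 echoes the cookie unchanged:
assert verify_cookie("aa:bb:cc:dd:ee:ff", anonce, cookie)

# A forged Message 2 (wrong cookie) is dropped before any state is allocated:
assert not verify_cookie("aa:bb:cc:dd:ee:ff", anonce, os.urandom(32))
print("cookie check passed")
```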
Cloud computing uses the internet instead of local discs or memory. Computing services include servers, databases, networks, and programmes. The primary benefit of cloud computing is easy and cheap data backup and access from anywhere. With cloud storage, consumers do not hold their data themselves, which raises safety concerns. Cloud backup and storage users may not know how their data is transported, and a user may be unaware that a third party is secretly accessing their data. For safety, several encryption algorithms are presented. This book chapter covers cryptography and cloud computing.
In this paper, we embark on a groundbreaking journey to understand one of the most profound mysteries of the human mind: consciousness. Our approach is unique because we use a branch of mathematics called Topological Data Analysis (TDA) to explore the complex workings of the brain. Imagine trying to understand the shape and connections of a vast network of roads without a map; TDA helps us create that map for the brain’s activity related to consciousness. At the heart of our study is the belief that the patterns of how brain regions connect and communicate hold the key to understanding consciousness. By applying TDA, we’re able to see these patterns in a new light, revealing the intricate landscape of brain activity in various states of consciousness, such as waking, sleeping, and dreaming. We meticulously collected and analyzed brain activity data using advanced neuroimaging techniques. Then, using TDA, we mapped out the topological structures—essentially, the shapes and connections within this data—that correspond to different conscious experiences. This mathematical lens allowed us to uncover hidden patterns and relationships within the brain’s activity, offering fresh insights into how consciousness emerges from the complex interplay of neural signals. Our findings not only deepen our understanding of consciousness but also demonstrate the power of mathematical approaches in unlocking the secrets of the human mind. This research paves the way for new explorations into consciousness and offers novel perspectives on how mathematics can help decipher the intricate workings of the brain.
Suspicious activities in the cyber-crime world are increasing, so modern security measures for Cyber-Physical Systems (CPS) are required. The proposed model for cybersecurity, known as the Predictive Data Security System with Blockchain (PDSS + BC), incorporates predictive analytics on top of BC's immutable ledger with the intention of preventing security risks. The PDSS integrates a BC for data storage with an ML-based Threat Prediction Model (TPM). In computer simulation, the researchers test the performance of the recommended model against standard CPS security measures and validate its effectiveness. The empirical results show that the recommended PDSS + BC enhances data quality and Anomaly Detection (AD), underscoring the importance of BC in contemporary cybersecurity frameworks. With an optimal Detection Rate (DR) of 91.56% and a minimum End-to-End Delay (EED) of 765 ms, the proposed PDSS + BC offers an impressive combination; when compared against current state-of-the-art methods, it performs significantly better all around.
One of the most valuable resources is the forest, home to many animals and plants. Forest fire agencies worldwide have studied forest fire prevention and detection. Natural and man-made calamities occur worldwide, and forest fires are environmental tragedies: a dense forest fire devours everything in its path. This research examines a forest fire detection and alert system to detect fires early, identifying forest fires before they spread in order to safeguard wildlife and natural resources. An Arduino microcontroller, flame sensor, ultrasonic sensor, thermistor, smoke sensor, buzzer, and GPRS module are in every IoT (Internet of Things) device. Each IoT sensor records its values in the ThingSpeak cloud. The cloud storage can select and map forest fire threats by eliminating features from the input; MLP mapping maps forest fire danger, while AROC maps forest fire hazard. GPRS delivers cloud-based SMS warnings so that, finally, forest department officials can respond.
The article has been withdrawn at the request of the author of the journal Recent Advances in Computer Science and Communications due to incoherent content. Bentham Science apologizes to the readers of the journal for any inconvenience this may have caused. The Bentham Editorial Policy on Article Withdrawal can be found at https://benthamscience.com/editorial-policies-main.php
The goal of image filtering is to remove noise from an image in such a way that the "original" image remains visible. Image filtering is a method by which we can enhance images. Image filtering methods are applied to images to remove the different types of noise that are either present in the image during capture or injected into the image during transmission. The Fuzzy Filter is a method for image de-noising based on fuzzy set theory; it employs fuzzy rules for deciding the gray level of a pixel within a window in the image. The Mean Filter is a linear filter which uses a mask over each pixel in the signal: each of the pixels which fall under the mask is averaged together to form a single pixel, which is why this filter is also called the average filter. In this work, Gaussian noise is used, and image filtering is performed by the fuzzy filter and the mean filter. The results of the filters are then compared using Standard Deviation and Peak Signal-to-Noise Ratio.
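A small sketch of the mean-filter half of the comparison, with additive Gaussian noise and PSNR as the quality measure (the fuzzy filter is omitted); the gradient test image is a synthetic stand-in.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0, 255, 256), (256, 1))                 # smooth synthetic test image
noisy = np.clip(clean + rng.normal(0, 20, clean.shape), 0, 255)     # additive Gaussian noise

denoised = uniform_filter(noisy, size=3)    # 3x3 mean (average) filter

print(f"PSNR noisy         : {psnr(clean, noisy):.2f} dB")
print(f"PSNR mean-filtered : {psnr(clean, denoised):.2f} dB")
```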
Breast cancer is one of the most prevalent diseases in India's urban regions and the second most common in the country's rural parts. In India, a woman is diagnosed with breast cancer every four minutes, and a woman dies from breast cancer every thirteen minutes. Over half of breast cancer patients in India are diagnosed with stage 3 or 4 disease, which has extremely low survival rates; hence, an urgent need exists for a rapid detection strategy. To forecast whether a patient is at risk for breast cancer, we utilise machine learning classification techniques, in which the model learns from previous information and can make predictions on new data. A model is created using Logistic Regression, Support Vector Machines, and Random Forest on a dataset collected from the UCI repository and studied in this work. The primary goal is to improve the accuracy, precision, and sensitivity of all the algorithms used to categorise the data and to compare the competency and viability of each algorithm. Random Forest has been shown to be the most accurate in classifying breast cancer, with a precision of 98.60 percent in tests. This machine learning study is carried out in the Scientific Python Development Environment and is written in the Python programming language.
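A minimal reproduction of this style of comparison using scikit-learn's bundled copy of the UCI breast cancer data, assumed here to stand in for the dataset used in the study; exact scores will differ from those reported.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)   # scikit-learn copy of the UCI WBC data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.2, random_state=7
)

models = {
    "Logistic Regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000)),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "Random Forest": RandomForestClassifier(n_estimators=300, random_state=7),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(f"{name:20s} accuracy={accuracy_score(y_test, pred):.4f} "
          f"precision={precision_score(y_test, pred):.4f} recall={recall_score(y_test, pred):.4f}")
```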
Nanomaterials reduce biodegradable pollutants and help restore standard levels, so they could efficiently and sustainably treat environmental contaminants. However, additional research is needed to determine the fate of nanomaterials used for environmental remediation. This review covers biological and plant-based bioremediation nanotechnologies. Nanomaterials reduce the costs of degrading waste and harmful materials. Nanomaterials/nanoparticles directly catalyse the breakdown of waste and toxic materials that are hazardous to microorganisms, and they enable microorganisms to degrade waste and toxic materials more efficiently and sustainably.
Building information modeling (BIM) has changed architecture, engineering, and construction professionals' built environment conceptualization, design, building, and management. BIM tool evolution, principles, components, modelling methodology and software, user interface and basic functions, construction and post-construction benefits, safety, and intellectual property are covered in this chapter. BIM integrates geometry, spatial relationships, geographical data, and material qualities for interdisciplinary communication and decision-making. This synergy boosts project efficiency by improving design accuracy, timelines, and conflict resolution. Technical knowledge and professional teamwork are needed to use BIM effectively. BIM competency needs a paradigm shift in schooling and professional growth. As AEC digitises, BIM's potential is crucial for professional proficiency.
AI and machine learning affect intelligent supply chains, implementation challenges, and psychosocial factors affecting supply chain AI and ML acceptability. Behavioural operations management grows. It structures production and manufacturing processes utilising psychologically based behavioural and cognitive aspects, challenging the idea that “human being is rational” in decision-making. Supply chain management (SCM), a subset of operations management, studies social psychology and supply chains. A related research literature review follows. This report recommends improving supply networks, especially in competitive companies. Supply chain efficiency requires multilevel and employee interaction. Human behaviour impacts supply chain decisions and performance.
Cloud computing involves virtualization, distributed computing, networking, software, and web services. Clouds comprise servers, datacenters, and clients. Cloud computing offers fault tolerance, high availability, scalability, flexibility, little user overhead, low ownership cost, on-demand services, and more. Realising these benefits demands a robust load-balancing mechanism. Balanced load distribution improves resource use and task response time by preventing some nodes from being completely loaded while others sit idle. Load balancers match processor and network node performance. A load-balancing solution that improves throughput and latency for application-based virtual topologies with variable cloud sizes will apply the divisible load scheduling theorem.
The Internet of Things (IoT) ushers in a new era of communication that depends on a broad range of things and many types of communication technologies to share information. Because all of the IoT's objects are connected to one another and function in unprotected environments, it poses significantly more issues, constraints, and challenges than traditional computing systems, which do not have as many interconnected components. Security must therefore be prioritised in a new way that is not present in conventional computer systems. The Wireless Sensor Network (WSN) and the Mobile Ad hoc Network (MANET) are two technologies that play significant roles in building an Internet of Things system; they are used in a wide variety of activities, including sensing, environmental monitoring, data collection, heterogeneous communication techniques, and data processing. Because it incorporates characteristics of both MANET and WSN, IoT is susceptible to the same kinds of security issues that affect those networks. A Delegate Entity Attack (DEA) is a subclass of Denial of Service (DoS) attack in which the attacker sends an unacceptable number of control packets that appear authentic. DoS assaults may take many forms, and one of them is the SD attack, which is far more difficult to recognise than a simple attack that depletes the battery's capacity. Another key challenge that arises in a network during an SD attack is the need to enhance energy management and prolong the lifespan of IoT nodes. To address this, a Random Number Generator with Hierarchical Intrusion Detection System (RNGHID) is recommended. The IoT ecosystem is likely to be segmented into a great number of separate sectors and clusters. The system is partitioned into two entities, the Delegate Entity (DE) and the Pivotal Entity (PE), in order to identify any nodes in the network that are behaving abnormally. Once the anomalies have been identified, it is possible to pinpoint the area affected by the SD attack and the damaging activities that have taken place. A warning message, generated by the Malicious Node Alert System (MNAS), is broadcast across the network to inform the other nodes that the network is under attack; this message classifies the various sorts of attacks based on the results of a machine learning algorithm. The proposed protocol displays various desirable properties, such as the capacity to conduct indivisible authentication, rapid authentication, and minimal overhead in both transmission and storage.
Ensuring the confidentiality and integrity of healthcare records in Blockchain (BC)-based Medi-Cloud Systems (MCS) is vital, with a focus on Key Management Systems (KMS). This investigation presents a complete technique for Key Generation (KG), focusing on three key metrics: Key Generation Time (KGT), Operational Cost (OC), and Share Distribution Efficiency (SDE). The KGT’s computation of the time required to generate the encryption keys impacts Network Throughput (NT). To optimize costs, cost control measures the costs of data centers, computer power, and network services. SDE’s analysis of the key share distribution across the nodes impacts the efficiency and security of the Cyber-Physical System (CPS). To make MCS more reliable and secure, the researchers recommend the most effective method for KMS based on these data points. Integrating BC + HCS requires KMS that are secure and flexible, and the recommendation for MCS + BC ensures that this level of security is achieved.
A Mobile Ad hoc Network (MANET) is a group of low-power wireless mobile nodes that configure a wireless network without the assistance of any existing infrastructure or centralized organization. The primary aim of MANETs is to extend flexibility into the self-directed, mobile, and wireless domain, in which a cluster of autonomous nodes forms a MANET routing system. An Intrusion Detection System (IDS) is a tool that examines a network for malicious behavior or policy violations; a network monitoring system is often used to report and gather any suspicious attacks or violations. An IDS is a software program or hardware system that monitors network and security traffic for malicious attacks, sending out alerts whenever it detects malicious nodes. The impact of Dynamic Source Routing (DSR) in MANETs facing a blackhole attack is investigated in this research article. The Cluster Trust Adaptive Acknowledgement (CTAA) method is used to identify unauthorised and malfunctioning nodes in a MANET environment, and Kalman Filters (KF) are implemented to anticipate node trustworthiness so that the MANET remains active and delivers data packets successfully. Furthermore, KF is used to eliminate synchronisation errors that arise during the sending and receiving of data. In order to provide an energy-efficient solution and minimize network traffic, routes in the MANET are optimised using the Multi-Objective Particle Swarm Optimization (MOPSO) technique to determine the optimal number of clusters along with the energy dissipation in nodes. According to the research findings, the proposed CTAA-MPSO improves the Packet Delivery Ratio (PDR) by 3.3%, and its PDR improves upon CTAA-PSO by 3.5% at 30% malware.
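To illustrate how a Kalman filter can track a slowly varying trust value from noisy per-round observations (the paper derives those observations from acknowledgement behaviour; here they are synthetic), a scalar sketch:

```python
import numpy as np

class ScalarKalman:
    """Minimal 1-D Kalman filter tracking a node's trust score from noisy observations."""

    def __init__(self, q=1e-3, r=0.05, x0=0.5, p0=1.0):
        self.q, self.r = q, r      # process and measurement noise variances
        self.x, self.p = x0, p0    # state estimate and its variance

    def update(self, z):
        self.p += self.q                      # predict: trust assumed slowly varying
        k = self.p / (self.p + self.r)        # Kalman gain
        self.x += k * (z - self.x)            # correct with observation z
        self.p *= (1.0 - k)
        return self.x

rng = np.random.default_rng(3)
true_trust = 0.8
kf = ScalarKalman()
estimate = kf.x
for _ in range(20):
    observed = true_trust + rng.normal(0, 0.2)            # noisy per-round trust observation
    estimate = kf.update(float(np.clip(observed, 0.0, 1.0)))
print(f"final trust estimate ~ {estimate:.2f} (true value {true_trust})")
```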
Hospital supervision or healthcare administration requires management, leadership, and the administration of hospital healthcare systems and hospital networks. Healthcare systems these days generate vast amounts of complicated information concerning patients, medical devices, electronic patient records, disease diagnosis, and hospital resources. This vast amount of data and information can be a significant source for interpreting and evaluating knowledge that helps to reduce costs and improve cognitive processes. Data mining is a set of techniques and methods to extend the present data in order to offer new or relevant insights and expertise to healthcare practitioners for improved decision-making. In the healthcare industry, big data consists of electronic health datasets or flat-file data which are disordered, complex, and so large that they are nearly impossible to manage with the available tools or traditional hardware and software techniques. For healthcare data, a very large amount of data is available for understanding patterns and trends; hence, big data analytics has the potential to improve healthcare services and reduce costs. This chapter explores data mining applications, challenges, and some future directions for healthcare. In particular, it discusses data mining and its applications within the major areas of healthcare. This hospital-based survey also explores the utility of various data mining techniques, such as association rules, clustering, and classification, in the healthcare domain. The chapter also identifies cancer sites and morphology patterns among patients with cancer with the help of the above data mining techniques.
This chapter delves into the challenges posed by the advent of 6G technology from a managerial standpoint, particularly focusing on security solutions. As the telecommunications landscape evolves rapidly, it becomes imperative for managers to navigate the intricacies of ensuring robust security measures amidst technological advancements. Through strategic insights, this chapter explores the complexities associated with 6G security and provides managerial perspectives aimed at fostering proactive and effective security strategies.
Integrating Deep Learning (DL) techniques in Convolutional Neural Networks (CNNs) with encrypted data analysis is an emerging field for enhancing data privacy and security. A significant challenge in this domain is the incompatibility of standard non-linear Activation Functions (AF) like the Rectified Linear Unit (ReLU) and Hyperbolic Tangent (tanh) with Zero-Knowledge (ZK) encrypted data, which impacts computational efficiency and data privacy. Addressing this, our paper introduces the novel application of Chebyshev Polynomial Approximation (CPA) to adapt these AF to process encrypted data effectively. Utilizing the MNIST dataset, this paper conducted experiments with LeNet and various configurations of AlexNet, extending the range of the ReLU and tanh functions to optimize CPA. Our results reveal an optimal polynomial degree (α), with α = 10 for ReLU and between α = 10 and α = 15 for tanh, beyond which the benefits in accuracy plateau. This finding is crucial for ensuring the accuracy and efficiency of CNNs in processing encrypted data. The study demonstrates that while the accuracy slightly decreases for plaintext data and more significantly for ciphertext data, the overall effectiveness of CPA in CNNs is maintained. This advancement enables CNNs to process encrypted data while preserving privacy and marks a significant step in developing privacy-preserving Machine Learning (ML) and encrypted data analysis.
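The core CPA step can be sketched with NumPy's Chebyshev fitting, using the reported optimal degrees; the approximation interval [-10, 10] is an assumption, not a value taken from the paper.

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Fit degree-alpha Chebyshev polynomials to ReLU and tanh over a fixed interval,
# as needed when only polynomial operations are available on encrypted inputs.
x = np.linspace(-10.0, 10.0, 4001)
relu = np.maximum(x, 0.0)
tanh = np.tanh(x)

cheb_relu = Chebyshev.fit(x, relu, deg=10)   # alpha = 10 for ReLU (reported optimum)
cheb_tanh = Chebyshev.fit(x, tanh, deg=15)   # alpha between 10 and 15 for tanh

for name, target, approx in [("ReLU", relu, cheb_relu), ("tanh", tanh, cheb_tanh)]:
    err = np.max(np.abs(target - approx(x)))
    print(f"{name}: degree={approx.degree()}  max |error| on [-10, 10] = {err:.4f}")
```

In an encrypted inference pipeline, the fitted polynomial coefficients replace the non-polynomial activation so that each layer only performs additions and multiplications on ciphertexts.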
The purpose of this chapter is to examine the factors which influence the intention to use Facebook among Generation Y consumers and its influence on purchase decision making. A quantitative research methodology was used, and the data was collected from 404 respondents in Bangalore city. Partial least square structural equation model using R software was used to analyse data collected. The findings showed that perceived usefulness, perceived enjoyment, perceived credibility, and subjective norm have a significant influence on intention to use Facebook while perceived ease of use does not have a significant influence on intention to use Facebook. Perceived enjoyment has the highest influence on intention to use Facebook followed by subjective norm, perceived credibility, and perceived usefulness. The results of this study also indicated that intention to use Facebook has a significant positive effect on consumers. The findings of this study contribute to an understanding of the importance of the selected factors in affecting the intention to use Facebook.
Population growth, urbanisation, industry, modernisation, and digitalisation are increasing residential, industrial, commercial, mining, radioactive, agricultural, hospital, and electronic wastes in the 21st century, and waste management is becoming the biggest global challenge. Waste management includes collecting, transporting, sorting, destroying, processing, recycling, controlling, monitoring, and regulating garbage, sewage, and other waste. Waste management preserves the environment, prevents pollution, and protects health. Modern global waste management approaches include biological reprocessing, recycling, composting, waste-to-energy, bioremediation, incineration, pyrolysis, plasma gasification, and ocean/sea disposal. Waste management enhances life and helps ensure future peace and wellness; global health depends on it. This document discusses worldwide waste management and offers the best waste management approach by critically reviewing previous researchers' findings.
Innovative legal and supply chain complexity solutions include machine learning in legal management and intelligent supply chain governance. Complex laws and supply chains necessitate data-driven initiatives. Effective supply chain governance needs transparency, accountability, and risk management. Complex data-rich legal administration requires machine learning. Legal research, document analysis, and predictive analytics benefit from machine learning. Supply chain governance requires compliance and risk management. Machine learning principles demonstrate lawyers can switch careers. Innovation in legal tech comes from AI, blockchain, and cybersecurity. This novel machine learning strategy for legal and change and training management requires careful planning and implementation.
This paper considers the issue of the search and rescue operation of humans after natural or man-made disasters. This problem arises after several calamities, such as earthquakes, hurricanes, and explosions. It usually takes hours to locate the survivors in the debris. In most cases, it is dangerous for the rescue workers to visit and explore the whole area by themselves. Hence, there is a need for speeding up the whole process of locating survivors accurately and with less damage to human life. To tackle this challenge, we present a scalable solution. We plan to introduce the usage of robots for the initial exploration of the calamity site. The robots will explore the site and identify the location of human survivors by examining the video feed (with audio) captured by them. They will then stream the detected location of the survivor to a centralized cloud server. It will also monitor the associated air quality of the selected area to determine whether it is safe for rescue workers to enter the region or not. The human detection model for images that we have used has a mAP (mean average precision) of 70.2%. The proposed approach uses a speech detection technique which has an F1 score of 0.9186 and the overall accuracy of the architecture is 95.83%. To improve the detection accuracy, we have combined audio detection and image detection techniques.
Practical quantum cryptographic systems are now within reach. Quantum cryptography generally provides a solution that uses various methods of polarization to leave the transmitted data undisturbed. In this work, we try to improve data security by increasing the size of the key shared between the parties involved in quantum cryptography. Quantum cryptography relies on storing the split particles involved and then measuring them to create the keys that are used, eliminating the problem of unsafe storage.
The Internet of Things (IoT) is a revolutionary technology that links living and non-living devices all around the world. As a result, the frequency of cyber-attacks against IoT deployments is expected to rise, so each system must be absolutely secure; otherwise, consumers may opt not to use the technology. DDoS attacks that recently hit various IoT networks resulted in massive losses. This chapter discusses a way to detect stolen data originating from software and malware on the IoT network. A TensorFlow deep neural network is proposed to categorise stolen software involving source-code plagiarism. The malware samples were gathered using the Malign dataset, with tokenization and measurement used to handle noisy information while emphasizing the value of each token for detecting source-code forgery. The results show that the proposed methodology for analysing IoT cyber security threats has a higher classification efficiency than current methodologies.
This paper presents an approach to model a content-based metasearch engine which searches for all images in the dataset related to the query content. Searching through keywords in an image database requires a lot of metadata (keywords about the image) to be stored for each image in a separate database, which does not lead to an effective search mechanism. The proposed mechanism instead returns relevant output for an input query image. The process is based on image feature extraction using implemented functions such as the edge histogram and image features like sharpness, smoothness, and color.
Thanks to their inherent and unique stress-strain response features, Viscoelastic Materials (VM) form an integral part of many different fields of engineering. Although practical, these materials' highly complex time-dependent behaviour under non-linear loading scenarios is impossible to model accurately using conventional viscoelastic models such as Maxwell and Kelvin-Voigt. The present study introduces a framework that goes beyond the traditional Maxwell and Kelvin-Voigt approaches by employing fractional calculus to enhance the prediction of VM performance. The mathematical representation makes use of the Caputo fractional derivative for expressing an artificial viscoelastic polymer's non-linear and time-dependent responses. Dynamic Mechanical Analysis (DMA) and Stress Relaxation Tests (SRT) showed that the polymer possessed a fractional modulus of 1500 MPa and a fractional order of 0.65, respectively. The resulting model required more significant computing resources, but comparative testing indicated that it accurately depicted stress relaxation and dynamic responses. The technique integrates mathematical and experimental viscoelasticity for industrial uses while offering a precise basis for advanced material analysis.
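For readers unfamiliar with the Caputo operator, the standard L1 finite-difference approximation below (not the authors' full fractional viscoelastic model) shows how such derivatives are evaluated numerically; it is checked against the closed form for f(t) = t, and the order 0.65 matches the reported fractional order.

```python
import numpy as np
from scipy.special import gamma

def caputo_l1(f_values: np.ndarray, h: float, alpha: float) -> np.ndarray:
    """L1 approximation of the Caputo derivative D^alpha f (0 < alpha < 1)
    on a uniform grid t_n = n*h. Returns the derivative at every grid point (0 at t_0)."""
    n_points = len(f_values)
    df = np.diff(f_values)                                # f_{k+1} - f_k
    coeff = h ** (-alpha) / gamma(2.0 - alpha)
    out = np.zeros(n_points)
    for n in range(1, n_points):
        j = np.arange(n)                                  # j = 0 .. n-1
        b = (n - j) ** (1.0 - alpha) - (n - j - 1) ** (1.0 - alpha)
        out[n] = coeff * np.sum(b * df[:n])
    return out

# Sanity check against the closed form D^alpha t = t^(1-alpha) / Gamma(2-alpha)
alpha, h = 0.65, 0.01
t = np.arange(0.0, 1.0 + h, h)
numeric = caputo_l1(t, h, alpha)
exact = t ** (1.0 - alpha) / gamma(2.0 - alpha)
print(f"max |error| for f(t)=t: {np.max(np.abs(numeric - exact)):.2e}")
```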
A WSN is defined as a group of sensor nodes which have the ability to sense various environmental parameters, though with limited range. The sensor nodes present in the network can be stationary or mobile, homogeneous or location-aware. The sensed data from sensor nodes are transmitted to the base station using multiple hops; this can also be done over the Internet via a router or gateway. The information gathered by the base station increases with the number of sensor nodes, up to the maximum capacity of the network. A wireless sensor network is an assembly of nodes, from a few to hundreds or thousands, where each node is connected to one sensor. These sensor nodes have different parts: a radio transceiver with an internal or external antenna, a microcontroller, a power supply that is battery based or connected to a solar harvesting system, and a PCB-based electronic circuit for interfacing. The transceiver receives and sends information to and from the base station (control computer). The size of a sensor node varies from that of a shoebox down to a grain of dust, and the cost varies according to size and the interfacing complexity of each node. Limitations on the cost and size of sensor nodes result in corresponding limitations on resources such as computational speed, power supply, memory storage, and network bandwidth. The main aim of this paper is to overcome the major problem of energy conservation in WSNs with a simple approach.
Chistyakov recently introduced the notion of Metric Modular Spaces (MMS). There are motivating opportunities when integrating the Fixed-Point Theorem (FPT) of MMS into the model and functioning of a Wireless Sensor Network (WSN), particularly for reworking network functions, routing algorithms, and data aggregation methods. In WSN data communication, security is a dynamic problem. Dynamic routing algorithms improve data security without requiring any added information by finding the shortest path for data transmission between two connected data packets. Applications requiring remote environmental condition monitoring are increasingly using WSN. Specifically, dense WSNs are evolving as a critical sensing stage for the Internet of Things (IoT). WSNs generate raw data, and to assure effective data transfer, a network model and effective node direction are required. Data aggregation is a method used by WSN to design a data fusion policy so that minimal data is transmitted inside the network. The study's objective is the existence and uniqueness of FPT in MMS via integral-type contraction. The iteration of Fixed Points (FP) of contractions in MMS settings is also studied in this paper.
A concept that has been shown to be valuable in one circumstance and is likely to be useful in others is known as a pattern. A pattern can be interpreted in a variety of ways, and each interpretation has its own particularizations suited to the particular form of the pattern it represents. The term "pattern" can be used to describe almost anything, such as a group of items that function in tandem with one another. The analysis of these patterns is important in order to improve recognition. Finding patterns in data is the primary emphasis of pattern analysis, a subfield of artificial intelligence and computer science that relies on algorithms. In the context of a data stream, the term "pattern" refers to any underlying correlations, regularities, or structures. If a system finds significant patterns in the data it has already stored, it may be able to generate predictions for fresh information arriving from a source analogous to the one it is currently using.
Autonomous systems that can adapt to real-world environments have proliferated due to rapid technological improvement. Such systems free people from repetitive, inefficient chores. The monotonous, dusty, and dangerous conditions in which an unmanned aerial vehicle (UAV) operates pose a severe threat to human health and safety. Autonomous systems improve supply chains, tracking, and hazardous climate control. This chapter proposes merging compression and track architecture to improve UAV performance. The coupling between the UAV and the pipe increases track-wheel friction, so the UAV does not slip inside the tube. Cleaning commences when the sensors detect the pipe mouth, and dirt sensors regulate the washing process. This chapter focuses on drain-cleaning automation; the device automation addresses both mobility and confined space. This study supports this method for garbage disposal and filtering. The technology removes the need for manual cleaning while relying on human control of the system's movement.
In the escalating race between cryptographic security and quantum computing capabilities, the need for robust encryption methodologies that can withstand the prowess of quantum algorithms is more pressing than ever. This paper introduces a novel cryptographic framework, grounded in the principles of quantum mechanics and the entropic uncertainty principle, to forge a path towards quantum-resilient encryption. At the heart of this approach lies the integration of the entropic uncertainty inherent in quantum states, a fundamental aspect often overlooked in traditional cryptographic strategies. By harnessing this intrinsic uncertainty of quantum mechanics, we propose a mathematical framework that not only challenges the conventional paradigms of encryption but also sets a new benchmark for security in the quantum computing era. The paper delves into the theoretical underpinnings of quantum mechanics relevant to cryptography, with a particular focus on the entropic uncertainty principle. This principle, which posits a natural limit on the precision with which certain pairs of physical properties can be known, serves as the cornerstone of our proposed encryption method. We meticulously develop and outline a mathematical model that leverages this principle, ensuring that the encrypted information remains secure against the formidable computational capabilities of quantum algorithms. We contrast our approach with existing cryptographic methods, highlighting the enhanced security features offered by the entropic uncertainty-based model. The findings underscore the potential of this framework to serve as a resilient encryption mechanism in a landscape increasingly dominated by quantum computing technologies. This research paves the way for a new era of encryption, one that embraces the uncertainty of quantum mechanics as its shield against the threats posed by quantum computing.
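The abstract invokes the entropic uncertainty principle without stating a specific relation. For orientation, a standard textbook form, the Maassen-Uffink bound for two measurements on the same quantum state, is shown below; it is not necessarily the exact inequality the proposed model builds on.

```latex
% Maassen-Uffink entropic uncertainty relation for two observables X and Z
% with eigenbases {|x>} and {|z>} measured on the same state:
\[
  H(X) + H(Z) \;\ge\; \log_{2}\frac{1}{c},
  \qquad
  c \;=\; \max_{x,z}\,\bigl|\langle x \mid z \rangle\bigr|^{2},
\]
% where H(.) is the Shannon entropy of the measurement outcomes; for mutually
% unbiased bases in dimension d, c = 1/d and the bound becomes log2(d).
```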
Distributed Power Generation and Energy Storage Systems (DPG-ESSs) are crucial to securing a local energy source. Both entities could enhance the operation of Smart Grids (SGs) by reducing Power Loss (PL), maintaining the voltage profile, and increasing Renewable Energy (RE) as a clean alternative to fossil fuels. However, determining the optimum size and location of different DPG-ESS configurations in the SG is essential to obtaining the most benefits and avoiding negative impacts such as Quality of Power (QoP) and voltage-fluctuation issues. The goal of this paper is to conduct comprehensive empirical studies evaluating the best size and location for DPG-ESS, and to identify the challenges these choices pose for SG modernization. The paper therefore presents explicit knowledge of decentralized power generation in SGs based on integrating DPG-ESS, in terms of size and location, with the help of Metaheuristic Optimization Algorithms (MOAs). This research also reviews rationalized cost-benefit considerations such as reliability, sensitivity, and security studies for Distribution Network (DN) planning. Various proposed works, together with their algorithms and objectives, are discussed, other soft-computing methods are outlined, and a comparison is drawn between the many approaches adopted in DN planning.
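To make the sizing-and-placement problem concrete, here is a minimal sketch, assuming a plain random search over a hypothetical bus set and DG size range against a placeholder loss objective. The bus indices, size bounds, and surrogate loss function are all stand-ins for illustration; the reviewed studies evaluate losses with power-flow solvers and dedicated metaheuristics (PSO, GA, and similar).

```python
# Illustrative sketch of metaheuristic-style DG sizing/placement: a plain
# random search over (bus, size_MW) minimizing a placeholder loss objective.
# The candidate buses, size bounds, and the loss surrogate are hypothetical;
# a real study would evaluate losses with a power-flow solver and a proper MOA.
import random

BUSES = list(range(1, 34))          # e.g. an IEEE 33-bus style feeder (assumed)
SIZE_BOUNDS = (0.1, 3.0)            # DG size in MW (assumed)

def surrogate_power_loss(bus: int, size_mw: float) -> float:
    """Placeholder loss model: pretends losses are smallest near bus 18, 1.5 MW."""
    return (bus - 18) ** 2 * 0.002 + (size_mw - 1.5) ** 2 * 0.5 + 0.1

def random_search(iterations: int = 5000, seed: int = 0):
    rng = random.Random(seed)
    best = None
    for _ in range(iterations):
        bus = rng.choice(BUSES)
        size = rng.uniform(*SIZE_BOUNDS)
        loss = surrogate_power_loss(bus, size)
        if best is None or loss < best[0]:
            best = (loss, bus, size)
    return best

if __name__ == "__main__":
    loss, bus, size = random_search()
    print(f"best candidate: bus {bus}, {size:.2f} MW, surrogate loss {loss:.4f} MW")
```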
This paper introduces Explainable Secure-Net, a new framework that leverages machine learning to aid the automatic design of cryptographic key-exchange protocols while providing transparent, human-readable explanations for each decision made. Classical techniques are based on expert-designed algebraic rules and usually lead to long, intricate manual proofs, whereas Explainable Secure-Net encodes core discrete-mathematical ingredients, e.g., finite-field operations and protocol-graph structures, into a graph neural network. For each protocol step the model proposes, a lightweight explanation module highlights which distinguishing features drove its choice, keeping the design process transparent and readily auditable. For security, all candidate protocols are run through a hybrid verification process that marries formal symbolic verification with large-scale statistical testing. On a benchmark of 1,500 synthetic protocol examples, Explainable Secure-Net achieves 94% synthesis accuracy and explores more than 95% of the potential key-exchange space, while keeping the false-negative vulnerability rate below 1% and providing 128-bit security guarantees. Our findings show that it is feasible to accelerate protocol innovation without sacrificing rigor or interpretability. We argue that Explainable Secure-Net is an important first step towards machine-augmented cryptographic design tools that can automatically propose, explain, and certify secure protocol definitions for a variety of cryptographic tasks.
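As a concrete example of the finite-field operations such key-exchange protocols rest on, the sketch below shows a toy Diffie-Hellman exchange over GF(p). It is a textbook construction, not part of Explainable Secure-Net, and the prime and base are deliberately small, demo-grade choices.

```python
# Toy Diffie-Hellman key exchange over a prime field GF(p): the classic
# finite-field construction underlying the protocols targeted here.  The
# prime is small and insecure on purpose; real deployments use >= 2048-bit
# groups or elliptic curves.
import secrets

P = 2**127 - 1          # a Mersenne prime used only for illustration
G = 3                   # assumed base for the demo

def keypair():
    private = secrets.randbelow(P - 2) + 2
    public = pow(G, private, P)          # finite-field exponentiation
    return private, public

if __name__ == "__main__":
    a_priv, a_pub = keypair()            # Alice
    b_priv, b_pub = keypair()            # Bob
    shared_a = pow(b_pub, a_priv, P)     # Alice combines Bob's public value
    shared_b = pow(a_pub, b_priv, P)     # Bob combines Alice's public value
    assert shared_a == shared_b
    print("shared secret agreed:", hex(shared_a)[:18], "...")
```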
Wireless networks are gaining popularity nowadays. A Vehicular Ad-hoc Network (VANET) is a self-configuring network in which every device acts both as a host and as a router, and nodes cooperate with one another to relay information. VANETs are mostly formed temporarily and operate with little fixed infrastructure. Performance degrades because of unstable channel conditions, intermittent network connectivity, high mobility, and limited resources. To improve performance, cross-layer techniques are used, in which different layers of the protocol stack communicate with each other by exchanging information. AODV is a well-known reactive ad hoc routing protocol. In this work, we propose a modified version of the AODV routing protocol that bases route discovery on physical-layer information rather than the minimum-hop-count metric of the default distance-vector algorithm. The research elaborates how the proposed model uses the received SINR and RSSI values to select its route: the RSSI value is passed up to the network layer to estimate the distance of a node within range and to compute the optimal path. The evaluation focuses on parameters such as traffic transfer rate, delay, packet loss, time, and link stability, with the aim of increasing the overall lifetime of the network through optimal battery usage.
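The abstract says RSSI is handed to the network layer to estimate node distance. One common way to do this, sketched below with assumed calibration values rather than the paper's exact formula, is the log-distance path-loss model, with the estimated distance then reused as a simple link cost.

```python
# Sketch of RSSI-based distance estimation via the log-distance path-loss
# model, a common choice for the kind of cross-layer metric the modified AODV
# described above relies on.  RSSI_D0 (RSSI at the 1 m reference distance) and
# the path-loss exponent N are assumed, environment-dependent values.
RSSI_D0 = -40.0   # dBm measured at d0 = 1 m (assumed calibration value)
N = 2.7           # path-loss exponent (assumed; typical urban values 2-4)

def rssi_to_distance(rssi_dbm: float) -> float:
    """Estimated distance (m) for a received signal strength in dBm."""
    return 10 ** ((RSSI_D0 - rssi_dbm) / (10 * N))

def link_cost(rssi_dbm: float) -> float:
    """Toy routing cost: farther (weaker) links cost more than near ones."""
    return rssi_to_distance(rssi_dbm)

if __name__ == "__main__":
    for rssi in (-50, -65, -80):
        print(f"RSSI {rssi:4d} dBm  ->  ~{rssi_to_distance(rssi):6.1f} m, cost {link_cost(rssi):6.1f}")
```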
This chapter provides an analysis of the various kinds of distracting noise that can be seen in degraded complex images, such as those found in newspapers, blogs, and websites. Such complex images become degraded by noise including salt-and-pepper noise, random-valued impulse noise, speckle noise, and Gaussian noise, among others. There is an extraordinarily high demand for recovering the readable text from degraded complex images into a machine-readable form for later use.
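To make two of the noise categories concrete, here is a small NumPy sketch that injects salt-and-pepper and additive Gaussian noise into a grayscale image array; the densities and standard deviation are illustrative, and the chapter's own degradation model may differ.

```python
# Illustrative simulation of two noise types discussed above on a grayscale
# image stored as a float array in [0, 1].  Noise density and the Gaussian
# standard deviation are arbitrary demo values.
import numpy as np

def add_salt_and_pepper(img: np.ndarray, density: float = 0.05,
                        rng: np.random.Generator = None) -> np.ndarray:
    rng = rng or np.random.default_rng(0)
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < density / 2] = 0.0          # "pepper" pixels
    noisy[mask > 1 - density / 2] = 1.0      # "salt" pixels
    return noisy

def add_gaussian(img: np.ndarray, sigma: float = 0.05,
                 rng: np.random.Generator = None) -> np.ndarray:
    rng = rng or np.random.default_rng(0)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

if __name__ == "__main__":
    clean = np.full((64, 64), 0.5)           # flat gray test image
    sp = add_salt_and_pepper(clean)
    ga = add_gaussian(clean)
    print("salt-and-pepper extremes:", float(sp.min()), float(sp.max()))
    print("gaussian std dev:        ", float(ga.std()))
```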
The Internet is slowly shaping up to be the primary information source that fulfils all the needs of a person. Whenever someone plans to buy a product, they tend to consult the reviews available online to get a clear idea of the product in terms of its various aspects. The problem is that the volume of information available about a single product is so large that users may not be able to extract the information they require from this massive amount of data. This paper proposes a system that generates a temporal, aspect-based text summary of user opinions collected from different sources across the Internet together with their time-stamps. These comments are broken into sentences and sub-sentences after classification into predefined aspects, and sentiment analysis is then performed. The time relationship is taken into account, and causal relationships are identified at the deflection points, i.e., the time frames during which there is a significant opinion change. The major advantage of this system is that changes in user opinion over time can be traced and the cause of each sentiment change can be found, in addition to offering customers a quick, convenient and easy way to consume information about a product and decide whether or not to purchase it. It also helps enterprises obtain relevant insights about their products from customer reviews online.
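The paper's "deflection points" are time frames with a significant opinion change. The sketch below shows one simple way such points could be flagged, by averaging time-stamped sentiment scores per period and marking period-to-period jumps above a threshold; the scores, periods, and threshold are illustrative and not the paper's pipeline.

```python
# Illustrative detection of "deflection points": group time-stamped sentiment
# scores (assumed to lie in [-1, 1]) by month and flag month-to-month changes
# in mean sentiment larger than a threshold.  Data and threshold are made up.
from collections import defaultdict

reviews = [                      # (YYYY-MM, sentiment score): hypothetical data
    ("2023-01", 0.6), ("2023-01", 0.5), ("2023-02", 0.4),
    ("2023-02", 0.5), ("2023-03", -0.3), ("2023-03", -0.4),
]
THRESHOLD = 0.5                  # minimum jump in mean sentiment to report

def deflection_points(data, threshold=THRESHOLD):
    buckets = defaultdict(list)
    for period, score in data:
        buckets[period].append(score)
    means = {p: sum(v) / len(v) for p, v in buckets.items()}
    ordered = sorted(means)
    return [(prev, cur, means[cur] - means[prev])
            for prev, cur in zip(ordered, ordered[1:])
            if abs(means[cur] - means[prev]) >= threshold]

if __name__ == "__main__":
    for prev, cur, delta in deflection_points(reviews):
        print(f"opinion shift between {prev} and {cur}: {delta:+.2f}")
```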
The trend of learning from videos instead of documents has increased. There can be hundreds or thousands of videos on a single topic, with varying context, content, and depth. The literature claims that learners are nowadays less interested in viewing a complete video and instead prefer the topics of their interest. This creates the need for indexing of video lectures. Manual annotation, or topic-wise indexing, is not new for videos; however, manual indexing is time-consuming because of the length of a typical video lecture and its intricate storyline. Automatic indexing and annotation is therefore a better and more efficient solution. This research aims to establish the need for automatic video indexing for better information retrieval and to ease user navigation across topics inside a video. The automatically identified topics are referred to as "Index Points." A 137-layer YOLOv4 Darknet neural network is used to create a custom object-detector model. The model is trained on approximately 6000 video frames and then tested on a suite of 50 videos with around 20 hours of total run time. Shot-boundary detection is performed using Structural Similarity fused with a Binary Search Pattern algorithm, which outperformed the state-of-the-art SSIM technique by reducing the processing time to approximately 21% while providing around 96% accuracy. The generation of accurate index points, in terms of true positives and false negatives, is evaluated through precision, recall, and F1 score, which vary between 60-80% for each video. The results show that the proposed algorithm successfully generates a digital index with reasonable accuracy in topic detection.
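The general idea of fusing SSIM with a binary-search pattern can be illustrated as follows: compare widely spaced frames first and only bisect intervals whose SSIM falls below a threshold. The sketch below does this on a pre-extracted list of grayscale frames using scikit-image's structural_similarity; the frame source, threshold, and exact fusion strategy are assumptions, not the paper's implementation.

```python
# Illustrative SSIM + binary-search shot-boundary sketch: frames are assumed
# to be pre-extracted grayscale numpy arrays of equal size.  Only intervals
# whose endpoint SSIM drops below THRESHOLD are bisected, so unchanged
# stretches of video are skipped.  The threshold is a demo value.
import numpy as np
from skimage.metrics import structural_similarity as ssim

THRESHOLD = 0.7   # endpoint similarity below this suggests a cut inside

def find_boundaries(frames, lo=0, hi=None, boundaries=None):
    hi = len(frames) - 1 if hi is None else hi
    boundaries = [] if boundaries is None else boundaries
    if hi - lo < 1:
        return boundaries
    if ssim(frames[lo], frames[hi], data_range=1.0) >= THRESHOLD:
        return boundaries                      # whole interval looks like one shot
    if hi - lo == 1:
        boundaries.append(hi)                  # cut lands between lo and hi
        return boundaries
    mid = (lo + hi) // 2                       # bisect the suspicious interval
    find_boundaries(frames, lo, mid, boundaries)
    find_boundaries(frames, mid, hi, boundaries)
    return boundaries

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shot_a = [np.full((64, 64), 0.2) + rng.normal(0, 0.01, (64, 64)) for _ in range(5)]
    shot_b = [np.full((64, 64), 0.8) + rng.normal(0, 0.01, (64, 64)) for _ in range(5)]
    print("detected boundaries at frames:", find_boundaries(shot_a + shot_b))
```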