
Associate Professor and Vice Principal at Bharati Vidyapeeth's College of Engineering for Women Pune Maharashtra India
The number of vehicles on the roads is increasing in proportion to economic growth. A large number of vehicles increases violations of the law, causes accidents, and leads to crime. Traffic flow monitoring is needed to overcome these problems, and intelligent systems can play an important role in traffic management by identifying vehicles automatically. In this research study, a system has been developed for license plate detection and recognition using a convolutional neural network (CNN), a deep learning method. The system consists of two parts: license plate region detection and license plate character recognition. Images of the vehicle are captured with digital cameras, and the system then locates the license plate region within the image frame. After extracting the plate region, a method is adopted to convert the low-resolution image to a high-resolution one; this technique uses CNN layers to reconstruct the pixel quality of the input image. Each character on the plate is then segmented using a bounding box. In the recognition stage, the CNN is used for feature extraction and classification.
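The convolutional layers referred to above reduce to one core operation. The minimal sketch below (a toy 5x5 image and a Sobel-like kernel, both illustrative and not the paper's network) shows how a single 2-D convolution produces a feature map that responds to character edges:

```python
# Minimal valid-mode 2D convolution, the core operation of a CNN layer.
def conv2d(image, kernel):
    """Valid-mode 2D convolution (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            out[i][j] = acc
    return out

# A vertical edge (dark left half, bright right half)...
image = [[0, 0, 1, 1, 1]] * 5
# ...responds strongly to a vertical Sobel-like kernel.
kernel = [[-1, 0, 1],
          [-2, 0, 2],
          [-1, 0, 1]]
feature_map = conv2d(image, kernel)
```

In a trained network many such kernels are learned from data rather than fixed by hand; stacking them is what lets the recognizer separate one character shape from another.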
Concerns regarding their opacity and potential ethical ramifications have been raised by the spread of algorithmic decision-making systems across a variety of fields. By promoting the use of interpretable machine learning models, this research addresses the critical requirement for openness and moral responsibility in these systems. Interpretable models provide a transparent and intelligible depiction of how decisions are made, as opposed to complicated black-box algorithms. Users and stakeholders need this openness in order to understand, verify, and hold accountable the decisions made by these algorithms. Furthermore, interpretability promotes fairness in algorithmic outcomes by making it easier to detect and reduce biases. In this article, we give an overview of the difficulties brought on by algorithmic opacity, highlighting how crucial it is to address them in a variety of settings, including healthcare, banking, criminal justice, and more. From linear models to rule-based systems to surrogate models, we give a thorough analysis of interpretable machine learning techniques, highlighting their benefits and drawbacks. We suggest that incorporating interpretable models into the design and use of algorithms can result in a more responsible and ethical application of AI in society, ultimately benefiting people and communities while lowering the risks connected to opaque decision-making processes.
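As a concrete contrast to a black-box model, here is a minimal sketch of an interpretable linear scorer; the credit-style features, weights, and the `explain` helper are hypothetical, chosen only to show how each decision decomposes into readable per-feature contributions:

```python
import math

# A linear model's decision is a readable weighted sum (weights are made up).
weights = {"income": 0.8, "debt_ratio": -1.5, "late_payments": -0.9}
bias = 0.2

def score(applicant):
    """Logistic score: probability derived from a transparent weighted sum."""
    z = bias + sum(weights[f] * applicant[f] for f in weights)
    return 1 / (1 + math.exp(-z))

def explain(applicant):
    """Per-feature contribution (weight * value), sorted by absolute impact."""
    contrib = {f: weights[f] * applicant[f] for f in weights}
    return sorted(contrib.items(), key=lambda kv: -abs(kv[1]))

applicant = {"income": 1.2, "debt_ratio": 0.4, "late_payments": 1.0}
p = score(applicant)
reasons = explain(applicant)
```

Every prediction can be accompanied by `reasons`, a ranked list of which features pushed the decision and by how much; this is exactly the kind of accountability a black-box model cannot offer directly.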
The primary focus of this study is the validation of composite additives with the help of additional optimization methods and the analysis of their effect on the combustion characteristics of compression ignition (CI) engines. Previous work on identifying the correct multi-additive combination using the Taguchi and TOPSIS optimization methods showed substantial improvements in the performance and emission characteristics of CI engines. That work is extended here using the GRA optimization method together with the Multi-Criteria Decision-Making (MCDM) technique known as the Analytic Hierarchy Process (AHP) to validate the results of the previous optimization work. Remarkably, all optimization methods yielded consistent results, pointing to the superiority of the composite additive sample 'D8EH6E4', thus supporting the outcome of the previous work. Subsequent testing and comparison of this novel composite additive with baseline diesel fuel for combustion characteristics demonstrated notable improvements in combustion parameters, including a 25% reduction in the rate of pressure rise, an 18% decrease in net heat release rate, and a 6% decrease in mean gas temperature.
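The AHP step can be illustrated with the common geometric-mean approximation of criterion weights; the 3x3 pairwise comparison matrix below is invented for illustration and is not the study's actual data:

```python
def ahp_weights(matrix):
    """AHP priority weights via the geometric-mean approximation of the
    principal eigenvector of a pairwise comparison matrix."""
    n = len(matrix)
    gms = []
    for row in matrix:
        g = 1.0
        for x in row:
            g *= x
        gms.append(g ** (1.0 / n))
    total = sum(gms)
    return [g / total for g in gms]

# Criterion A judged 3x as important as B and 5x as important as C
# (illustrative judgments only).
pairwise = [[1,   3,   5],
            [1/3, 1,   2],
            [1/5, 1/2, 1]]
w = ahp_weights(pairwise)
```

The resulting weights then feed the MCDM ranking of candidate additive samples; in a full AHP study one would also check the consistency ratio of the judgment matrix.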
The increasing demand for electric vehicles (EVs) presents significant challenges for energy grids, particularly in balancing demand and supply during peak charging periods. This paper proposes an Adaptive Energy Management System (EMS) for EV charging stations that leverages artificial intelligence (AI) techniques to optimize power distribution and enhance grid stability. By integrating fuzzy logic and reinforcement learning algorithms, the proposed system dynamically adjusts charging power allocation based on real-time grid conditions and EV battery levels. The EMS ensures efficient energy use, minimizes grid overload risks, and enables seamless integration with renewable energy sources. Simulation results demonstrate the system’s ability to maintain grid stability while maximizing charging efficiency. This adaptive approach paves the way for future smart grid applications, offering scalability and robustness for large-scale EV deployments.
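A minimal sketch of the fuzzy-logic side of such an EMS is shown below, assuming just two made-up rules, triangular membership functions, and a hypothetical 22 kW charger; a real controller would combine many more rules with the reinforcement-learning component:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def charge_power(grid_load, soc, p_max=22.0):
    """Two illustrative fuzzy rules, defuzzified by weighted average:
    R1: IF grid load is LOW  AND battery is EMPTY -> charge FAST
    R2: IF grid load is HIGH OR  battery is FULL  -> charge SLOW"""
    low_load  = tri(grid_load, -0.1, 0.0, 0.7)
    high_load = tri(grid_load, 0.3, 1.0, 1.1)
    empty     = tri(soc, -0.1, 0.0, 0.8)
    full      = tri(soc, 0.2, 1.0, 1.1)
    r1 = min(low_load, empty)     # fires toward p_max
    r2 = max(high_load, full)     # fires toward 0.2 * p_max
    if r1 + r2 == 0:
        return 0.5 * p_max
    return (r1 * p_max + r2 * 0.2 * p_max) / (r1 + r2)

p_quiet = charge_power(grid_load=0.2, soc=0.3)   # quiet grid, low battery
p_peak  = charge_power(grid_load=0.9, soc=0.3)   # peak grid, low battery
```

The controller allocates near-maximum power when the grid is quiet and throttles sharply at peak load, which is the demand-balancing behavior the abstract describes.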
The availability of future fusion devices, such as a Fusion Nuclear Science Facility, depends greatly on long operating lifetimes of the plasma-facing components in their divertors. Material-Plasma Exposure will use a new high-intensity plasma source concept based on RF technology. This source concept allows tests to cover the entire expected plasma conditions in the divertor of a future fusion reactor, with the option to study erosion and redeposition for relevant geometries, with relevant electric and magnetic fields, in front of the target. Material-Plasma Exposure is being designed to allow the exposure of a priori neutron-irradiated samples. The target exchange chamber has been designed for the linear plasma generator such that targets can be transferred for more detailed surface analysis. Material-Plasma Exposure is being developed in a staged approach with progressively increased capabilities. After the initial development step of the helicon source, the source concept is being tested in the Proto-Material-Plasma Exposure device. Initial heating with microwaves resulted in better ionization, represented by higher electron density on axis, compared with the helicon plasma alone without heating.
The purpose of this study is to investigate an optimal hybrid ML model that combines an oversampling technique (SMOTE) and a feature selection technique (SA) to help predict which employees may churn. The approach is integrated with classification algorithms such as KNN, Naive Bayes, MLP, and LR. The focus is on the true-positive accuracy predicted by the model. The dataset was split, with 70% used to train the algorithm and 30% used to test it, resulting in an accuracy of approximately 93%. We compared the features selected by the model in this study with those previously listed by domain experts to see which yielded the more reliable results. This helps the HR system adopt the right scenarios in real time, correctly predict which employees may leave the company, and understand why they are doing so.
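The oversampling step can be sketched in plain Python. The simplified SMOTE below (the feature values, `k`, and seed are illustrative assumptions, not the paper's implementation) interpolates new minority-class points between nearest neighbours:

```python
import random

def smote(minority, n_new, k=2, seed=42):
    """Generate synthetic minority samples by interpolating between a
    random minority point and one of its k nearest neighbours
    (a simplified SMOTE)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # k nearest neighbours of a (excluding a itself), by squared distance
        nbrs = sorted((p for p in minority if p is not a),
                      key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)))[:k]
        b = rng.choice(nbrs)
        t = rng.random()
        out.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    return out

# Hypothetical 2-feature vectors for the minority "churner" class.
churners = [(0.9, 0.1), (0.8, 0.2), (0.85, 0.15)]
synthetic = smote(churners, n_new=4)
```

Balancing the classes this way before training is what lets the classifiers focus on true-positive accuracy rather than simply predicting the majority "stays" class.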
Since the start of the COVID-19 pandemic, a wide range of medications have been produced and are currently being used to treat the disease. Tulsi, in addition to all of the chemical-based medications, is an herbal therapy that is particularly effective in treating this ailment. Tulsi has been used to heal ailments and infections for millennia, particularly in India. Because tulsi is used for medicinal purposes, it is vital to monitor its health in order to reap the full benefits of its herbal properties. Plant diseases harm the health and growth of the plant, so disease detection is crucial to allow treatment before the disease spreads throughout the plant. To detect illnesses in tulsi leaves, we propose a model based on convolutional neural networks (CNNs); image processing and CNNs are widely employed for this purpose. The trained model extracts the image's key features and categorizes it into different disorders. The model has a 75% accuracy rate.
Recurrent neural networks (RNNs) and long short-term memory (LSTM) networks are two methods for music generation that have been studied by researchers. Many LSTM networks have been employed to create character-level musical scores. However, to produce pleasing and grammatically accurate sheet music, these LSTM models require a significant amount of training time. As an alternative to LSTM models, this study uses Gated Recurrent Unit (GRU) networks, which have two gates and do not maintain a separate internal cell state. Peer evaluations of the resulting music's quality use qualitative criteria. The proposed GRU-based model is compared subjectively with an LSTM model that does not use similar techniques.
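The structural difference from an LSTM can be seen in a single GRU step written out by hand; the scalar parameters and the toy pitch sequence below are illustrative, not trained values:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def gru_step(x, h, p):
    """One GRU step for scalar input/state: an update gate z and a reset
    gate r produce candidate state h~ -- there is no separate cell state,
    unlike an LSTM."""
    z = sigmoid(p["wz"] * x + p["uz"] * h + p["bz"])        # update gate
    r = sigmoid(p["wr"] * x + p["ur"] * h + p["br"])        # reset gate
    h_cand = math.tanh(p["wh"] * x + p["uh"] * (r * h) + p["bh"])
    return (1 - z) * h + z * h_cand

# Illustrative (untrained) parameters and a toy pitch sequence.
params = {k: 0.5 for k in ("wz", "uz", "bz", "wr", "ur", "br", "wh", "uh", "bh")}
h = 0.0
for note in (0.1, 0.4, 0.9):
    h = gru_step(note, h, params)
```

Because the GRU tracks only one state vector and two gates, each step is cheaper than an LSTM step, which is the basis of the training-time advantage the study exploits.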
Nowadays everyone suffers from infections. Ayurveda, the science of life, has helped people take preventive measures that boost immunity; it is a plant-based science. Many medicinal plants are useful in the daily life of common people for boosting immunity. Identifying plant species with medicinal value is challenging and requires a botanical expert. In manual identification, botanical experts use various plant features as identification keys, which are examined adaptively and progressively to identify the species. The shortage of experts and trained taxonomists has created the global taxonomic impediment, one of the major challenges. Various researchers have worked on automatic classification of plants over the last decade. The leaf is taken as the primary input, as it is available throughout the year. This paper mainly focuses on the transfer learning approach for medicinal plant classification, which reuses an already developed model as the starting point for a model on a second task. Transfer learning is a black-box approach used for image classification and many other applications, extracting features from an image. Some transfer learning models are MobileNet-V1, VGG-19, ResNet-50, and VGG-16. The freely available Mendeley dataset of Indian medicinal plant species is used here, and the output layer classifies the leaf species. The results provide an evaluation of, and the variation among, the feature-extraction models listed above; MobileNet-V1 achieves the maximum accuracy of 98%.
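The transfer-learning pattern, a frozen feature extractor with a new trainable head, can be sketched without any deep learning library; `frozen_base`, its two-value features, and the prototype classifier below are toy stand-ins for MobileNet-V1's convolutional stack and dense output layer:

```python
def frozen_base(image):
    """Stand-in for a pretrained, frozen feature extractor: maps an image
    to a fixed descriptor (here just mean and variance of pixels)."""
    flat = [px for row in image for px in row]
    mean = sum(flat) / len(flat)
    var = sum((p - mean) ** 2 for p in flat) / len(flat)
    return (mean, var)

class Head:
    """New output layer trained on top of the frozen features:
    nearest-prototype classification over feature space."""
    def __init__(self, prototypes):
        self.prototypes = prototypes          # class -> feature prototype

    def predict(self, features):
        return min(self.prototypes,
                   key=lambda c: sum((f - p) ** 2
                                     for f, p in zip(features,
                                                     self.prototypes[c])))

# Hypothetical class prototypes learned from the frozen features.
head = Head({"tulsi": (0.8, 0.01), "neem": (0.3, 0.05)})
leaf = [[0.9, 0.8], [0.7, 0.8]]               # toy 2x2 "leaf image"
species = head.predict(frozen_base(leaf))
```

Only the head is fitted to the new dataset; the base stays fixed, which is why transfer learning needs far less data and training time than learning the extractor from scratch.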
The control of harmful emissions from diesel engines has always been one of the primary factors in improving air quality, and a number of innovations have been made in this domain over the past many years. In the present work, the effect of dimethyl carbonate (DMC) as a fuel additive on a CI engine is investigated with regard to reducing smoke and NOx emissions, together with the effect of engine parameters on the effectiveness of DMC. The present work also attempts to optimize the percentage of DMC and the operating parameters to achieve better engine economy and emissions control using the Taguchi method. The study evaluated the impacts of different blending ratios of DMC (0-20%) in diesel fuel. These DMC blends were tested against three significant parameters: compression ratio (CR, 15:1 to 19:1), fuel injection pressure (FIP, 200 to 280 bar), and fuel injection timing (FIT, 21° to 24° bTDC). After analyzing the S/N ratios and contour interaction plots, the optimum effect of dimethyl carbonate is obtained at 5-10% blending with diesel under engine operating conditions of a compression ratio of 18, injection pressure of 250 bar, and injection timing of 23° bTDC. It is also observed that the single-objective method used here was not able to identify the exact optimum combination of DMC and engine operating parameters; hence, selection of a suitable multi-objective optimization method is necessary for further study.
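The Taguchi analysis rates each parameter setting by its signal-to-noise ratio; for emissions, the smaller-the-better form applies. The sketch below uses invented smoke-opacity readings purely to show the computation:

```python
import math

def sn_smaller_better(values):
    """Taguchi smaller-the-better signal-to-noise ratio:
    S/N = -10 * log10(mean(y^2)). Higher S/N is better."""
    return -10 * math.log10(sum(v * v for v in values) / len(values))

# Illustrative smoke-opacity readings (%) for two blend levels
# (hypothetical data, not the study's measurements).
smoke_d0  = [52.0, 55.0, 53.0]    # neat diesel
smoke_d10 = [40.0, 42.0, 41.0]    # 10% DMC blend
sn_d0 = sn_smaller_better(smoke_d0)
sn_d10 = sn_smaller_better(smoke_d10)
```

The setting with the higher S/N ratio at each factor level is selected as optimal; the contour interaction plots then reveal how pairs of factors (e.g. CR and FIP) interact.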
Using predictive analytics is a key part of making the cyber supply chain safer because it helps companies find and stop threats before they happen. Predictive analytics uses complex algorithms and machine learning to look through huge amounts of data for trends and outliers that could point to cyber dangers, letting businesses stay ahead of attackers and keep their digital assets safe. In the cyber supply chain, one of the main strengths of predictive analytics is that it can find new threats before they become full-blown attacks: by examining past data, current trends, and other factors, it can anticipate possible cyber dangers and weak spots. In turn, this lets businesses take strategic steps to lower these risks, such as fixing security holes or adding more protections. Predictive analytics can also improve the cyber supply chain's ability to respond to incidents: by analyzing data from many sources, such as network logs, endpoint devices, and security monitors, it helps find and rank possible security events, so companies can react quickly and effectively to cyberattacks and lessen their effects. In short, predictive analytics is an effective way to make the cyber supply chain safer: using advanced algorithms and machine learning, businesses can find and stop cyber dangers before they happen, keep their digital assets safe, and improve their overall security.
Electrospinning is an easy and versatile method to synthesize nanofibers of different polymers and compounds. The main merits of this process are that we can obtain continuous and ultra-thin fibers, which makes the process suitable for mass production, and it overcomes many limitations of other processes. Hence, in the present work, manufacturing of fibers and optimization of process parameters have been carried out. There are about 16 process parameters in the electrospinning process; from these, four parameters were selected (distance between spinneret and drum collector, voltage, flow rate, and viscosity). Characterization of the manufactured nanofibers was then done using SEM. Finally, applying the techniques of DOE and ANOVA, the effect of these parameters on the diameter of the nanofibers was predicted.
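A main-effects calculation of the kind produced by DOE can be sketched as follows; the two-level factorial runs and fiber diameters below are invented for illustration and are not the measured data:

```python
# Two-level full factorial over two of the four parameters (illustrative):
# (voltage kV, tip-to-collector distance cm) -> mean fiber diameter (nm).
runs = {(15, 10): 420.0, (15, 15): 380.0,
        (25, 10): 310.0, (25, 15): 250.0}

def main_effect(runs, index):
    """Main effect of factor `index`: average response at the high level
    minus average response at the low level."""
    levels = sorted({k[index] for k in runs})
    lo = [d for k, d in runs.items() if k[index] == levels[0]]
    hi = [d for k, d in runs.items() if k[index] == levels[1]]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

voltage_effect  = main_effect(runs, 0)   # negative: higher voltage, thinner fiber
distance_effect = main_effect(runs, 1)
```

ANOVA then tests whether each effect is statistically significant relative to the run-to-run noise, which is how the four selected parameters are ranked by their influence on diameter.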
New developments in materials science have produced high-strength temperature-resistant (HSTR) materials with many extraordinary qualities. Nowadays these advanced and smart materials, such as nickel-based superalloys, stainless steel, and tool steel, are in demand in the automotive, aerospace, electronics, medical device, and communication industries for the production of more durable and reliable products. Machining these materials is a challenge, and traditional manufacturing techniques are often found unfit for the purpose; one needs to use non-traditional or advanced manufacturing techniques in general and advanced machining processes in particular. ECM provides an alternative to traditional or conventional machining processes without direct contact between the tool and the workpiece, with high material removal rates, irrespective of the diverse mechanical properties of the workpiece. This review paper deals with research on the electrochemical machining (ECM) process carried out to improve machining performance and surveys current developments in ECM. Different approaches to improve the dimensional control and process performance of ECM are reviewed.
Keywords: Electrochemical machining, machining performance, surface roughness, metal removal rate, dimensional control
Cite this Article: Avinash M. Pawar, Sachin S. Chavan, S.T. Chavan et al. Innovative Approaches to Improve Electrochemical Machining Performance. Journal of Aerospace Engineering & Technology. 2017; 7(2): 9-16p.
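The theoretical upper bound on ECM material removal rate follows from Faraday's law, MRR = I·M/(z·F·ρ). The sketch below evaluates it for an iron workpiece using standard handbook constants; the 100 A current is an arbitrary example, not a value from the reviewed studies:

```python
def ecm_mrr_mm3_per_min(current_a, atomic_mass_g, valency, density_g_cm3):
    """Theoretical ECM material removal rate from Faraday's law:
    mass rate = I * M / (z * F); divide by density for volume rate.
    Returns mm^3/min (100% current efficiency assumed)."""
    F = 96485.0                                    # Faraday constant, C/mol
    g_per_s = current_a * atomic_mass_g / (valency * F)
    cm3_per_s = g_per_s / density_g_cm3
    return cm3_per_s * 60 * 1000                   # cm^3/s -> mm^3/min

# Iron workpiece: M = 55.85 g/mol, z = 2, rho = 7.86 g/cm^3, at 100 A.
mrr = ecm_mrr_mm3_per_min(100.0, 55.85, 2, 7.86)
```

Real removal rates fall below this bound because current efficiency is under 100%; closing that gap, and controlling where the current flows for dimensional accuracy, is much of what the reviewed improvement approaches target.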
Semiconductor fabrication is a business of high capital investment and fast-changing nature. To be competitive, production in a fab must be effectively planned and scheduled starting from the ramp-up stage, so that business objectives such as on-time delivery, high yield volume, and effective utilization of capital-intensive equipment can be achieved. Simulation provides an effective tool for charting the path from competitive concepts to real-world solutions. More attention is now being focused on the accuracy of the data collected, the means for extracting and importing data into the models, and keeping up to date with changes in the fab. Managers and engineers are nowadays rightly concerned about whether the model is a good representation of the fab and whether the results are correct. This is addressed through verification and validation. The simulation team performs validation by comparing the spreadsheet models against fab data, but formal procedures have not been applied for the purpose of validation, nor are the verification and validation techniques formally documented. Additionally, management decided to experiment with new software called Lucent AP.
In recent times, agriculture has gained a lot of attention from researchers. More precisely, crop prediction is a trending research topic, as it can lead agri-business to success or failure. Crop prediction rests entirely on climatic and chemical changes. In the past, which crop to grow was chosen by the farmer, and all decisions related to cultivation, fertilizing, harvesting, and farm maintenance were taken by the farmer himself based on his experience. But because of constant fluctuations in atmospheric conditions, reaching any conclusion has become very tough. Picking the correct crop to grow at the right time under the right circumstances can help the farmer do more business. To achieve what cannot be done manually, machine learning models are now being built for this task. In crop prediction, deciding which parameters to consider, and whose impact on the final decision will be greater, is equally important; for this we use feature selection models, which refine the raw data into a more precise form. Though various techniques exist to address this problem, better performance is still desirable. In this research we provide a more precise and optimal solution for crop prediction for the Satara, Sangli, and Kolhapur regions of Maharashtra. Along with the crop and fertilizers to increase harvest, we also suggest nearby industrialization so the farmer can trade the yield and earn more profit. The proposed solution uses machine learning algorithms such as KNN, Random Forest, and Naive Bayes; Random Forest outperforms the others, so it is used to build the final framework to predict the crop.
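The Random Forest idea, many randomized trees voting, can be sketched with bootstrapped one-split trees; the rainfall/soil-pH dataset and crop labels below are invented toy data, not the study's features:

```python
import random
from collections import Counter

# Toy dataset: (rainfall_mm, soil_ph) -> crop (hypothetical values).
data = [((700, 6.5), "sugarcane"), ((720, 6.8), "sugarcane"),
        ((400, 7.5), "jowar"),     ((380, 7.8), "jowar"),
        ((650, 6.4), "sugarcane"), ((420, 7.6), "jowar")]

def train_stump(sample):
    """Fit a one-split tree on rainfall: threshold at the midpoint
    between per-class mean rainfall (a deliberately simple base learner)."""
    by_class = {}
    for (rain, _), crop in sample:
        by_class.setdefault(crop, []).append(rain)
    means = {c: sum(v) / len(v) for c, v in by_class.items()}
    hi = max(means, key=means.get)
    lo = min(means, key=means.get)
    thr = (means[hi] + means[lo]) / 2
    return lambda x: hi if x[0] > thr else lo

def forest_predict(x, n_trees=15, seed=0):
    """Bagging: each stump sees a bootstrap sample; majority vote wins."""
    rng = random.Random(seed)
    votes = [train_stump(rng.choices(data, k=len(data)))(x)
             for _ in range(n_trees)]
    return Counter(votes).most_common(1)[0][0]

pred = forest_predict((690, 6.6))
```

A real Random Forest also samples a random feature subset at each split and grows deep trees, but the bagging-plus-voting structure that makes it robust is exactly what is shown here.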
Heat exchangers are apparatus widely used in various industries to transfer heat between two or more fluid streams. A theoretical analysis by Kern's method was performed on a shell-and-tube heat exchanger with segmental baffles, followed by CFD experiments to check the heat transfer rate and pressure drop with a varying number of baffles (6, 8, and 10), keeping all other parameters constant. It is found that as the number of baffles increases, the heat transfer rate increases, and the pressure drop also increases significantly. To optimize the model, in the next step CFD analysis was done by varying the baffle cut to 25%, 35%, and 45%, while keeping the number of baffles that gave the best heat transfer rate in the earlier Kern's-method analysis. Other parameters such as velocity, temperature, pressure, and baffle number were kept constant. The results obtained from the numerical solution are analyzed extensively to determine the effect of baffle cut on the shell-side heat transfer rate and pressure drop. Addressing a pressing need of the heat exchanger industry, the trade-off between pressure drop and heat transfer coefficient has been studied by changing two parameters, baffle spacing and baffle cut, simultaneously, to observe how they affect the performance of an STHX. An attempt to find the optimum geometric configuration with respect to the number of baffles has also been carried out. For the same shell length, increasing the number of baffles and increasing the baffle cut from 25% to 35% increases the heat transfer rate. This is helpful in elucidating that a higher heat transfer coefficient can be obtained with fewer baffles when coupled with the appropriate baffle cut, guiding the selection of a correctly optimized shell-and-tube heat exchanger. The work is carried out for copper tube bundles and a steel shell; the optimization of baffle cut with the correct number of baffles, and the reduction in pressure drop, are studied with a CFD tool.
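Kern's method builds on the basic rating equation Q = U·A·ΔT_lm. The sketch below computes the counter-flow log-mean temperature difference; U, A, and the terminal temperatures are chosen arbitrarily for illustration rather than taken from the studied exchanger:

```python
import math

def lmtd_counterflow(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Log-mean temperature difference for a counter-flow exchanger:
    LMTD = (dT1 - dT2) / ln(dT1 / dT2)."""
    dt1 = t_hot_in - t_cold_out
    dt2 = t_hot_out - t_cold_in
    if abs(dt1 - dt2) < 1e-12:      # limit case: equal terminal differences
        return dt1
    return (dt1 - dt2) / math.log(dt1 / dt2)

# Q = U * A * LMTD with illustrative values (assumed, not the paper's geometry).
U = 450.0        # overall heat transfer coefficient, W/m^2.K
A = 12.0         # heat transfer area, m^2
lmtd = lmtd_counterflow(90.0, 60.0, 25.0, 45.0)   # temperatures in deg C
Q = U * A * lmtd                                   # heat duty, W
```

In the full Kern procedure, U itself is built from shell-side and tube-side film coefficients, and it is the shell-side coefficient that the baffle count and baffle cut directly influence.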
Hybrid renewable energy resource (RER) microgrids offer a sustainable solution to integrating multiple energy sources like solar and wind into modern power systems. However, managing the intermittent nature of these resources presents significant challenges in maintaining stable and efficient power distribution. This study explores innovative control approaches for optimizing power management in hybrid RER microgrids. By utilizing advanced algorithms, including real-time optimization and machine learning techniques, the proposed framework ensures efficient energy distribution, reduces dependency on the grid, and enhances system stability. The research focuses on integrating these control strategies to balance energy supply and demand while maximizing the utilization of renewable energy sources. Simulation results demonstrate the effectiveness of the proposed methods in improving system efficiency and reliability under varying environmental conditions, contributing to the broader adoption of hybrid RER microgrids.
The development of analytical methods facilitates comprehension of the crucial process factors and reduces their impact on precision and accuracy. A confirmed systematic technique guarantees accurate, consistent, and dependable data. The metrics shown here are in accordance with ICH criteria and include linearity, range, robustness, accuracy, precision, specificity, and limit of detection, as well as the limit of quantitation. The selective method will produce repeatable, dependable, and consistent results sufficient for the intended purpose, thanks to method validation. As a result, it is essential to specify exactly what the technique is to be employed for, as well as under what circumstances. Therefore, one of the most important steps a laboratory should take to develop trustworthy analytical methods is method validation.
Keywords: ICH recommendations, validation, accuracy, specificity, precision.
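Two of the validation metrics listed, the detection and quantitation limits, have standard ICH Q2 formulas based on the response standard deviation (sigma) and the calibration-curve slope (S); the numeric values below are illustrative only:

```python
def lod(sigma, slope):
    """ICH Q2 detection limit: LOD = 3.3 * sigma / S."""
    return 3.3 * sigma / slope

def loq(sigma, slope):
    """ICH Q2 quantitation limit: LOQ = 10 * sigma / S."""
    return 10.0 * sigma / slope

# Illustrative values: SD of blank responses and calibration slope.
sigma, slope = 0.12, 4.0
detection_limit = lod(sigma, slope)
quantitation_limit = loq(sigma, slope)
```

The same sigma and slope come out of the linearity study, which is why LOD/LOQ estimation is typically performed alongside the regression step of validation.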