The drive toward smarter urban areas has become unmistakable in modern cities owing to the rise of embedded, connected smart devices, systems, and technologies. It is possible to connect every object to the Internet. As a result, in the coming Internet of Things era, the Internet of Vehicles (IoV) will play a critical role in newly developed smart cities. The IoV can potentially address various traffic and road-safety issues effectively to prevent fatal crashes. However, a particular challenge of the IoV, especially in Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communications, is guaranteeing that information is transmitted quickly, securely, and accurately. To overcome these challenges, this work adapts Blockchain technology to a real-time application (RTA) that addresses the problems of Vehicle-to-Everything (V2X) communications. Accordingly, the main goal of the study is to develop a Blockchain-based IoT framework that establishes communication security and creates a fully decentralized computing platform. The research methodology is divided into two major sections. In the first part, the authors discuss traceability and optimization over Merkle trees. The second part deals with implementing an actual blockchain, with the optimized Merkle tree as the underlying technology, to represent a distributed trust-based ledger.
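The Merkle-tree layer underlying the second part can be illustrated with a minimal sketch (SHA-256, with an odd node paired with itself, a common convention); this is not the authors' optimized variant, only the baseline construction:

```python
import hashlib

def _h(data: bytes) -> bytes:
    """SHA-256, used here for both leaves and internal nodes."""
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Compute the Merkle root of a list of leaf payloads.

    Each level hashes adjacent pairs; an odd node at any level is
    paired with itself. Changing any leaf changes the root, which is
    what makes the tree usable as a tamper-evidence mechanism in a
    blockchain ledger.
    """
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

A vehicle (or roadside unit) can then verify a single transaction against the root with a logarithmic-size proof instead of re-downloading the whole block.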
Alzheimer's disease is a chronic neurological brain disease. Early diagnosis of Alzheimer's disease may prevent the occurrence of memory cell injury. Neuropsychological tests are commonly used to diagnose Alzheimer's disease; however, this technique has limited specificity and sensitivity. This article addresses this issue with an early diagnosis model of Alzheimer's disease based on a hybrid meta-heuristic with a multi-feed-forward neural network. The proposed Alzheimer's disease detection model includes four major phases: pre-processing, feature extraction, feature selection, and classification (disease detection). Initially, the collected raw data is pre-processed using the SPM12 package of MATLAB. Then, statistical features (mean, median, and standard deviation) and DWT features are extracted from the pre-processed data. Next, the optimal features are selected from the extracted features using the new hybrid sine cosine firefly algorithm (HSCAFA), a conceptual improvement of the standard sine cosine optimization and firefly optimization algorithms. Finally, disease detection is accomplished via the new regression-based multi-faith neighbors' network (MFNN), from which the final detection outcome is acquired. The proposed methodology is implemented on the Python platform, and performance is evaluated with metrics such as precision, recall, and accuracy.
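The statistical features and the DWT step named in the pipeline can be sketched as follows. This is a minimal illustration: the abstract does not specify the wavelet, so a one-level Haar transform is assumed here, and the signal is assumed to have even length.

```python
import statistics

def statistical_features(signal: list[float]) -> dict[str, float]:
    """The three statistical features named in the pipeline."""
    return {
        "mean": statistics.mean(signal),
        "median": statistics.median(signal),
        "std": statistics.pstdev(signal),  # population standard deviation
    }

def haar_dwt(signal: list[float]) -> tuple[list[float], list[float]]:
    """One-level orthonormal Haar DWT: pairwise sums (approximation)
    and differences (detail), each scaled by 1/sqrt(2).
    Assumes an even-length signal."""
    s = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail
```

The feature selector (HSCAFA) would then operate on the concatenation of these statistical and wavelet features.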
Smartphones are constantly changing in today's world, and as a result, security has become a major concern. Security is a vital aspect of human life, and in a world where security is lacking, it becomes a concern for mobile users' safety. Malware is one of the most serious security risks to smartphones. Mobile malware attacks are becoming more sophisticated and widespread. Malware authors consider the open-source Android platform to be their preferred target, as it has come to lead the market. State-of-the-art mobile malware detection solutions in the literature use a variety of metrics and models, making cross-comparison difficult. In this paper, various existing methods are compared, and a significant effort is made to briefly address Android malware and the various methods for detecting it, and to give a clear picture of the progress of the Android platform and of the various malware detection classifiers.
The Internet of Things (IoT) extends global connectivity to remote sensing devices. It enables communication with, and processing of, the real-time data collected from an enormous number of connected sensing devices. The growth of IoT technology has led to various malicious attacks. It is important to counter these attacks, mainly to stop attackers or intruders from taking control of devices. Ensuring the safety and accuracy of the sensing devices is a serious task, and enabling authenticity and integrity is essential to achieving it. Dynamic tree chaining, geometric star chaining, and onion encryption are the three solutions proposed in this project to provide authenticity and integrity with information hiding for secure communication. The simulation results show that the proposed system is very stable and outperforms existing solutions in terms of security, space, and time.
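Onion encryption wraps a payload in one encryption layer per hop, so each intermediate node can peel only its own layer. The following toy sketch derives an XOR keystream from SHA-256 in counter mode purely for illustration; it is not a vetted cipher and not the paper's construction, which it only mimics structurally:

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from a key (SHA-256 in counter mode).
    Illustrative only -- a real deployment would use a standard cipher."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_layer(data: bytes, key: bytes) -> bytes:
    """One encryption/decryption layer (XOR is its own inverse)."""
    ks = _keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def onion_encrypt(data: bytes, hop_keys: list[bytes]) -> bytes:
    """Apply the last hop's layer first, so hop 1 peels the outermost."""
    for key in reversed(hop_keys):
        data = xor_layer(data, key)
    return data

def onion_decrypt(data: bytes, hop_keys: list[bytes]) -> bytes:
    """Each hop, in order, removes its own layer."""
    for key in hop_keys:
        data = xor_layer(data, key)
    return data
```

The point of the layering is that no single hop sees both the plaintext and the full routing context.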
Nowadays, the Internet of Things (IoT) is used widely in our daily life, from health-care devices and hospital management appliances to the smart city. Most IoT devices have limited resources and limited storage capability, so all the sensed information must be transmitted to and stored in the cloud, and the data stored in the cloud must be retrieved for analysis and decision-making. Ensuring the credibility and security of the sensed information is essential for the use of IoT devices. We also examine whether the proposed techniques are more secure than existing ones. If security is not ensured in IoT, a variety of unwanted issues may result. This survey covers the overall safety aspects of IoT and discusses the overall issues in IoT security.
In the direction of computer globalization and digitization, India is rapidly developing its education and information technology. People are taught how to invest in deposits, postal investments, government bonds, gold schemes, and the private sector. The world in which we now live has been completely transformed by technology: studies indicate that there are more than 4 billion active Internet users worldwide, or nearly half of the world's population. Our lives are now faster, easier to manage, and more enjoyable thanks to modern technology. This paper focuses on developing a stock application using PHP, React JS, Node.js, and CSS. All the stock data is stored in a MySQL database. On the other side, for the machine learning application, Python code is used to convert the data into CSV format for the machine learning algorithms. The investor is presented with a login screen where they must enter their user name and password. The stock dashboard shows the investor's current stock holdings, as well as online stocks' current prices, percentage change in stocks, Sensex, Nifty, bonuses, rights, IPOs, annual reports, etc. Statistical methods are provided as software modules for the investor: with a single click of a button, they can compare and contrast their own stocks with online stocks, as well as the trend of the stock market, in order to decide whether to buy, hold, or sell. A data visualization component is used for comparing various stocks, and at the click of a button, a stock prediction is displayed indicating whether to hold, buy, or sell in the future according to the market trend. The trader must log in using their user name and password. The trader can browse the current market price of all stocks, buy and sell stocks, and view contract notes, client margins, e-off-market transactions, ledgers, journals, commissions on buying and selling stocks, and so on.
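The handoff from the MySQL store to the machine-learning code might look like the following sketch; the field names and the `stock_rows_to_csv` helper are illustrative assumptions, not the paper's actual code:

```python
import csv
import io

def stock_rows_to_csv(rows: list[dict]) -> str:
    """Serialize stock records (as pulled from the MySQL store) into CSV
    text for downstream machine-learning code.

    The column names below are assumptions for illustration; a real
    export would mirror the application's actual schema."""
    fields = ["symbol", "date", "open", "close", "volume"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

The resulting CSV can be loaded directly by common ML tooling without touching the database.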
As a future strategy, the stock application software will be converted into a portable mobile application using Python packages such as Kivy, PyQt, or BeeWare's Toga library.
Social engineering is a method of breaching information security by manipulating people into granting system or network access. Social engineering attacks happen when victims are unaware of the techniques, models, and frameworks that could prevent them. To stop social engineering attacks, current research describes user studies, constructs, assessments, concepts, frameworks, models, and techniques. Sadly, there is no specific prior research on mitigating social engineering attacks that analyzes the problem thoroughly and efficiently. Health campaigns, human-as-a-security-sensor frameworks, user-centric frameworks, and user vulnerability models are examples of current social engineering attack prevention techniques, models, and frameworks. For the human-as-a-security-sensor architecture, guidance is required to examine cybersecurity super-recognizers, who could possibly act as police for a secure system. This research aims to critically and systematically analyze earlier material on social engineering attack prevention strategies, models, and frameworks. Based on Bryman and Bell's methodology for conducting literature reviews, we carried out a systematic review of the available research. Based on our review, in addition to approaches, frameworks, models, and assessments, we discovered a novel protocol-based strategy to stop social engineering attacks. We found that the protocol can successfully stop social engineering attacks, drawing on health campaigns, the susceptibility of social engineering victims, and the co-utile protocol, which can control information sharing on a social network. We present this comprehensive evaluation of the research in order to suggest safeguards against social engineering attacks.
Teaching and learning computer programming is challenging for many undergraduate first-year computer science students. During introductory programming courses, novice programmers need to learn some basic algorithms, gain algorithmic thinking, improve their logical and problem-solving thinking skills, and learn data types, data structures, and the syntax of the chosen programming language. In literature, we can find various methods of teaching programming that can motivate students and reduce students’ cognitive load during the learning process of computer programming, e.g., using robotic kits, microcontrollers, microworld environments, virtual worlds, serious games, interactive animations, and visualizations. In this paper, we focus mainly on algorithm visualizations, especially on the different models of data structures that can be effectively used in educational visualizations. First, we show how a vector (one-dimensional array), a matrix (two-dimensional array), a singly linked list, and a graph can be represented by various models. Next, we also demonstrate some examples of interactive educational algorithm animations for teaching and learning elementary algorithms and some sorting algorithms, e.g., swapping two variables, summing elements of the array, mirroring the array, searching the minimum or maximum of the array, searching the index of minimum or maximum of the array, sorting elements of the array using simple exchange sort, bubblesort, insertion sort, minsort, maxsort, quicksort, or mergesort. Finally, in the last part of the paper, we summarize our experiences in teaching algorithmization and computer programming using algorithm animations and visualizations and draw some conclusions.
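Two of the listed elementary algorithms, simple exchange (bubble) sort and searching the index of the minimum of the array, can be written in Python as follows; the animations themselves are language-agnostic, so this is merely one concrete rendering:

```python
def bubble_sort(a: list) -> list:
    """Simple exchange sort, as typically animated step by step.
    Works on a copy so the input array is left unchanged."""
    a = a[:]
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]  # swap two variables
    return a

def index_of_minimum(a: list) -> int:
    """Search the index of the minimum element of the array."""
    best = 0
    for i in range(1, len(a)):
        if a[i] < a[best]:
            best = i
    return best
```

Each comparison and swap in these loops corresponds to one animation frame in a typical educational visualization.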
Acquiring algorithmic thinking is a long process with several steps. The most basic level of algorithmic thinking is when students recognize algorithms and the various problems that can be solved with algorithms. At the second level, students can execute given algorithms. At the third level of algorithmic thinking, students can analyze algorithms and recognize which steps are executed in sequences, conditions, or loops. At the fourth level, students can create their own algorithms. The last three levels of algorithmic thinking are: implementing algorithms in a programming language, modifying and improving algorithms, and creating complex algorithms. In preliminary research related to algorithmic thinking, we investigated how first-year undergraduate computer science students of J. Selye University can solve problems associated with the second, third, and fourth levels of algorithmic thinking. We chose these levels because they do not require knowledge of any programming language. The tasks that students had to solve were, for example: what will be the route of a robot when it executes the given instructions, or how many times one needs to cross a river to carry everyone to the other river-bank. Solving these types of tasks requires only good algorithmic thinking. The results showed that students reached an 81.4% average score on tasks related to the execution of given algorithms, a 72.3% average score on tasks where they needed to analyze algorithms, and a 66.2% average score on tasks where students needed to create algorithms. The latter tasks were mostly various river-crossing problems. Even though students reached a 66.2% average score on these tasks, if we had accepted only solutions with the optimal algorithms (minimal number of river crossings), they would have reached only a 21.3% average score, which is very low.
To help students find the optimal algorithms for river-crossing puzzles, we developed several interactive web-based animations. In the last part of this paper, we describe these animations, summarize how they were created, and explain how they can be used in education. Finally, we conclude and briefly mention our plans for future research.
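The classic wolf-goat-cabbage puzzle is one such river-crossing problem, and a breadth-first search over bank states finds the minimal number of crossings mechanically. The sketch below assumes this classic variant (the paper's puzzles may differ); BFS guarantees the first solution found uses the fewest crossings:

```python
from collections import deque

ITEMS = ("farmer", "wolf", "goat", "cabbage")

def safe(left: frozenset) -> bool:
    """A bank is unsafe if the farmer is absent while wolf+goat or
    goat+cabbage are left together on it."""
    for bank in (left, frozenset(ITEMS) - left):
        if "farmer" not in bank:
            if {"wolf", "goat"} <= bank or {"goat", "cabbage"} <= bank:
                return False
    return True

def min_crossings() -> int:
    """Breadth-first search over states; a state is the set of items on
    the left bank (the boat travels with the farmer)."""
    start = frozenset(ITEMS)   # everyone starts on the left bank
    goal = frozenset()         # everyone must reach the right bank
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        left, moves = queue.popleft()
        if left == goal:
            return moves
        boat_left = "farmer" in left
        bank = left if boat_left else frozenset(ITEMS) - left
        # The farmer rows alone or with one passenger from his bank.
        for passenger in [None] + [x for x in bank if x != "farmer"]:
            moved = {"farmer"} | ({passenger} if passenger else set())
            nxt = left - moved if boat_left else left | moved
            if safe(nxt) and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, moves + 1))
    return -1  # unreachable for this puzzle
```

Exhaustive search like this is exactly what the animations let students explore interactively, one crossing at a time.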
Along with computer technology, the demand for digital image processing is very high, and it is used massively in every sector, such as organizations, business, and medicine. Image segmentation enables us to analyze any given image in order to extract information from it. Numerous algorithms and techniques have been developed in the field of image segmentation, and segmentation has become one of the prominent tasks in machine vision. Machine vision enables a machine to perceive real-world problems as a human does and to act accordingly to solve them, so it is of utmost importance to develop techniques that can be applied to image segmentation. The invention of modern segmentation methods such as instance, semantic, and panoptic segmentation has advanced the concept of machine vision. This paper surveys the various methods of image segmentation along with their advantages and disadvantages.
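As one concrete example from the classical end of this family, Otsu's method picks a global threshold by maximizing between-class variance of the grayscale histogram. A pure-Python sketch over 8-bit pixel values (offered here as an illustration of threshold-based segmentation, not as a method from the paper):

```python
def otsu_threshold(pixels: list[int]) -> int:
    """Classic Otsu threshold for 8-bit grayscale values: choose the cut
    that maximizes the between-class variance of background/foreground."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = sum_b = 0                     # background weight and intensity sum
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b               # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b               # background mean
        m_f = (total_sum - sum_b) / w_f  # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels above the returned threshold form one segment and the rest the other, which is the simplest instance of the partitioning task the surveyed methods generalize.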
Digital data grows enormously as the years pass, and therefore a mechanism is needed to protect digital content. Image watermarking is one of the important tools for providing copyright protection and authorship. To achieve the ideal balance between imperceptibility and robustness, a robust blind color image watermarking scheme employing deep artificial neural networks (DANN), LWT, and the YIQ color model has been presented. In the suggested watermarking method, an original 512-bit watermark is applied for testing and a randomly generated watermark of the same length is used for training. PCA is used to select the 10 most significant of 18 statistical features, and binary classification is used to extract the watermark. For the four images Lena, Peppers, Mandrill, and Jet, the scheme achieves an average imperceptibility of 52.48 dB. For a threshold value of 0.3, it achieves a good balance between robustness and imperceptibility. It also demonstrates good robustness against common image attacks, except for Gaussian noise, rotation, and average filtering. The experimental results demonstrate that the suggested watermarking method outperforms competing methods.
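Python's standard `colorsys` module implements the RGB-YIQ conversion behind the scheme's color model; embedding in the Y (luminance) channel is the usual motivation for this space. The LWT and DANN stages are beyond this sketch, which shows only the color transform:

```python
import colorsys

def rgb_image_to_yiq(pixels):
    """Convert normalized RGB pixels (floats in [0, 1]) to YIQ tuples.
    Y carries luminance, which is where watermark bits are commonly
    embedded; I and Q carry chrominance."""
    return [colorsys.rgb_to_yiq(r, g, b) for r, g, b in pixels]

def yiq_image_to_rgb(pixels):
    """Inverse transform, used after embedding to rebuild the image."""
    return [colorsys.yiq_to_rgb(y, i, q) for y, i, q in pixels]
```

Because the transform is invertible, a small perturbation of Y maps back to an RGB image that is visually near-identical, which is what the reported dB imperceptibility measures.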
The memory card game is a game that probably everyone played in childhood. The game consists of n pairs of playing cards, where the two cards of each pair are identical. At the beginning of the game, the deck of cards is shuffled and laid face down. In every move of the game, the player flips over two cards. If the cards match, the pair of cards is removed from the game; otherwise, the cards are flipped back over. The game ends when all pairs of cards have been found. The game can be played by one, two, or more players. First, this paper shows an optimal algorithm for solving a single-player memory card game. In the algorithm, we defined four steps where the user needs to remember the earlier shown pairs of cards, which cards have already been shown, and the locations of the revealed cards. We denote the memories related to these steps M1, M2, M3, and M4. Next, we ran simulations in which we varied the M1, M2, M3, and M4 memories from no user memory (where the player does not remember the cards or pairs of cards at all) to a perfect user memory (where the player remembers every earlier shown card or pair of cards). With every memory setting, we simulated 1000 gameplays. We recorded how many cards or pairs of cards the player would need to remember and how many moves were required to finish the game. Finally, we evaluated the recorded data, illustrated the results on graphs, and drew some conclusions.
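A sketch of the perfect-memory end of the simulation spectrum is shown below. The authors' M1-M4 memory model is finer-grained; this illustrates only the limiting case, where every revealed card is remembered and a pair is collected as soon as both copies have been seen:

```python
import random

def play_perfect_memory(n_pairs: int, rng: random.Random) -> int:
    """Simulate one gameplay with perfect user memory.
    Returns the number of moves (each move flips two cards)."""
    deck = list(range(n_pairs)) * 2
    rng.shuffle(deck)
    unexplored = list(range(2 * n_pairs))  # positions never flipped yet
    seen = set()           # values seen exactly once so far
    known_pairs = []       # values whose two copies are both located
    matched = moves = 0
    while matched < n_pairs:
        moves += 1
        if known_pairs:                  # collect a fully remembered pair
            known_pairs.pop()
            matched += 1
            continue
        v1 = deck[unexplored.pop()]      # first flip: explore a new card
        if v1 in seen:                   # its partner was seen earlier
            seen.remove(v1)
            matched += 1
            continue
        v2 = deck[unexplored.pop()]      # second flip: another new card
        if v2 == v1:                     # lucky immediate match
            matched += 1
        elif v2 in seen:                 # both copies of v2 now located
            seen.remove(v2)
            known_pairs.append(v2)
            seen.add(v1)
        else:
            seen.update((v1, v2))
    return moves
```

With perfect memory the move count is always between n (every move matches) and 2n (every card must be explored before its pair is collected), which gives a sanity check for simulation output.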
The role of the support vector machine in the evaluation of English teaching effect is very important, but there is a problem of inaccurate evaluation results. The traditional English teaching mode cannot achieve the accuracy and efficiency required for evaluating the effect of students' English teaching and cannot meet the requirements of English teaching effect evaluation. Therefore, this paper proposes a neural network algorithm to innovate and optimize the analysis of support vector machines. First, the relevant theories are used to construct a multi-index English teaching effect evaluation system with teachers and students as the main body, and the indicators are divided according to the data requirements of the English teaching effect evaluation indicators to reduce interfering factors in the support vector machine. Then, the neural network algorithm is used to find the optimal kernel function parameters and regularization parameters of the support vector machine, forming the support vector machine scheme, and a comprehensive analysis of the support vector machine results is carried out. MATLAB simulation shows that, under certain evaluation criteria, the evaluation accuracy of the English teaching effect obtained with the neural network algorithm and the support vector machine is optimal, with a short evaluation time.
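In standard notation (the abstract itself gives no formulas), the quantities being tuned by the neural network are the regularization parameter and the kernel parameter of an SVM. Assuming the common RBF kernel, the decision function is

```latex
f(x) = \operatorname{sign}\!\Big(\sum_{i=1}^{m} \alpha_i\, y_i\, K(x_i, x) + b\Big),
\qquad
K(x_i, x) = \exp\!\big(-\gamma \,\lVert x_i - x \rVert^2\big),
\qquad
0 \le \alpha_i \le C,
```

where $C$ is the regularization parameter bounding the dual coefficients $\alpha_i$ and $\gamma$ is the kernel function parameter; the proposed neural network searches for the $(C, \gamma)$ pair that maximizes evaluation accuracy. The RBF form is an assumption here, as the abstract does not name the kernel.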
A new feature generates customer delight by using modern computer vision techniques to drive new search paradigms through visual discovery.
Pneumonia is an acute pulmonary infection that can be caused by bacteria, viruses, or fungi. It infects the lungs, causing inflammation of the air sacs and pleural effusion, a condition in which the lung is filled with fluid. The diagnosis of pneumonia is demanding, as it requires review of chest X-rays (CXR) by specialists, laboratory tests, vital signs, and clinical history. CXR is an important pneumonia diagnostic method for evaluating the airways, pulmonary parenchyma, vessels, and chest walls, among others; it can also show changes in the lungs caused by pneumonia. This study employs transfer learning and an ensemble approach to help detect viral pneumonia in chest radiographs. The transfer learning models used were the Inception network, ResNet-50, and InceptionResNetV2. With the help of our research, we show how well the ensemble technique, which uses InceptionResNetV2 together with the Non-Local Means denoising algorithm, works. By utilizing these techniques, we significantly increased the accuracy of pneumonia classification, opening the door to better diagnostic abilities and patient care. For objective labeling, we obtained a selection of patient chest X-ray images. The models were assessed using state-of-the-art metrics such as accuracy, sensitivity, and specificity. From the statistical analysis with scikit-learn in Python, the accuracy of the ResNet-50 model was 84%, the accuracy of the Inception model was 91%, and the accuracy of the InceptionResNetV2 model was 96%.
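One common way to combine such networks is soft voting: average the class-probability vectors of the individual models and take the argmax. The abstract does not state which ensemble rule was used, so this is an illustrative sketch in pure Python:

```python
def soft_vote(prob_lists: list[list[float]]) -> int:
    """Average per-class probabilities across models and return the
    index of the winning class.

    prob_lists holds one probability vector per model, e.g. from
    ResNet-50, Inception, and InceptionResNetV2 on the same X-ray."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models
           for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)
```

Soft voting tends to beat majority voting when the models' confidence values are well calibrated, since a strongly confident model can outweigh two weakly confident ones.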
Sentiment analysis and opinion mining is a branch of computer science that has seen considerable growth over the last decade. It deals with determining the emotions, opinions, and feelings, among others, of a person on a particular topic. Social media has become an outlet for people to voice their thoughts and opinions publicly about various topics of discussion, making it a great domain in which to apply sentiment analysis and opinion mining. Sentiment analysis and opinion mining employ Natural Language Processing (NLP) to fairly obtain the mood of a person's opinion about any specific topic, or about a product in the case of an e-commerce domain. It is a process involving automatic extraction of a person's notions about a service, and it operates on a series of different expressions for a given topic based on some predefined features stored in a database of facts. In an e-commerce system, analyzing customers' opinions about products is vital for business growth and customer satisfaction. This research implements a model for sentiment analysis and opinion mining on Twitter feeds. In this paper, we address the issues of combining sentiment classification and domain-constraint analysis techniques for extracting public opinion from social media. The dataset employed in the paper was obtained from Twitter through the Tweepy API. The TextBlob library was used to analyze the tweets and determine their sentiments. The results show that most tweets had a positive subjectivity and polarity on the subject matter.
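TextBlob exposes `TextBlob(text).sentiment`, a pair of `polarity` in [-1, 1] and `subjectivity` in [0, 1]. A minimal labeling and aggregation step over such scores could look like the sketch below; the zero thresholds are a common convention, not something stated in the paper:

```python
from collections import Counter

def label_sentiment(polarity: float) -> str:
    """Map a TextBlob-style polarity score in [-1, 1] to a label.
    In practice polarity would come from TextBlob(text).sentiment.polarity."""
    if polarity > 0:
        return "positive"
    if polarity < 0:
        return "negative"
    return "neutral"

def summarize(polarities: list[float]) -> Counter:
    """Count labels over a collection of tweet polarities."""
    return Counter(label_sentiment(p) for p in polarities)
```

Reporting "more tweets were positive" then amounts to comparing the counter's `positive` entry against the others.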
Currently, the use of internet-connected applications for storage by different organizations has rapidly increased with the vast need to store data. Cybercrimes are also increasing and have affected large organizations and entire countries holding highly sensitive information, such as the United States of America, the United Kingdom, and Nigeria. Organizations generate a lot of information with the help of digitalization, and this highly classified information is now stored in databases accessed via computer networks, allowing for attacks by cybercriminals and state-sponsored agents. As a result, these organizations and countries spend more resources analyzing cybercrimes than preventing and detecting them. Network forensics plays an important role in investigating cybercrimes, because most cybercrimes are committed via computer networks. This paper proposes a new approach to analyzing digital evidence in Nigeria using a proactive method of forensics with the help of a deep learning algorithm, the Convolutional Neural Network (CNN), to proactively classify malicious packets from genuine packets and log them as they occur.
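At its core, a CNN over network packets applies 1-D convolutions to (normalized) byte sequences. The sketch below shows only that core operation in pure Python; the actual model architecture, byte encoding, and training procedure are not specified in the abstract and would be built on top of this:

```python
def conv1d(signal: list[float], kernel: list[float]) -> list[float]:
    """Valid-mode 1-D convolution (cross-correlation, as CNN layers
    compute it) over a normalized packet byte sequence."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs: list[float]) -> list[float]:
    """The usual non-linearity applied after each convolution."""
    return [max(0.0, x) for x in xs]

def normalize_bytes(packet: bytes) -> list[float]:
    """Scale raw packet bytes into [0, 1] for the network input."""
    return [b / 255.0 for b in packet]
```

Stacking several such convolution-plus-ReLU layers, followed by a classifier head, yields the malicious-vs-genuine packet decision the paper proposes to log in real time.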
Due to rapid growth in the field of science and technology, the IoT (Internet of Things) has become an emerging technology for connecting the heterogeneous technologies related to our daily needs, which can affect our lives tremendously. It allows devices to be connected to each other and controlled or monitored through handheld devices. The IoT network is a heterogeneous network that links many small, hardware-constrained devices, where conventional security architectures and techniques cannot be used. So, protecting the IoT network requires a diverse range of specialized techniques and architectures. This paper focuses on the security requirements, current state of the art, and future directions in the field of IoT.
We designed a mobile application to deal with Ischemic Heart Disease (IHD, heart attack). The Android-based mobile application has been used for coordinating clinical information taken from patients suffering from IHD. The clinical information from 787 patients was investigated and associated with risk factors such as hypertension, diabetes, dyslipidemia (abnormal cholesterol), smoking, family history, obesity, stress, and existing clinical symptoms that may suggest underlying undiagnosed IHD. The information was mined with data mining technology, and a score was produced. Results are classified into low, medium, and high risk for IHD. On comparing and classifying the patients whose information was acquired for producing the score, we found a significant association with having a cardiac event when the low and high and the medium and high classes are compared (p = 0.0001 and 0.0001, respectively). Our aim is to provide a straightforward way to recognize IHD risk and to prompt the population to get themselves assessed by a cardiologist to avoid sudden death. Currently available tools have several limitations that leave them underutilized by the population. Our research product may reduce these limitations and promote timely risk assessment.
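A risk score over the listed factors, cut into low/medium/high bands, might be sketched as follows. The real application's weights and cut-offs are not published in the abstract, so the one-point-per-factor scoring and the thresholds below are purely illustrative assumptions:

```python
# Risk factors named in the study; "clinical symptoms" would be a
# further input in the real application.
RISK_FACTORS = ("hypertension", "diabetes", "dyslipidemia", "smoking",
                "family_history", "obesity", "stress")

def risk_category(present: set) -> str:
    """Toy scoring: one point per risk factor present, then a three-way
    cut into low/medium/high. Illustrative thresholds only -- the
    study's actual data-mining-derived score is not reproduced here."""
    score = sum(1 for f in RISK_FACTORS if f in present)
    if score <= 2:
        return "low"
    if score <= 4:
        return "medium"
    return "high"
```

In the study, the band boundaries come from the mined patient data rather than fixed counts, which is what gives the reported association with cardiac events.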
Malware is one of the most dangerous security threats in today's world of fast-growing technology. Now, it is not impossible to remotely lock down a system's files for ransom, even when the system is located overseas. This threat accelerated when the world was introduced to cryptocurrency (e.g., Bitcoin), which allowed attackers to hide their tracks more efficiently. From a simple idea for testing the efficiency of a computer system to the most critical and sophisticated cyber-attacks, malware has evolved over the years and reappeared time and again. Even with the smartest technologies today, where we are trying to bring machine learning and deep learning to every field of our lives, attackers are already developing more sophisticated malware using the same machine learning and deep learning techniques. This raises questions about the security of the cyber-world and how we are able to protect it. In this work, we present an analysis of a recent and highly critical Windows malware called "LockerGoga". Both static and dynamic analyses are performed to understand the behavior and characteristics of the malware.
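Two routine first steps of static analysis, hashing a sample for identification and extracting printable strings (as the classic `strings` utility does), can be sketched in Python. This illustrates the kind of static analysis such a study performs, not the paper's specific tooling:

```python
import hashlib
import re

def sha256_of(data: bytes) -> str:
    """Hash a sample's bytes; the digest identifies it against
    known-malware databases and threat-intelligence feeds."""
    return hashlib.sha256(data).hexdigest()

def printable_strings(data: bytes, min_len: int = 4) -> list:
    """Extract runs of printable ASCII of at least min_len characters,
    a common first look at embedded paths, URLs, and ransom notes."""
    pattern = rb"[ -~]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]
```

Dynamic analysis then complements these static artifacts by running the sample in a sandbox and observing file, registry, and network activity.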
Department of Mathematics, National University of Skills (NUS), Tehran, Iran.
Police Academy, Egypt