Computer Science Applications articles list

A state-of-the-art analysis of Android malware detection methods

Smartphones are evolving constantly in today's world, and as a result security has become a major concern for mobile users. Malware is one of the most serious security risks to smartphones, and mobile malware attacks are becoming more sophisticated and widespread. Because the open-source Android platform has come to lead the market, malware authors consider it their preferred target. State-of-the-art mobile malware detection solutions in the literature use a variety of metrics and models, making cross-comparison difficult. In this paper various existing methods are compared, and a significant effort is made to briefly survey Android malware and the methods for detecting it, and to give a clear picture of the evolution of the Android platform and of the various malware detection classifiers.

Jebin Bose S

Enabling authenticity and integrity with information hiding for secure communication in Internet of Things

The Internet of Things (IoT) extends global connectivity to remote sensing devices, enabling communication with, and real-time processing of the data collected from, an enormous number of connected sensors. As IoT technology grows, so do malicious attacks, and it is essential to stop attackers or intruders from taking control of devices. Ensuring the safety and accuracy of the sensing devices is a serious task, and enabling authenticity and integrity is crucial to achieving it. This project proposes three solutions, dynamic tree chaining, geometric star chaining, and onion encryption, in order to enable authenticity and integrity with information hiding for secure communication. The simulation results show that the proposed system is very stable and outperforms other existing solutions in terms of security, space, and time.
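
The abstract above names dynamic tree chaining, geometric star chaining, and onion encryption without giving construction details, so the sketch below illustrates only the general chaining idea behind such schemes: each sensor reading carries an HMAC tag computed over the reading and the previous tag, so tampering, insertion, or reordering breaks verification. The shared key, reading format, and function names are illustrative assumptions, not the paper's actual protocol.

```python
import hmac
import hashlib

SECRET_KEY = b"example-shared-key"  # hypothetical pre-shared key between device and gateway

def chain_readings(readings, key=SECRET_KEY):
    """Attach a chained HMAC tag to each reading so tampering or reordering is detectable."""
    prev_tag = b""
    signed = []
    for reading in readings:
        tag = hmac.new(key, prev_tag + reading.encode("utf-8"), hashlib.sha256).hexdigest()
        signed.append((reading, tag))
        prev_tag = tag.encode("utf-8")
    return signed

def verify_chain(signed, key=SECRET_KEY):
    """Recompute the chain and confirm every tag; any modification breaks all later tags."""
    prev_tag = b""
    for reading, tag in signed:
        expected = hmac.new(key, prev_tag + reading.encode("utf-8"), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, tag):
            return False
        prev_tag = tag.encode("utf-8")
    return True

if __name__ == "__main__":
    data = ["temp=21.5", "temp=21.7", "temp=22.0"]
    signed = chain_readings(data)
    print(verify_chain(signed))  # True for an untampered chain
```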

Jebin Bose S

Efficient and secure data transfer in IoT

Nowadays, the Internet of Things (IoT) is used widely in our daily life, from health care devices and hospital management appliances to smart cities. Most IoT devices have limited resources and limited storage capability, so all sensed information must be transmitted to and stored in the cloud, and the data stored in the cloud has to be retrieved for analysis and decision making. Ensuring the credibility and security of the sensed information is therefore necessary and very important for the use of IoT devices, and we examine whether the proposed technique is more secure than the existing one. If security is not ensured in IoT, a variety of unwanted issues may result. This survey covers the overall safety aspects of IoT and discusses the main issues in IoT security.

Jebin Bose S

Experiences in developing a stock market application using PHP and integrating data on machine learning for data analysis

In the era of computer globalization and digitization, India is rapidly developing its education and information technology sectors. People are taught how to invest in deposits, postal investments, government bonds, gold schemes and bonds, and the private sector. The world in which we now live has been completely transformed by technology: studies indicate that there are more than 4 billion active Internet users worldwide, nearly half of the world's population, and modern technology has made our lives faster, easier to manage, and more enjoyable. This paper focuses on our experiences in developing a stock market application using PHP, React JS, Node.js, and CSS, with all stock data stored in a MySQL database. On the machine learning side, Python code is used to convert the data into CSV format for the machine learning algorithms. The investor is presented with a login screen where they must enter their user name and password. The stock dashboard shows the investor's current stock holdings as well as the current prices of online stocks, percentage changes in stocks, Sensex, Nifty, bonuses, rights, IPOs, annual reports, and so on. Statistical methods are provided as software modules for the investor, and with a single click of a button they can compare and contrast their own stocks with online stocks, as well as view the trend of the stock market, in order to decide whether to buy, hold, or sell. A data visualization component is used to compare various stocks, and at the click of a button a stock prediction is displayed indicating whether to hold, buy, or sell in the future according to the market trend. The trader must log in using their user name and password. The trader can browse clients' current market prices of all stocks, buy and sell stocks, and view contract notes, client margins, e-off-market transactions, ledgers, journals, commissions on buying and selling stocks, and so on. As future work, the stock application will be converted into a portable mobile application using Python packages such as Kivy, PyQt, or BeeWare's Toga library.
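
As a rough illustration of the data pipeline described above (stock data held in MySQL, exported to CSV for the machine learning side), the following Python sketch dumps a table to a CSV file with mysql-connector-python. The table name, column names, and credentials are placeholders, not the application's real schema.

```python
# Minimal sketch of the MySQL-to-CSV export step, assuming a hypothetical
# `stock_prices` table; column and credential names are placeholders.
import csv
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="localhost", user="stock_user", password="secret", database="stock_app"
)
cursor = conn.cursor()
cursor.execute("SELECT symbol, trade_date, open_price, close_price, volume FROM stock_prices")

with open("stock_prices.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cursor.description])  # header row from column names
    writer.writerows(cursor.fetchall())                      # one row per record

cursor.close()
conn.close()
```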

Dr. N. R. Ananthanarayanan

A systematic evaluation of research on social engineering attacks prevention

Social engineering is a method of bypassing information security that allows access to a system or network. Social engineering attacks happen when victims are unaware of the techniques, models, and frameworks that could prevent them. Current research describes user studies, constructs, assessments, concepts, frameworks, models, and techniques intended to stop social engineering attacks. Unfortunately, there is no specific prior research on mitigating social engineering attacks that analyzes the topic thoroughly and efficiently. Health campaigns, human-as-a-security-sensor frameworks, user-centric frameworks, and user vulnerability models are examples of current social engineering attack prevention techniques, models, and frameworks; for the human-as-a-security-sensor architecture in particular, guidance is needed on treating certain users as super-recognizers who could act as a kind of police force for a secure system. This research aims to critically and systematically analyze earlier material on social engineering attack prevention strategies, models, and frameworks. Based on Bryman and Bell's methodology for conducting literature reviews, we carried out a systematic review of the available research. From this review we identified, in addition to approaches, frameworks, models, and assessments, a protocol-based strategy for stopping social engineering attacks. We found that such protocols can successfully stop social engineering attacks, including health campaigns, measures addressing the susceptibility of social engineering victims, and the co-utile protocol, which can control information sharing on a social network. We present this comprehensive evaluation of the research in order to suggest safeguards against social engineering attacks.

Mahesh Donga

Models of data structures in educational visualizations for supporting teaching and learning algorithms and computer programming

Teaching and learning computer programming is challenging for many undergraduate first-year computer science students. During introductory programming courses, novice programmers need to learn some basic algorithms, gain algorithmic thinking, improve their logical and problem-solving skills, and learn data types, data structures, and the syntax of the chosen programming language. In the literature, we can find various methods of teaching programming that can motivate students and reduce students' cognitive load during the learning process of computer programming, e.g., using robotic kits, microcontrollers, microworld environments, virtual worlds, serious games, interactive animations, and visualizations. In this paper, we focus mainly on algorithm visualizations, especially on the different models of data structures that can be effectively used in educational visualizations. First, we show how a vector (one-dimensional array), a matrix (two-dimensional array), a singly linked list, and a graph can be represented by various models. Next, we also demonstrate some examples of interactive educational algorithm animations for teaching and learning elementary algorithms and some sorting algorithms, e.g., swapping two variables, summing the elements of an array, mirroring an array, finding the minimum or maximum of an array, finding the index of the minimum or maximum of an array, and sorting the elements of an array using simple exchange sort, bubblesort, insertion sort, minsort, maxsort, quicksort, or mergesort. Finally, in the last part of the paper, we summarize our experiences in teaching algorithmization and computer programming using algorithm animations and visualizations and draw some conclusions.
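
To make the kind of elementary algorithms listed above concrete, here is a short Python sketch of two of them: simple exchange (bubble) sort built on swapping two variables, and finding the index of the minimum element. It is an illustrative example, not code taken from the visualizations themselves.

```python
def bubble_sort(array):
    """Simple exchange-based sort; each pass bubbles the largest remaining element to the end."""
    n = len(array)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if array[j] > array[j + 1]:
                array[j], array[j + 1] = array[j + 1], array[j]  # swapping two variables
    return array

def find_min_index(array):
    """Index of the minimum element, another elementary algorithm mentioned above."""
    best = 0
    for i in range(1, len(array)):
        if array[i] < array[best]:
            best = i
    return best

print(bubble_sort([5, 2, 9, 1, 7]))     # [1, 2, 5, 7, 9]
print(find_min_index([5, 2, 9, 1, 7]))  # 3
```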

Ladislav Végh

Using interactive web-based animations to help students to find the optimal algorithms of river crossing puzzles

Acquiring algorithmic thinking is a long process with several steps. At the most basic level of algorithmic thinking, students recognize algorithms and the various problems that can be solved with them. At the second level, students can execute given algorithms. At the third level, students can analyze algorithms and recognize which steps are executed in sequences, conditions, or loops. At the fourth level, students can create their own algorithms. The last three levels of algorithmic thinking are implementing algorithms in a programming language, modifying and improving algorithms, and creating complex algorithms. In preliminary research related to algorithmic thinking, we investigated how first-year undergraduate computer science students at J. Selye University can solve problems associated with the second, third, and fourth levels of algorithmic thinking. We chose these levels because they do not require knowledge of any programming language. The tasks that students had to solve included, for example, determining the route of a robot when it executes given instructions, or how many times a river must be crossed to carry everyone to the other bank. Solving these types of tasks requires only good algorithmic thinking. The results showed that students reached an 81.4% average score on tasks related to the execution of given algorithms, a 72.3% average score on tasks where they needed to analyze algorithms, and a 66.2% average score on tasks where students needed to create algorithms. The latter type of tasks were mostly various river-crossing problems. Even though students reached a 66.2% average score on these tasks, if we had accepted only solutions with optimal algorithms (the minimal number of river crossings), they would have reached only a 21.3% average score, which is very low. To help students find the optimal algorithms of river-crossing puzzles, we developed several interactive web-based animations. In the last part of this paper, we describe these animations and summarize how they were created and how they can be used in education. Finally, we conclude and briefly mention our plans for future research.
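
As an illustration of how an optimal river-crossing solution can be found with a minimal number of crossings, the sketch below applies breadth-first search to the classic farmer, wolf, goat, and cabbage puzzle. The specific puzzle and the Python implementation are illustrative assumptions; the paper's tasks and animations are not reproduced here, but BFS by construction returns a shortest (minimal-crossing) solution.

```python
from collections import deque

ITEMS = ("farmer", "wolf", "goat", "cabbage")
UNSAFE = [{"wolf", "goat"}, {"goat", "cabbage"}]  # pairs that cannot be left alone together

def is_safe(left, right):
    for bank in (left, right):
        if "farmer" not in bank and any(pair <= bank for pair in UNSAFE):
            return False
    return True

def solve():
    """Breadth-first search over bank states; BFS guarantees the minimal number of crossings."""
    start = frozenset(ITEMS)          # everyone starts on the left bank
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        left, path = queue.popleft()
        if not left:                  # left bank empty: everyone has crossed
            return path
        side = left if "farmer" in left else frozenset(ITEMS) - left
        for passenger in [None] + [x for x in side if x != "farmer"]:
            moving = {"farmer"} | ({passenger} if passenger else set())
            new_left = left - moving if "farmer" in left else left | moving
            new_right = frozenset(ITEMS) - new_left
            if is_safe(new_left, new_right) and new_left not in visited:
                visited.add(new_left)
                queue.append((new_left, path + [passenger or "nothing"]))

print(solve())  # e.g. ['goat', 'nothing', 'wolf', 'goat', 'cabbage', 'nothing', 'goat']
```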

Ladislav Végh

A review on image segmentation

Along with computer technology, the demand for digital image processing is very high, and it is used massively in every sector, such as organizations, business, and medicine. Image segmentation enables us to analyze a given image in order to extract information from it. Numerous algorithms and techniques have been developed in the field of image segmentation, and segmentation has become one of the prominent tasks in machine vision. Machine vision enables a machine to perceive real-world problems as a human does and to act accordingly to solve them, so it is of utmost importance to develop techniques that can be applied to image segmentation. The invention of modern segmentation methods such as instance, semantic, and panoptic segmentation has advanced the field of machine vision. This paper focuses on the various methods of image segmentation along with their advantages and disadvantages.
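
As a concrete example of one of the classical techniques such reviews cover, the sketch below performs simple threshold-based segmentation with Otsu's method in OpenCV. The input filename is a placeholder, and the snippet is an illustration rather than a method proposed in the paper.

```python
# Minimal sketch of classical threshold-based segmentation (Otsu's method) using OpenCV;
# "input.jpg" is a placeholder filename, not an image from the paper.
import cv2

image = cv2.imread("input.jpg")                  # load a colour image from disk
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # thresholding works on intensity values
blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # light smoothing reduces noise before thresholding

# Otsu's method picks the threshold that best separates foreground from background.
threshold_value, mask = cv2.threshold(
    blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU
)
print("Otsu threshold:", threshold_value)
cv2.imwrite("segmented_mask.png", mask)          # white = foreground, black = background
```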

Manoj Kumar Pandey

Deep artificial neural network based blind color image watermarking

Digital data is growing enormously as the years pass, and therefore mechanisms are needed to protect digital content. Image watermarking is one of the important tools for providing copyright protection and proof of authorship. To achieve an ideal balance between imperceptibility and robustness, a robust blind color image watermarking scheme employing deep artificial neural networks (DANN), LWT, and the YIQ color model is presented. In the suggested watermarking method, an original 512-bit watermark is applied for testing and a randomly generated watermark of the same length is used for training. PCA is used to extract the 10 most significant statistical features out of 18, and binary classification is used to extract the watermark. For the four images Lena, Peppers, Mandrill, and Jet, the method achieves an average imperceptibility of 52.48 dB. For a threshold value of 0.3, it achieves a good balance between robustness and imperceptibility. It also demonstrates good robustness against common image attacks, except for the Gaussian noise, rotation, and average filtering attacks. The results of the experiment demonstrate that the suggested watermarking method outperforms competing methods.
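
The full DANN/LWT pipeline is not reproduced here, but the sketch below illustrates two of the building blocks the abstract names: converting an RGB image to the YIQ color model (using the standard NTSC coefficients) and reducing 18 statistical features to 10 with PCA via scikit-learn. The feature matrix is synthetic and the exact feature definitions are assumptions, not the paper's.

```python
import numpy as np
from sklearn.decomposition import PCA

def rgb_to_yiq(rgb):
    """Convert an (H, W, 3) RGB array in [0, 1] to YIQ using the standard NTSC matrix."""
    matrix = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])
    return rgb @ matrix.T

rng = np.random.default_rng(0)
example_block = rng.random((8, 8, 3))   # a synthetic 8x8 RGB block in [0, 1]
yiq_block = rgb_to_yiq(example_block)   # the Y channel is typically used for embedding

# Illustrative feature matrix: one row per image block, 18 statistical features each
# (the paper's actual feature definitions are not reproduced here).
features = rng.normal(size=(1000, 18))
pca = PCA(n_components=10)              # keep the 10 most significant components
reduced = pca.fit_transform(features)   # input to the binary classifier that extracts watermark bits
print(reduced.shape)                    # (1000, 10)
```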

Manoj Kumar Pandey

Simulations of solving a single-player memory card game with several implementations of a human-like thinking computer algorithm

The memory card game is a game that probably everyone played in childhood. The game consists of n pairs of playing cards, where the two cards of each pair are identical. At the beginning of the game, the deck of cards is shuffled and laid face down. In every move of the game, the player flips over two cards. If the cards match, the pair is removed from the game; otherwise, the cards are flipped back over. The game ends when all pairs of cards have been found. The game can be played by one, two, or more players. First, this paper shows an optimal algorithm for solving the single-player memory card game. In the algorithm, we defined four steps in which the player needs to remember previously revealed pairs of cards, which cards have already been shown, and the locations of the revealed cards; we marked the memories related to these steps as M1, M2, M3, and M4. Next, we ran simulations in which we changed the M1, M2, M3, and M4 memories from no user memory (where the player does not remember the cards or pairs of cards at all) to a perfect user memory (where the player remembers every previously shown card or pair of cards). With every memory setting, we simulated 1000 gameplays. We recorded how many cards or pairs of cards the player would need to remember and how many moves were required to finish the game. Finally, we evaluated the recorded data, illustrated the results on graphs, and drew some conclusions.
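
A minimal simulation sketch in the spirit of the experiments described above is shown below. It models only two extremes, a player with no memory and a player with perfect memory of previously revealed cards, rather than the paper's separate M1 to M4 memory components, and counts the moves needed to finish a game; all details are illustrative assumptions.

```python
import random

def play_game(pairs, perfect_memory=True):
    """Simulate one single-player game; return the number of moves (two flips per move)."""
    deck = list(range(pairs)) * 2
    random.shuffle(deck)
    removed = [False] * len(deck)
    memory = {}   # position -> card value, used only by the perfect-memory player
    moves = 0
    while not all(removed):
        moves += 1
        hidden = [i for i in range(len(deck)) if not removed[i]]
        known_pair = None
        if perfect_memory:
            by_value = {}
            for pos, val in memory.items():
                if not removed[pos]:
                    by_value.setdefault(val, []).append(pos)
            known_pair = next((p for p in by_value.values() if len(p) == 2), None)
        if known_pair:                              # flip a pair already known from memory
            first, second = known_pair
        else:
            first = random.choice(hidden)           # otherwise flip a random first card
            if perfect_memory and deck[first] in [memory[p] for p in memory if p != first and not removed[p]]:
                second = next(p for p in memory if p != first and not removed[p] and memory[p] == deck[first])
            else:
                second = random.choice([i for i in hidden if i != first])
        if perfect_memory:
            memory[first], memory[second] = deck[first], deck[second]
        if deck[first] == deck[second]:
            removed[first] = removed[second] = True
    return moves

results = [play_game(8, perfect_memory=True) for _ in range(1000)]
print(sum(results) / len(results))   # average moves over 1000 simulated gameplays
```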

Ladislav Végh

Evaluation model of English teaching effect based on neural network algorithm and support vector machine

The role of the support vector machine in evaluating English teaching effect is very important, but the evaluation results are often inaccurate. The traditional English teaching mode cannot provide accurate and efficient evaluation of the effect of students' English teaching and cannot meet the requirements of English teaching effect evaluation. Therefore, this paper proposes a neural network algorithm to innovate and optimize the analysis performed by support vector machines. First, relevant theories are used to construct a multi-index English teaching effect evaluation system with teachers and students as the main subjects, and the indicators are divided according to the data requirements of the English teaching effect evaluation indexes in order to reduce interfering factors in the support vector machine. Then, the neural network algorithm is used to find the optimal values of the kernel function parameters and regularization parameters of the support vector machine, the support vector machine scheme is formed, and the support vector machine results are analyzed comprehensively. MATLAB simulation shows that, under certain evaluation criteria, the combination of the neural network algorithm and the support vector machine achieves optimal evaluation accuracy of the English teaching effect with a short evaluation time.
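
The paper tunes the SVM's kernel and regularization parameters with a neural network algorithm in MATLAB; as a simplified stand-in, the sketch below tunes the same two quantities (C and the RBF kernel's gamma) with a plain grid search in scikit-learn on synthetic evaluation-index data. The data, indicator count, and search ranges are assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Placeholder data standing in for multi-index teaching-effect evaluations:
# 300 course evaluations, 8 indicator scores each, labelled effective (1) / not (0).
rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(300, 8))
y = (X.mean(axis=1) + rng.normal(0, 1, 300) > 5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Search over the regularization parameter C and the RBF kernel parameter gamma,
# the two quantities the paper optimizes (here via grid search rather than a neural network).
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X_train, y_train)

print("Best parameters:", search.best_params_)
print("Test accuracy:", search.best_estimator_.score(X_test, y_test))
```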

Lokesh N S

https://tech.ebayinc.com/engineering/how-ebays-new-search-feature-was-inspired-by-window-shopping/

A new feature generates customer delight by using modern computer vision techniques to drive new search paradigms through visual discovery.

Senthilkumar Gopal

A natural language processing approach to determine the polarity and subjectivity of iPhone 12 Twitter feeds using TextBlob

Sentiment analysis and opinion mining is a branch of computer science that has seen considerable growth over the last decade. It deals with determining the emotions, opinions, and feelings, among other things, of a person on a particular topic. Social media has become an outlet for people to voice their thoughts and opinions publicly about various topics of discussion, making it a great domain in which to apply sentiment analysis and opinion mining. Sentiment analysis and opinion mining employ Natural Language Processing (NLP) in order to fairly obtain the mood of a person's opinion about any specific topic, or about a product in the case of an e-commerce domain. It is a process involving the automatic extraction of features from a person's expressed notions about a service, and it operates on a series of different expressions for a given topic based on predefined features stored in a database of facts. In an e-commerce system, the process of analyzing customers' opinions about products is vital for business growth and customer satisfaction. This research implements a model for sentiment analysis and opinion mining on Twitter feeds. In this paper, we address the issue of combining sentiment classification and domain constraint analysis techniques for extracting the opinions of the public from social media. The dataset employed in this paper was obtained from Twitter through the Tweepy API. The TextBlob library was used to analyze the tweets and determine their sentiments. The results show that most tweets had a positive subjectivity and polarity on the subject matter.
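
Fetching tweets through the Tweepy API requires credentials, so the sketch below applies TextBlob directly to a few example texts standing in for tweets; TextBlob's `sentiment` property does return the polarity and subjectivity scores discussed above, while the sample texts and labels are illustrative.

```python
from textblob import TextBlob  # pip install textblob

# Example texts standing in for tweets fetched via the Tweepy API (credentials omitted).
tweets = [
    "The iPhone 12 camera is absolutely amazing, best phone I've owned.",
    "Battery life on the iPhone 12 is disappointing.",
    "Picked up an iPhone 12 today.",
]

for text in tweets:
    sentiment = TextBlob(text).sentiment
    # polarity ranges from -1 (negative) to +1 (positive);
    # subjectivity ranges from 0 (objective) to 1 (subjective)
    label = "positive" if sentiment.polarity > 0 else "negative" if sentiment.polarity < 0 else "neutral"
    print(f"{label:8}  polarity={sentiment.polarity:+.2f}  subjectivity={sentiment.subjectivity:.2f}  {text}")
```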

Dr. Chandrashekhar Uppin

A proactive approach to network forensics intrusion (denial of service flood attack) using dynamic feature selection and convolutional neural networks

The use of internet-connected applications for storage by different organizations has rapidly increased with the vast need to store data. Cybercrimes are also increasing and have affected large organizations and entire countries holding highly sensitive information, including the United States of America, the United Kingdom, and Nigeria. Organizations generate a great deal of information through digitalization, and this highly classified information is now stored in databases accessed via computer networks, which opens the door to attacks by cybercriminals and state-sponsored agents. As a result, these organizations and countries spend more resources analyzing cybercrimes than preventing and detecting them. Network forensics plays an important role in investigating cybercrimes, because most cybercrimes are committed via computer networks. This paper proposes a new approach to analyzing digital evidence in Nigeria using a proactive method of forensics with the help of a deep learning algorithm, the Convolutional Neural Network (CNN), to proactively classify malicious packets from genuine packets and log them as they occur.
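
The paper's dataset, selected features, and exact CNN architecture are not described in the abstract, so the sketch below is only a generic example of the approach: a small 1D convolutional network in Keras trained on placeholder per-flow feature vectors to score traffic as flood or genuine. Shapes, feature counts, and the architecture are assumptions.

```python
import numpy as np
import tensorflow as tf

# Placeholder flow records: 1000 samples, 40 numeric features each (e.g. packet sizes,
# inter-arrival times); labels are 1 for flood traffic, 0 for genuine traffic.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 40, 1)).astype("float32")
y = rng.integers(0, 2, size=1000)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(40, 1)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling1D(pool_size=2),
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability the flow is malicious
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, validation_split=0.2)

# Flows scoring above 0.5 would be logged as suspected denial-of-service traffic.
scores = model.predict(X[:5])
print(scores.ravel())
```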

Dr. Chandrashekhar Uppin

A comprehensive review for security analysis of IoT platforms

Due to the rapid growth in the field of science and technology, the IoT (Internet of Things) has become an emerging technology for connecting the heterogeneous technologies related to our daily needs, and it can affect our lives tremendously. It allows devices to be connected to each other and controlled or monitored through handheld devices. The IoT network is a heterogeneous network that links many small, hardware-constrained devices, and conventional security architectures and techniques cannot be used on it. Providing protection for the IoT network therefore involves a diverse range of specialized techniques and architectures. This paper focuses on the requirements of defense, the current state of the art, and future directions in the field of IoT security.

Dr. Chandrashekhar Uppin

Smartphone based ischemic heart disease (heart attack) risk prediction using clinical data and data mining approaches

We designed a mobile application to deal with Ischemic Heart Disease (IHD), commonly known as heart attack. An Android-based mobile application has been used for coordinating clinical information collected from patients suffering from IHD. The clinical information from 787 patients has been investigated and associated with risk factors such as hypertension, diabetes, dyslipidemia (abnormal cholesterol), smoking, family history, obesity, stress, and existing clinical symptoms which may suggest underlying unidentified IHD. The information was mined with data mining technology and a score was produced, and patients were classified into low, medium, and high risk for IHD. On comparing and classifying the patients whose data were used to produce the score, we found a significant association with having a cardiac event when the low and high classes, and the medium and high classes, are compared (p=0.0001 and 0.0001, respectively). Our aim is to provide a simple approach to identifying IHD risk and to encourage the population to get themselves evaluated by a cardiologist in order to avoid sudden death. Currently available tools have several limitations which make them underutilized by the population; our research product may reduce these limitations and promote timely risk assessment.

Dr. Chandrashekhar Uppin

Dynamic analysis of a Windows-based malware using automated sandboxing

Malware is one of the most dangerous security threats in today's world of fast-growing technology. It is now possible to remotely lock down a system's files for ransom even when the attacker is located overseas. This threat accelerated when the world was introduced to cryptocurrency (e.g., Bitcoin), which allowed attackers to hide their tracks more efficiently. From a simple idea for testing the efficiency of a computer system to the most critical and sophisticated cyber-attacks, malware has evolved over the years and reappeared from time to time. Even with the smartest technologies of today, where we are trying to apply machine learning and deep learning to every field of our lives, attackers are already developing more sophisticated malware using the same machine learning and deep learning techniques. This raises questions about the security of the cyber-world and how we are able to protect it. In this work, we present an analysis of a recent and highly critical Windows malware called "LockerGoga". Both static and dynamic analyses are performed to understand the behavior and characteristics of the malware.
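
The automated sandbox configuration itself is not reproduced here; as an illustration of the static side of such an analysis, the sketch below hashes a sample and extracts printable strings, two steps commonly performed before or alongside a sandbox run. The file path is a placeholder, not an actual LockerGoga sample.

```python
import hashlib
import re

SAMPLE = "sample.exe"  # placeholder path to the suspected malware binary

def file_hashes(path):
    """MD5 and SHA-256 digests, commonly used to look a sample up in threat databases."""
    md5, sha256 = hashlib.md5(), hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            md5.update(chunk)
            sha256.update(chunk)
    return md5.hexdigest(), sha256.hexdigest()

def printable_strings(path, min_len=6):
    """Extract ASCII strings, often revealing ransom-note text, URLs, or mutex names."""
    with open(path, "rb") as f:
        data = f.read()
    return re.findall(rb"[ -~]{%d,}" % min_len, data)

md5_hex, sha256_hex = file_hashes(SAMPLE)
print("MD5:   ", md5_hex)
print("SHA256:", sha256_hex)
for s in printable_strings(SAMPLE)[:20]:
    print(s.decode("ascii"))
```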

Dr. Chandrashekhar Uppin

Emerging technological tools and services for building a world-class paperless library information & management system [LIMS]

Library automation is not just a book inventory in which books are held, issued, and received using technological tools and services. Our applied research found that most library administrative functionalities, such as acquisition and accessioning, automatic indexing and classification, automatic cataloguing of book and non-book materials, inventory with real-time OPAC facilities, and many more library science concepts, are still missing in university and college libraries across the country. In this modern era the concept of the eLibrary is popular because the availability and accessibility of digitized content shared through IT/ICT infrastructure is huge. During our research we found that library automation was largely ignored, with attention focused only on discussing and establishing eLibraries. As we all know, an eLibrary is not a substitute for a physical library; in fact, the eLibrary is part of a physical library, sharing authenticated digitized content through IT/ICT infrastructure. After a decade of applied research in the area of library science, we eventually recorded many findings based on our surveys and discussions with senior researchers and librarians. Our serious and consistent effort succeeded in designing comprehensively effective and efficient operational strategies to build a world-class library automation and paperless library management system for university and college libraries. This paper emphasizes the comprehensive real-time architecture and the operational modules and their effectiveness in achieving user satisfaction (a flow of functionalities matching the exact needs of the library management system). It also deals with how emerging technological tools and services are effectively integrated when designing new strategies in the area of library science, including various automation processes and security concepts (using barcodes/RFID). Eventually, our dream of using emerging technological tools and services to build a world-class paperless Library Information & Management System [LIMS] came true: the software product is presently deployed and in use at more than 300 satisfied client locations in India, and it is popularly named "eLib" by AarGees Business Solution, Hubli, India. Our research is still ongoing, with further development aimed at building a "Global Knowledge Sharing Centre".

Dr. Chandrashekhar Uppin

Comparing machine learning classification models on a loan approval prediction dataset

In the last decade, we have observed the usage of artificial intelligence algorithms and machine learning models in industry, education, healthcare, entertainment, and several other areas. In this paper, we focus on using machine learning algorithms in the loan approval process of financial institutions. First, we briefly review some prior research papers that dealt with loan approval predictions using machine learning models. Next, we analyze the loan approval prediction dataset we downloaded from Kaggle, which was used in this paper to compare several machine learning classification models. During this analysis, we observed that credit scores and loan terms are the attributes that probably most affect the result. Next, we divided the dataset into a training set (80%) and a test set (20%). We trained 27 various machine learning models in MATLAB. Three models were optimized with Bayesian optimization to find the best hyperparameters with minimum error. We used 5-fold cross-validation for the validations to prevent overfitting during the training. In the following step, we used the test set on trained models to measure the models' accuracy on unseen data. The result showed that the best accuracy both on validation and test data, more than 98%, was reached with neural networks and ensemble classification models.
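
The paper trains its 27 models in MATLAB; the sketch below illustrates a comparable workflow in Python with scikit-learn (80/20 split, 5-fold cross-validation, an ensemble model and a neural network, then test accuracy) purely as an illustration. The CSV filename and the `loan_status` column name are assumptions about the Kaggle dataset, not verified fields.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder filename and target column for the Kaggle loan approval dataset.
df = pd.read_csv("loan_approval_dataset.csv")
X = pd.get_dummies(df.drop(columns=["loan_status"]))   # one-hot encode categorical attributes
y = df["loan_status"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y     # 80% training, 20% test
)

models = {
    "random forest (ensemble)": RandomForestClassifier(n_estimators=200, random_state=0),
    "neural network (MLP)": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0)),
}

for name, model in models.items():
    cv_scores = cross_val_score(model, X_train, y_train, cv=5)   # 5-fold cross-validation
    model.fit(X_train, y_train)
    print(f"{name}: CV accuracy {cv_scores.mean():.3f}, test accuracy {model.score(X_test, y_test):.3f}")
```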

Ladislav Végh

Technology and isolation in the information period

Technology and Isolation in the Information Period explores the social, political, and legal implications of the collection and use of personal information in computer databases. In the Information Period, our lives are documented in digital records maintained by hundreds (perhaps thousands) of businesses and government agencies. These records are composed of bits of our personal information, which when assembled together begin to paint a portrait of our personalities. Technology has changed our working practices and now allows us to be connected 24/7. We have the power to Skype clients around the world and email or tweet work colleagues at weekends, but is there a danger that having connectivity so readily available hinders our efforts to gain a better work/life balance? We often read articles about the correct way to manage technology and how to achieve a happy balance between relaxing away from work and being 'always on' and 'always available'. For instance, many people choose a job that allows them to detach from the workplace in the evenings and at weekends, yet technology makes it difficult for others to switch off. We are huge believers in the ability to use the internet, email, Facebook, Twitter, and texting, and to pick up the phone and talk to people, but because of technology more people are becoming distracted and losing focus. They cannot escape from the workplace and feel that the office follows them around via their smartphone, which demands attention 24/7. It wasn't so long ago that when we took a holiday, we would plan ahead, make sure everything was in order, inform clients of our absence, and brief our teams so we could disappear off to distant shores and happily sit in the sun for a relaxing fortnight, avoiding drinking the local water, eating strange local delicacies, and fending off mosquitoes. But I digress. Having technology at hand means that we still have an element of control over our workplace and the ability to deal with issues if they occur. The downside to this is managing that work/life balance again: on one hand the internet is a lifeline, but it can also become a ball and chain if we don't take the time to unplug from it. Technology allows us to be in two places at once, but when employees struggle to find the right balance between their work and personal lives there is a chance that stress levels will increase, leading to a potential loss of productivity and of happiness in general. In a recent survey, 70% of workers said that technology brings the stress of work into their personal lives. Many researchers have recommended drawing a line between work communications and home. This not only benefits your well-being, but benefits your employer too, as you will come back to work refreshed and recharged.

Sunita Singh
