Wise Apply on a Machine Learning-Based College Recommendation Data System

Jyoti P. Kanjalkar, Gaurav N. Patil, Gaurav R. Patil, Yash Parande, Bhavesh Dilip Patil, Pramod Kanjalkar
Copyright: © 2024 | Pages: 12
DOI: 10.4018/979-8-3693-0049-7.ch018

Abstract

This chapter presents a college recommendation system that uses machine learning on the features branch, caste, location, and fees. The system aims to provide personalized recommendations to students based on their preferences and past academic performance. The dataset used in the study consists of information about various colleges, including their location, fees, available branches, and the percentage of students belonging to different castes. The system uses a combination of machine learning algorithms, including decision trees and random forests, to provide accurate and efficient recommendations. The AdaBoost algorithm is used to find colleges whose features match the student's preferences, while decision trees and random forests are used to make predictions based on past data. The proposed system is evaluated using metrics such as accuracy, precision, recall, and F1 score. The results show that the system provides highly accurate and personalized recommendations to students.
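As a rough illustration of the pipeline the abstract describes, the sketch below trains random forest and AdaBoost classifiers on a tiny, made-up table with branch, caste, location, and fees columns and reports accuracy, precision, recall, and F1. The data, column names, and the "admitted" target are hypothetical placeholders; this is a minimal scikit-learn sketch, not the chapter's actual implementation.

```python
# A minimal sketch (not the chapter's implementation) of a recommendation-style
# classifier trained on the features named in the abstract. All data, column
# names, and the "admitted" label are hypothetical placeholders.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical records combining branch, caste, location, and fees with a
# binary outcome to predict.
df = pd.DataFrame({
    "branch":   ["CS", "Mech", "CS", "Civil", "CS", "Mech"],
    "caste":    ["Open", "OBC", "SC", "Open", "OBC", "Open"],
    "location": ["Pune", "Mumbai", "Pune", "Nagpur", "Mumbai", "Pune"],
    "fees":     [120000, 80000, 110000, 60000, 95000, 125000],
    "admitted": [1, 0, 1, 0, 1, 0],
})
X, y = df.drop(columns="admitted"), df["admitted"]

# One-hot encode the categorical columns; pass fees through unchanged.
pre = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), ["branch", "caste", "location"])],
    remainder="passthrough",
)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

for name, clf in [("RandomForest", RandomForestClassifier(random_state=0)),
                  ("AdaBoost", AdaBoostClassifier(random_state=0))]:
    model = Pipeline([("pre", pre), ("clf", clf)]).fit(X_train, y_train)
    pred = model.predict(X_test)
    print(name,
          "accuracy:", accuracy_score(y_test, pred),
          "precision:", precision_score(y_test, pred, zero_division=0),
          "recall:", recall_score(y_test, pred, zero_division=0),
          "F1:", f1_score(y_test, pred, zero_division=0))
```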

1. Introduction

Machine learning, or ML, is a subfield of AI concerned with teaching computers to learn new skills without being explicitly programmed to do so (Girase et al., 2017). The goal of machine learning is to give computers the ability to learn from experience and steadily improve their results (Jain et al., 2018). It's useful for solving difficult problems, making accurate predictions, and spotting trends (Singh et al., 2020). Machine learning is a rapidly evolving field with applications in various industries, from healthcare and finance to marketing and cybersecurity (Sharma et al., 2019). As technology advances, machine learning continues to play a crucial role in automating tasks, extracting insights from data, and enabling intelligent decision-making (Tian et al., 2019).

In supervised learning, an algorithm is trained on a labeled dataset in which the output label for each input data point is known beforehand (Abdullahi et al., 2023). The purpose of supervised learning is to train an algorithm to accurately predict or classify data it has never seen before by exposing it to example inputs paired with their expected outputs (Angeline et al., 2023). Supervised learning is a foundational concept in machine learning and is widely used in various fields due to its ability to make predictions from labeled data (Anand et al., 2023). It forms the basis for many practical applications and continues to be an active area of research and development (Rajasekaran et al., 2023).
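As a concrete illustration of supervised learning, the short sketch below fits a decision tree to labeled examples and then classifies inputs it has never seen. The feature values and labels are invented for illustration and are not taken from the chapter's dataset.

```python
# A minimal supervised-learning sketch (not from the chapter): a decision tree
# is fit on labeled examples and then classifies inputs it has never seen.
from sklearn.tree import DecisionTreeClassifier

# Labeled training data: each input (percentage, entrance-exam score) has a known
# output label (1 = eligible, 0 = not eligible). Values are illustrative only.
X_train = [[85, 92], [40, 35], [78, 80], [55, 48], [90, 95], [30, 42]]
y_train = [1, 0, 1, 0, 1, 0]

clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# The trained model predicts labels for unseen inputs.
print(clf.predict([[88, 90], [45, 40]]))   # expected: [1 0]
```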

Unsupervised learning is a type of machine learning in which the algorithm is given unlabeled data and must find patterns, relationships, or structures within the data on its own (Rajest et al., 2023a). Unlike supervised learning, there are no predefined output labels to guide the learning process. The goal of unsupervised learning is often to explore the hidden structure within the data or to group similar data points together (Rajest et al., 2023b). Unsupervised learning is a powerful approach for exploring and understanding the underlying structure of data (Regin et al., 2023). It is particularly useful when the data lacks labeled examples or when the objective is to uncover hidden patterns or relationships (Sivapriya et al., 2023).
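For contrast, the sketch below shows unsupervised learning with k-means: the algorithm receives only unlabeled points and discovers a grouping on its own. The data are invented for illustration, and k-means is simply one common clustering choice, not necessarily a method used in the chapter.

```python
# A minimal unsupervised-learning sketch (not from the chapter): k-means groups
# unlabeled points into clusters purely from their structure; no output labels
# are provided during training.
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: two informal groups of points in 2-D (illustrative values).
X = np.array([[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],
              [8.0, 8.2], [7.9, 8.1], [8.3, 7.9]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)           # cluster assignment discovered for each point
print(kmeans.cluster_centers_)  # centroids of the two discovered groups
```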

In the machine learning paradigm known as reinforcement learning (RL), an agent learns to make decisions by interacting with its environment (Sohlot et al., 2023). The agent learns the best course of action through repeated feedback in the form of rewards and penalties. Human and animal learning processes serve as inspiration for reinforcement learning. Key components and concepts of reinforcement learning include the following (a minimal worked sketch appears after the list):

Agent: The entity that makes decisions and takes actions in an environment. The goal of the agent is to maximize its cumulative reward over time.

Environment: The external system with which the agent interacts. The agent's actions cause a change in the environment, which in turn affects the agent.

State: A representation of the current situation or configuration of the environment. The state provides the context for the agent’s decision-making process.

Action: The set of possible moves or decisions that the agent can take in a given state. Actions influence the state of the environment.

Reward: A numerical signal that the environment provides to the agent as feedback after it takes an action in a certain state. The reward indicates the immediate benefit or cost associated with the action.

Policy: The strategy or mapping from states to actions that the agent follows to make decisions. The objective is to learn a policy that maximizes cumulative future reward.

Value Function: An estimate of how much reward the agent stands to gain over time if it begins in a given state and follows a certain policy. It helps the agent weigh the relative attractiveness of potential future situations.

Exploration and Exploitation: Balancing exploration (trying new actions to discover their effects) and exploitation (choosing actions that are known to yield high rewards) is a crucial challenge in reinforcement learning.
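The minimal tabular Q-learning sketch below ties these components together on a toy five-state environment: the agent, the environment's step function, states, actions, a reward at the rightmost state, an epsilon-greedy policy balancing exploration and exploitation, and the learned value estimates Q(s, a). It is an illustrative toy example, not part of the chapter's system.

```python
# A minimal tabular Q-learning sketch on a toy five-state line environment.
# State 4 (the right end) is terminal and yields a reward of +1; all names and
# values here are illustrative and unrelated to the chapter's system.
import random

random.seed(0)
N_STATES = 5             # states 0..4; the agent starts at state 0
ACTIONS = [-1, +1]       # action 0 = move left, action 1 = move right
EPSILON = 0.1            # exploration rate for the epsilon-greedy policy
ALPHA, GAMMA = 0.5, 0.9  # learning rate and discount factor

# Value estimates Q(s, a) for every state-action pair, initialised to zero.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action_idx):
    """Environment: apply the action and return (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + ACTIONS[action_idx]))
    if nxt == N_STATES - 1:
        return nxt, 1.0, True   # reaching the right end ends the episode
    return nxt, 0.0, False

for episode in range(200):
    state = 0
    for _ in range(100):        # cap episode length for safety
        # Epsilon-greedy policy: explore with probability EPSILON (or on ties),
        # otherwise exploit the action with the higher current value estimate.
        if random.random() < EPSILON or Q[state][0] == Q[state][1]:
            a = random.randrange(len(ACTIONS))
        else:
            a = 0 if Q[state][0] > Q[state][1] else 1
        nxt, reward, done = step(state, a)
        # Q-learning update: move Q(s, a) toward reward + GAMMA * max_a' Q(s', a').
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][a])
        state = nxt
        if done:
            break

# The greedy action in every non-terminal state should now be "right".
print([("left" if q[0] > q[1] else "right") for q in Q[:-1]])
```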
