Post by account_disabled on Sept 10, 2023 4:26:53 GMT -7
Machine learning and deep learning are widely known terms, but for all their fame they are also widely misunderstood. Here, we take a step back to cover the basics of machine learning and deep learning, survey some of the most common machine learning algorithms, and explain how those algorithms relate to the other pieces of the puzzle involved in building predictive models from historical data.
What is a machine learning algorithm?
Machine learning is a method of automatically creating models from data, and machine learning algorithms are the engines that drive it: algorithms that turn a data set into a model. Which kind of algorithm works best (supervised or unsupervised, classification or regression) depends on the type of problem being solved, the computing resources available, and the nature of the data.
How machine learning works
A typical programming algorithm tells the computer what to do in a straightforward way. For example, a sorting algorithm converts unsorted data into data that is sorted according to some criterion, such as the numeric or alphabetical order of one or more fields in the data.
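As a minimal sketch of that contrast, here is a conventionally programmed sort in Python; the records and the "last_name" field are made up for illustration. Every step follows a fixed rule written by the programmer, and nothing is learned from the data:

```python
# A conventional algorithm: the rule is fully specified in advance.
records = [
    {"last_name": "Lovelace", "first_name": "Ada"},
    {"last_name": "Babbage", "first_name": "Charles"},
    {"last_name": "Hopper", "first_name": "Grace"},
]

# sorted() applies a fixed criterion (alphabetical order of one field);
# no model is built from the data.
by_name = sorted(records, key=lambda r: r["last_name"])
print([r["last_name"] for r in by_name])  # ['Babbage', 'Hopper', 'Lovelace']
```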
Linear regression algorithms fit a straight line, or another function that is linear in its parameters (such as a polynomial), to numerical data. A commonly used method is to perform a matrix inversion that minimizes the squared error between the line and the data. Squared error is used as the measure because it does not matter whether the regression line falls above or below a data point; only the distance between the line and the point matters.
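A rough sketch of that idea, on synthetic data that is an assumption for illustration: the least-squares coefficients come from the normal equations, w = (XᵀX)⁻¹Xᵀy, which is the matrix-inversion step described above.

```python
import numpy as np

# Synthetic data: y is roughly 2x + 1 plus noise (chosen only for illustration).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=50)

# Design matrix with a column of ones for the intercept term.
X = np.column_stack([np.ones_like(x), x])

# Normal equations: w = (X^T X)^(-1) X^T y minimizes the squared error.
# np.linalg.solve is used instead of an explicit inverse for numerical stability.
w = np.linalg.solve(X.T @ X, X.T @ y)
print("intercept, slope:", w)  # close to (1.0, 2.0)
```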
Nonlinear regression algorithms, which fit a curve that is not linear in its parameters to the data, are somewhat more complex, because unlike linear regression problems they cannot be solved with a deterministic method. Instead, nonlinear regression algorithms implement some kind of iterative minimization process, often a variation of the steepest descent method.
Steepest descent basically computes the squared error and its gradient at the current parameter values, chooses a step size (the learning rate), follows the gradient direction "downhill", and then recomputes the squared error and its gradient at the new parameter values. With luck, the process eventually converges. Several variants of steepest descent try to improve its convergence properties.
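Here is a minimal sketch of plain gradient descent fitting a curve that is nonlinear in its parameters. The model f(x) = a·exp(b·x), the synthetic data, the starting values, and the learning rate are all assumptions chosen for illustration, not a prescription:

```python
import numpy as np

# Toy nonlinear model: f(x) = a * exp(b * x); a and b are the parameters to learn.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(1.5 * x) + rng.normal(scale=0.1, size=x.size)

a, b = 1.0, 1.0          # initial guess
learning_rate = 0.005    # step size along the negative gradient
n = x.size

for _ in range(50_000):
    pred = a * np.exp(b * x)
    residual = pred - y
    # Gradient of the mean squared error with respect to a and b.
    grad_a = (2.0 / n) * np.sum(residual * np.exp(b * x))
    grad_b = (2.0 / n) * np.sum(residual * a * x * np.exp(b * x))
    # Step "downhill", then repeat at the new parameter values.
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b

print(f"a = {a:.3f}, b = {b:.3f}")  # should land near (2.0, 1.5)
```

The learning rate and iteration count here were picked by hand for this toy problem; the variants mentioned above (momentum, adaptive step sizes, and so on) exist precisely to avoid that kind of manual tuning.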
Machine learning algorithms are more complex than nonlinear regression, partly because machine learning is not limited to fitting specific mathematical functions such as polynomials. The two main categories of problems machine learning solves are regression and classification. Regression is used for numerical targets and classification for non-numeric ones: the former answers questions such as "What is the expected income of a person with a given address and occupation?", while the latter answers questions such as "Will this loan applicant default on the loan?". Forecasting problems ("What will the opening price of Microsoft stock be tomorrow?") are a subset of regression problems for time series data. Classification problems can be further divided into binary (yes/no) and multi-category problems.
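To make the two categories concrete, here is an illustrative sketch using scikit-learn; the synthetic data sets and the particular models (linear and logistic regression) are assumptions for the example, not the only choices:

```python
# Toy regression and classification fits (illustrative only).
from sklearn.datasets import make_regression, make_classification
from sklearn.linear_model import LinearRegression, LogisticRegression

# Regression: predict a numeric target (e.g. expected income).
X_reg, y_reg = make_regression(n_samples=200, n_features=3, noise=5.0, random_state=0)
reg_model = LinearRegression().fit(X_reg, y_reg)
print("predicted value:", reg_model.predict(X_reg[:1]))

# Binary classification: predict a yes/no label (e.g. will the applicant default?).
X_clf, y_clf = make_classification(n_samples=200, n_features=4, random_state=0)
clf_model = LogisticRegression().fit(X_clf, y_clf)
print("predicted class:", clf_model.predict(X_clf[:1]))
```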