As artificial intelligence and machine learning make their way into almost every aspect of life, it’s essential to understand how they work and where they’re commonly applied. Although machine learning has been around for a while, many still portray it as an enemy. Machine learning can be your friend, but only if you learn to “tame” it.


Regression stands out as one of the most popular machine learning techniques. It serves as a bridge connecting the past to the present and future: it picks up on different “events” from the past and breaks them apart to analyze them. Based on this analysis, regression draws conclusions about the future and helps many plan their next move.


The weather forecast is a basic example. With the regression technique, it’s possible to travel back in time to view average temperatures, humidity, and other variables relevant to the outcome. Then, you “return” to the present and tailor predictions about future weather.


There are different types of regression, and each has unique applications, advantages, and drawbacks. This article will analyze these types.


Linear Regression


Linear regression is one of the most common machine learning techniques. This simple algorithm gets its name from what it does: it models the relationship between independent and dependent variables as a straight line. Based on that fitted line, linear regression makes predictions about the future.


There are two distinguishable types of linear regression:


  • Simple linear regression – There’s only one input variable.
  • Multiple linear regression – There are several input variables.

Linear regression has proven useful in various spheres. Its most popular applications are:


  • Predicting salaries
  • Analyzing trends
  • Forecasting traffic ETAs
  • Predicting real estate prices
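
To see what this looks like in practice, here’s a minimal sketch of simple linear regression using scikit-learn. The experience-vs-salary numbers are synthetic, invented purely for illustration:

```python
# A minimal sketch of simple linear regression with scikit-learn.
# The experience-vs-salary data below is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1], [2], [3], [5], [8], [10]])  # years of experience (one input variable)
y = np.array([40_000, 45_000, 52_000, 61_000, 75_000, 88_000])  # salary

model = LinearRegression()
model.fit(X, y)

print(model.coef_, model.intercept_)  # slope and intercept of the fitted line
print(model.predict([[6]]))           # predicted salary for 6 years of experience
```

With several input columns in X instead of one, the exact same code performs multiple linear regression.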

Polynomial Regression


At its core, polynomial regression functions just like linear regression, with one crucial difference: polynomial regression can capture non-linear relationships in the data.


When there’s a non-linear relationship between variables, you can’t do much with linear regression. In such cases, you send polynomial regression to the rescue. You do this by adding polynomial features to linear regression. Then, you analyze these features using a linear model to get relevant results.


Here’s a real-life example in action. Polynomial regression can analyze the spread rate of infectious diseases, including COVID-19.
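
Here’s a minimal sketch of the idea with scikit-learn: polynomial features are generated first, then fed into an ordinary linear model. The noisy quadratic data and the choice of degree=2 are illustrative assumptions:

```python
# A minimal sketch of polynomial regression: generate polynomial
# features, then fit an ordinary linear model on them.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = np.linspace(0, 10, 30).reshape(-1, 1)
y = 0.5 * X.ravel() ** 2 - 2 * X.ravel() + rng.normal(0, 1, 30)  # noisy quadratic

# degree=2 adds an x^2 column; the model that sees it is still linear.
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, y)
print(model.predict([[4.0]]))
```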


Ridge Regression


Ridge regression is a type of linear regression. What’s the difference between the two? You use ridge regression when there’s high collinearity between independent variables. In such cases, you deliberately add a small amount of bias to the model, trading a little accuracy on the training data for more reliable long-term predictions.


This type of regression is also called L2 regularization because it penalizes the squared (L2) norm of the coefficients, which keeps the model less complex. As such, ridge regression is suitable for solving problems with more parameters than samples. Due to its characteristics, this regression has an honorary spot in medicine. It’s used to analyze patients’ clinical measures and the presence of specific antigens. Based on the results, the regression establishes trends.
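
Below is a minimal sketch of ridge regression in scikit-learn on synthetic data with more parameters than samples; the penalty strength alpha=1.0 is an arbitrary illustrative choice:

```python
# A minimal sketch of ridge regression (L2 regularization) on synthetic
# data with more parameters (40) than samples (20).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 40))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=20)

model = Ridge(alpha=1.0)  # larger alpha = stronger shrinkage (more bias)
model.fit(X, y)
print(model.coef_[:5])    # coefficients are shrunk, but rarely exactly zero
```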


LASSO Regression


No, LASSO regression doesn’t have anything to do with cowboys and catching cattle (although that would be interesting). LASSO is actually an acronym for Least Absolute Shrinkage and Selection Operator.


Like ridge regression, this one also belongs to the regularization techniques. What does it regulate? It reduces a model’s complexity by shrinking the coefficients of irrelevant parameters all the way to zero, effectively eliminating them from the model and often producing better results.


Ridge regression is usually the better choice for a model with numerous true (non-zero) coefficients; when only a few of them matter, use LASSO. Their applications are therefore similar; the real difference lies in the number of relevant coefficients.
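
Here’s a minimal sketch of LASSO in scikit-learn on synthetic data where only two features truly matter; alpha=0.1 is an arbitrary choice for illustration:

```python
# A minimal sketch of LASSO (L1 regularization). Only two of the ten
# synthetic features truly influence the target.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=50)

model = Lasso(alpha=0.1)
model.fit(X, y)
print(model.coef_)  # irrelevant features are driven to exactly zero
```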



Elastic Net Regression


Ridge regression is good for analyzing problems involving more parameters than samples. However, it’s not perfect: this regression type doesn’t promise to eliminate irrelevant coefficients from the equation, which can affect the reliability of the results.


On the other hand, LASSO regression eliminates irrelevant parameters, but for high-dimensional data it sometimes keeps far too few of them: it can select at most as many variables as there are samples.


As you can see, both regressions are flawed in some way. Elastic net regression combines the best characteristics of these two techniques. The first phase finds the ridge coefficients, while the second phase applies a LASSO-like shrinkage to those coefficients to get the best results.
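
A minimal sketch with scikit-learn’s ElasticNet, which exposes this mix directly: l1_ratio balances the LASSO-like (L1) and ridge-like (L2) penalties. The data and both parameter values are illustrative assumptions:

```python
# A minimal sketch of elastic net, which mixes L1 and L2 penalties.
# alpha and l1_ratio are arbitrary; l1_ratio=0.5 weights them equally.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 50))  # high-dimensional: 50 features, 30 samples
y = 2 * X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.1, size=30)

model = ElasticNet(alpha=0.1, l1_ratio=0.5)
model.fit(X, y)
print((model.coef_ != 0).sum(), "non-zero coefficients out of 50")
```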


Support Vector Regression


The support vector machine (SVM) is a supervised learning algorithm with two important uses:


  • Regression
  • Classification problems

Let’s try to draw a mental picture of how SVM works. Suppose you have two classes of items (let’s call them red circles and green triangles). Red circles are on the left, while green triangles are on the right. You can separate these two classes by drawing a line between them.


Things get a bit more complicated if you have red circles in the middle and green triangles wrapped around them. In that case, you can’t draw a line to separate the classes. But you can add new dimensions to the mix and create a circle (rectangle, square, or a different shape encompassing just the red circles).


This is what SVM does. It creates a hyperplane that separates the classes and assigns data points to a class depending on which side of it they fall.


There are a few parameters you need to understand to grasp the reach of SVM fully:


  • Kernel – When you can’t find a hyperplane in a dimension, you move to a higher dimension, which is often challenging to navigate. A kernel is like a navigator that helps you find the hyperplane without sending computational costs soaring.
  • Hyperplane – This is what separates two classes in SVM.
  • Decision boundary – Think of this as a line that helps you “decide” the placement of positive and negative examples.

Support vector regression takes a similar approach. It also creates a hyperplane, but instead of using it to classify data points, it tries to find a hyperplane whose surrounding margin of tolerance contains the maximum number of data points. At the same time, support vector regression tries to lower the risk of prediction errors.
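
Here’s a minimal sketch of support vector regression with scikit-learn; the RBF kernel, C, and epsilon values, as well as the noisy synthetic data, are arbitrary illustrative choices:

```python
# A minimal sketch of support vector regression with an RBF kernel.
# epsilon sets the width of the tolerance tube around the hyperplane;
# C trades off flatness against errors.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = np.linspace(0, 5, 40).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0, 0.1, 40)  # noisy non-linear target

model = SVR(kernel="rbf", C=1.0, epsilon=0.1)
model.fit(X, y)
print(model.predict([[2.5]]))
```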


SVM has various applications. It can be used in finance, bioinformatics, engineering, HR, healthcare, image processing, and other branches.


Decision Tree Regression


This type of supervised learning algorithm can solve both regression and classification issues and work with categorical and numerical datasets.


As its name indicates, decision tree regression deconstructs problems by creating a tree-like structure. In this tree, every node is a test for an attribute, every branch is the result of a test, and every leaf is the final result (decision).


The starting point (the root) of every regression tree is the parent node. This node splits into two child nodes (data subsets), which are then further divided, thus becoming “parents” to their “children,” and so on.


You can compare a decision tree to a regular tree. If you take care of it and prune the unnecessary branches (those with irrelevant features), you’ll grow a healthy tree (a tree with concise and relevant results).
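
To ground the analogy, here’s a minimal sketch with scikit-learn, where capping max_depth plays the role of pruning; the step-shaped synthetic data is invented for illustration:

```python
# A minimal sketch of decision tree regression. Capping max_depth acts
# like pruning, stopping branches that would only fit noise.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
X = rng.uniform(0, 10, size=(100, 1))
y = np.where(X.ravel() < 5, 10, 20) + rng.normal(0, 1, 100)  # step-shaped target

model = DecisionTreeRegressor(max_depth=3)
model.fit(X, y)
print(model.predict([[3.0], [7.0]]))  # one prediction per side of the step
```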


Due to its versatility and digestibility, decision tree regression can be used in various fields, from finance and healthcare to marketing and education. It offers a unique approach to decision-making by breaking down complex datasets into easy-to-grasp categories.


Random Forest Regression


Random forest regression is essentially decision tree regression but on a much bigger scale. In this case, you have multiple decision trees, each predicting a certain output. Random forest regression analyzes the outputs of every decision tree to come up with the final result.


Keep in mind that the decision trees used in random forest regression are completely independent; there’s no interaction between them until their outputs are analyzed.


Random forest regression is an ensemble learning technique, meaning it combines the results (predictions) of several machine learning algorithms to create one final prediction.
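
Here’s a minimal sketch with scikit-learn’s RandomForestRegressor, which builds the independent trees and averages their outputs; n_estimators=100 and the synthetic data are illustrative assumptions:

```python
# A minimal sketch of random forest regression: many independent trees
# whose predictions are averaged into one final result.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
X = rng.uniform(0, 10, size=(200, 2))
y = X[:, 0] * np.sin(X[:, 1]) + rng.normal(0, 0.2, 200)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X, y)
print(model.predict([[5.0, 1.5]]))
```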


Like decision tree regression, this one can be used in numerous industries.



The Importance of Regression in Machine Learning Is Immeasurable


Regression in machine learning is like a high-tech detective: it travels back in time, identifies valuable clues, and analyzes them thoroughly. Then, it uses the results to predict outcomes with high accuracy and precision. As such, regression has found its way into virtually every niche.


You can use it in sales to analyze customer behavior and anticipate future interests. You can also apply it in finance, whether to discover price trends or analyze the stock market. Regression is also used in education, the tech industry, weather forecasting, and many other spheres.


Every regression technique can be valuable, but only if you know how to use it to your advantage. Think of your scenario (the variables you want to analyze) and find the best actor (regression technique) that can breathe new life into it.

Related posts

Sage: The ethics of AI: how to ensure your firm is fair and transparent
OPIT - Open Institute of Technology
Mar 7, 2025

By Chris Torney

Artificial intelligence (AI) and machine learning have the potential to offer significant benefits and opportunities to businesses, from greater efficiency and productivity to transformational insights into customer behaviour and business performance. But it is vital that firms take into account a number of ethical considerations when incorporating this technology into their business operations. 

The adoption of AI is still in its infancy and, in many countries, there are few clear rules governing how companies should utilise the technology. However, experts say that firms of all sizes, from small and medium-sized businesses (SMBs) to international corporations, need to ensure their implementation of AI-based solutions is as fair and transparent as possible. Failure to do so can harm relationships with customers and employees, and risks causing serious reputational damage as well as loss of trust.

What are the main ethical considerations around AI?

According to Pierluigi Casale, professor in AI at the Open Institute of Technology, the adoption of AI brings serious ethical considerations that have the potential to affect employees, customers and suppliers. “Fairness, transparency, privacy, accountability, and workforce impact are at the core of these challenges,” Casale explains. “Bias remains one of AI’s biggest risks: models trained on historical data can reinforce discrimination, and this can influence hiring, lending and decision-making.”

Part of the problem, he adds, is that many AI systems operate as ‘black boxes’, which makes their decision-making process hard to understand or interpret. “Without clear explanations, customers may struggle to trust AI-driven services; for example, employees may feel unfairly assessed when AI is used for performance reviews.”

Casale points out that data privacy is another major concern. “AI relies on vast datasets, increasing the risk of breaches or misuse,” he says. “All companies operating in Europe must comply with regulations such as GDPR and the AI Act, ensuring responsible data handling to protect customers and employees.”

A third significant ethical consideration is the potential impact of AI and automation on current workforces. Businesses may need to think about their responsibilities in terms of employees who are displaced by technology, for example by introducing training programmes that will help them make the transition into new roles.

Olivia Gambelin, an AI ethicist and the founder of advisory network Ethical Intelligence, says the AI-related ethical considerations are likely to be specific to each business and the way it plans to use the technology. “It really does depend on the context,” she explains. “You’re not going to find a magical checklist of five things to consider on Google: you actually have to do the work, to understand what you are building.”

This means business leaders need to work out how their organisation’s use of AI is going to impact the people – the customers and employees – that come into contact with it, Gambelin says. “Being an AI-enabled company means nothing if your employees are unhappy and fearful of their jobs, and being an AI-enabled service provider means nothing if it’s not actually connecting with your customers.”

Reuters: EFG Watch: DeepSeek poses deep questions about how AI will develop
OPIT - Open Institute of Technology
Feb 10, 2025

Source: Reuters, published on February 10th, 2025.

By Mike Scott

Summary

  • DeepSeek challenges assumptions about AI market and raises new ESG and investment risks
  • Efficiency gains significant – similar results being achieved with less computing power
  • Disruption fuels doubts over Big Tech’s long-term AI leadership and market valuations
  • China’s lean AI model also casts doubt on costly U.S.-backed Stargate project
  • Analysts see DeepSeek as a counter to U.S. tariffs, intensifying geopolitical tensions

February 10 – The launch by Chinese company DeepSeek of its R1 reasoning model last month caused chaos in U.S. markets. At the same time, it shone a spotlight on a host of new risks and challenged market assumptions about how AI will develop.

The shock has since been overshadowed by President Trump’s tariff wars, but DeepSeek is set to have lasting and significant implications, observers say. It is also a timely reminder of why companies and investors need to consider ESG risks, and other factors such as geopolitics, in their investment strategies.

“The DeepSeek saga is a fascinating inflection point in AI’s trajectory, raising ESG questions that extend beyond energy and market concentration,” Peter Huang, co-founder of Openware AI, said in an emailed response to questions.

DeepSeek put the cat among the pigeons by announcing that it had developed its model for around $6 million, a thousandth of the cost of some other AI models, while also using far fewer chips and much less energy.

Camden Woollven, group head of AI product marketing at IT governance and compliance group GRC International, said in an email that “smaller companies and developers who couldn’t compete before can now get in the game …. It’s like we’re seeing a democratisation of AI development. And the efficiency gains are significant as they’re achieving similar results with much less computing power, which has huge implications for both costs and environmental impact.”

The impact on AI stocks and companies associated with the sector was severe. Chipmaker Nvidia lost almost $600 billion in market capitalisation after the DeepSeek announcement on fears that demand for its chips would be lower, but there was also a 20-30% drop in some energy stocks, said Stephen Deadman, UK associate partner at consultancy Sia.

As Reuters reported, power producers were among the biggest winners in the S&P 500 last year, buoyed by expectations of ballooning demand from data centres to scale artificial intelligence technologies, yet they saw the biggest-ever one-day drops after the DeepSeek announcement.

One reason for the massive sell-off was the timing – no-one was expecting such a breakthrough, nor for it to come from China. But DeepSeek also upended the prevailing narrative of how AI would develop, and who the winners would be.

Tom Vazdar, professor of cybersecurity and AI at Open Institute of Technology (OPIT), pointed out in an email that it called into question the premise behind the Stargate Project, a $500 billion joint venture by OpenAI, SoftBank and Oracle to build AI infrastructure in the U.S., which was announced with great fanfare by Donald Trump just days before DeepSeek’s announcement.

“Stargate has been premised on the notion that breakthroughs in AI require massive compute and expensive, proprietary infrastructure,” Vazdar said in an email.

There are also dangers in markets being dominated by such a small group of tech companies. As Abbie Llewellyn-Waters, investment manager at Jupiter Asset Management, pointed out in a research note, the “Magnificent Seven” tech stocks had accounted for nearly 60% of the index’s gains over the previous two years. The group of mega-caps comprised more than a third of the S&P 500’s total value in December 2024.
