In April 1999, a $433 million Air Force rocket malfunctioned shortly after liftoff, causing the permanent loss of the $800 million military communications satellite it carried. This $1.2 billion disaster remains one of the costliest accidents in the history of spaceflight.


You might wonder whether investigators ever found out what caused this failure. They sure did! The culprit was a software bug: a misplaced decimal point in a constant in the rocket’s flight software sent its upper stage tumbling out of control.


This accident alone is a testament to the importance of software testing.


Although you can probably deduce the definition of software testing, let’s review it together.


So, what is software testing?


Software testing refers to running a software program before releasing it to determine whether it behaves as expected and is free of defects.


While testing itself isn’t free, its cost is small compared to the losses a software failure can cause. And that’s just one of the benefits of this process. Others include improved performance, the prevention of human and equipment losses, and increased stakeholder confidence.


Now that you understand why software testing is such a big deal, let’s inspect this process in more detail.


Software Testing Fundamentals


We’ll start with the basics – what are the fundamentals of testing in software engineering? In other words, what exactly is its end goal, and which principles underlie it?


Software testing has three distinct objectives, each answering crucial questions about the software.


  • Verification and validation. Does the software meet all the necessary requirements? And does it satisfy the end customer?
  • Defect and error identification. Does the software have any defects or errors? What are their scope and impact? Have they caused related issues?
  • Software quality assurance. Is the software performing at optimal levels? Can the software engineering process be further optimized?

As for the principles of software testing, there are seven, and they go as follows:


  1. Testing shows the presence of defects. With everything we’ve written about software testing, this sounds like a given. But this principle emphasizes that testing can only confirm the presence of defects. It can’t confirm their absence. So, even if no flaws are found, it doesn’t mean the system has none.
  2. Exhaustive testing is impossible. Given how vital software testing is, this process should ideally cover every possible scenario to confirm the program is defect-free beyond a shadow of a doubt. Unfortunately, this is impossible to achieve in practice. There’s simply not enough time or money to conduct such testing. Instead, test analysts base the scope of testing on risk assessment. In other words, they primarily test the elements that are most likely to fail.
  3. Testing should start as early as possible. Catching defects in the early stages of software development makes all the difference for the final product. It also saves lots of money in the process. For this reason, software testing should start from the moment its requirements are defined.
  4. Most defects are within a small number of modules. This principle, known as defect clustering, follows the Pareto principle or the 80/20 rule. The rule states that approximately 80% of issues can be found in 20% of modules.
  5. Repetitive software testing is useless. Known as the Pesticide Paradox, this principle warns that running the same tests over and over to discover new defects is a losing endeavor. Just as insects become resistant to a repeatedly used pesticide, the tested software becomes “immune” to the same tests.
  6. Testing is context-dependent. The same set of tests can rarely be used on two separate software programs. You’ll need to switch testing techniques, methodologies, and approaches based on the program’s application.
  7. The software program isn’t necessarily usable, even without defects. This principle is known as the absence of errors fallacy. Just because a system is error-free doesn’t mean it meets the customer’s business needs. In software testing objectives, software validation is as important as verification.

Types of Software Testing


There are dozens (if not hundreds) of types of testing in software engineering. Of course, not all of them apply to all systems. Choosing the right types of testing boils down to your project’s nature and scope.


All of these testing types can be broadly classified into three categories.


Functional Testing


Functional software testing types examine the system to ensure it performs in accordance with the predetermined functional requirements. We’ll explain each of these types using e-commerce as an example, with a short code sketch after the list.


  • Unit Testing – Checking whether each software unit (the smallest system component that can be tested) performs as expected. (Does the “Add to Cart” button work?)
  • Integration Testing – Ensuring that all software components interact correctly within the system. (Is the product catalog seamlessly integrated with the shopping cart?)
  • System Testing – Verifying that a system produces the desired output. (Can you complete a purchase?)
  • Acceptance Testing – Ensuring that the entire system meets the end users’ needs. (Is all the information accurate and easy to access?)
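
To make the first item concrete, here’s a minimal unit test in the pytest style. The tiny Cart class and its API are invented purely for illustration; a real cart would be more involved, but the shape of the test would be the same.

```python
import pytest

# A tiny in-memory cart, invented for this example only.
class Cart:
    def __init__(self):
        self.items = {}

    def add(self, product_id, quantity=1):
        if quantity < 1:
            raise ValueError("quantity must be at least 1")
        self.items[product_id] = self.items.get(product_id, 0) + quantity

# Unit test: does "Add to Cart" work for the smallest testable unit?
def test_add_to_cart_stores_the_item():
    cart = Cart()
    cart.add("sku-123")
    assert cart.items == {"sku-123": 1}

# Unit tests also pin down error behavior, not just the happy path.
def test_add_to_cart_rejects_zero_quantity():
    with pytest.raises(ValueError):
        Cart().add("sku-123", quantity=0)
```

Running pytest against this file executes both tests and reports any failures.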

Non-Functional Testing


Non-functional types of testing in software engineering deal with the general characteristics of a system beyond its functionality. Let’s go through the most common non-functional tests, continuing the e-commerce analogy; a short code sketch follows the list.


  • Performance Testing – Evaluating how a system performs under a specific workload. (Can the e-commerce shop handle a massive spike in traffic without crashing?)
  • Usability Testing – Checking the customer’s ability to use the system effectively. (How quickly can you check out?)
  • Security Testing – Identifying the system’s security vulnerabilities. (Will sensitive credit card information be stored securely?)
  • Compatibility Testing – Verifying if the system can run on different platforms and devices. (Can you complete a purchase using your mobile phone?)
  • Localization Testing – Checking the system’s behavior in different locations and regions. (Will time-sensitive discounts take time zones into account?)
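
Of these, performance testing is the easiest to sketch in code. Real load tests are usually run with dedicated tools such as JMeter or Locust; the toy check below only illustrates the core idea of a measurable, pass/fail performance budget, and the search function is invented for the example.

```python
import time

# Stand-in for a real catalog search, invented for illustration.
def search_catalog(query):
    catalog = ["shoes", "shirt", "scarf", "socks"]
    return [p for p in catalog if query in p]

# Toy performance test: a burst of searches must fit a latency budget.
def test_search_meets_latency_budget():
    start = time.perf_counter()
    for _ in range(1_000):  # simulate a spike of 1,000 requests
        search_catalog("sh")
    elapsed = time.perf_counter() - start
    assert elapsed < 0.5  # budget: the whole burst in under half a second
```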

Maintenance Testing


Maintenance testing takes place after the system has been released. It checks whether (and how) the changes made to fix issues or add new features have affected the system. A short code sketch follows the list.


  • Regression Testing – Checking whether the changes have affected the system’s functionality. (Does the e-commerce shop work seamlessly after integrating a new payment gateway?)
  • Smoke Testing – Verifying the system’s basic functionality before conducting more extensive (and expensive!) tests. (Can the new product be added to the cart?)
  • Sanity Testing – Determining whether the new functionality operates as expected. (Does the new search filter select products adequately?)
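
Here’s a minimal smoke-test sketch: a few fast checks on core functionality that run before the slower full suite. The tiny Shop class is an invented stand-in for a real application.

```python
# A tiny stand-in for the real application, invented for this sketch.
class Shop:
    def __init__(self):
        self.catalog = {"sku-123": 9.99}
        self.cart = []

    def add_to_cart(self, sku):
        if sku not in self.catalog:
            raise KeyError(sku)
        self.cart.append(sku)

# Smoke test 1: is the catalog even there?
def test_smoke_catalog_is_not_empty():
    assert Shop().catalog

# Smoke test 2: can a product reach the cart at all?
def test_smoke_product_reaches_cart():
    shop = Shop()
    shop.add_to_cart("sku-123")
    assert shop.cart == ["sku-123"]
```

If either of these fails, there’s no point in running the full regression suite yet; that gatekeeping role is exactly what smoke tests are for.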

Levels of Software Testing


Software testing isn’t done all at once. There are levels to it. Four, to be exact. Each level contains different types of tests, grouped by their position in the software development process.


Let’s go through each of the four levels of testing in software engineering.


Level 1: Unit Testing


Unit testing helps developers determine whether individual system components (or units) work properly. Since it takes place at the lowest level, this testing sets the tone for the rest of the software development process.


This testing plays a crucial role in test-driven development (TDD). In this methodology, developers write the test cases first and only then write the code needed to make them pass, as the sketch below illustrates.
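
A minimal TDD sketch, assuming a hypothetical discount rule (10% off orders above $100): the tests are written first and fail because apply_discount doesn’t exist yet; the implementation is then added with just enough code to turn them green.

```python
# Step 1 (red): these tests are written before any production code exists,
# so the first pytest run fails with a NameError.
def test_ten_percent_discount_above_100():
    assert apply_discount(200.0) == 180.0

def test_no_discount_at_or_below_100():
    assert apply_discount(100.0) == 100.0

# Step 2 (green): the minimal implementation that makes both tests pass.
def apply_discount(price: float) -> float:
    return price * 0.9 if price > 100 else price
```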


Level 2: Integration Testing


Integration testing focuses on the software’s inner workings, checking how different units and components interact. After all, you can’t test the system as a whole if it isn’t coherent from the start.


During this phase, testers typically take one of two approaches to integration testing: top-down (starting with the highest-level units and substituting stubs for the lower-level ones) or bottom-up (integrating the lowest-level units first and driving them from simple test harnesses).
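
Here’s a sketch of the top-down flavor, assuming an invented Checkout component whose real payment gateway isn’t integrated yet, so a stub stands in for it.

```python
# Stub standing in for the not-yet-integrated payment service.
class PaymentGatewayStub:
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

# High-level component under test; it only needs something that can charge().
class Checkout:
    def __init__(self, gateway):
        self.gateway = gateway

    def pay(self, amount):
        return self.gateway.charge(amount)["status"] == "approved"

# Integration test: does Checkout interact correctly with its dependency?
def test_checkout_approves_payment_via_gateway():
    checkout = Checkout(gateway=PaymentGatewayStub())
    assert checkout.pay(49.99) is True
```

In a bottom-up run, the real gateway module would be tested first, and the stub would later be swapped out for the genuine component.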


Level 3: System Testing


After integration testing, the system can now be evaluated as a whole. And that’s exactly what system testing does.


System testing methods are usually classified as white-box or black-box testing. The primary difference is whether the testers are familiar with the system’s internal code structure: in white-box testing they are, while in black-box testing they work only from inputs, outputs, and requirements.
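
The contrast is easy to show in code. Assuming a made-up shipping rule (free shipping at $50 and above, otherwise a flat $5 fee), the first test below is black-box: it is derived purely from the stated requirement. The second is white-box: it is written with the source in view and deliberately probes the boundary of the >= comparison.

```python
# The system under test: an invented shipping-fee rule.
def shipping_fee(order_total: float) -> float:
    return 0.0 if order_total >= 50 else 5.0

# Black-box: based only on the requirement, not the implementation.
def test_free_shipping_from_50_dollars():
    assert shipping_fee(50.0) == 0.0
    assert shipping_fee(10.0) == 5.0

# White-box: targets the branch boundary visible in the source code.
def test_fee_just_below_the_threshold():
    assert shipping_fee(49.99) == 5.0
```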


Level 4: Acceptance Testing


Acceptance testing determines whether the system delivers on its promises. Two groups are usually tasked with it: quality assurance experts (alpha testing, before the software launches) and a limited number of real users (beta testing, in a real-world environment).



Software Testing Process


Although some variations might exist, the software testing process typically follows the same pattern.


Step 1: Planning the Test


This step entails developing the following:


  • Test strategy for outlining testing approaches
  • Test plan for detailing testing objectives, priorities, and processes
  • Test estimation for calculating the time and resources needed to complete the testing process

Step 2: Designing the Test


In the design phase, testers create the following (sketched in code after the list):


  • Test scenarios (hypothetical situations used to test the system)
  • Test cases (instructions on how the system should be tested)
  • Test data (set of values used to test the system)
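
The pytest sketch below shows how a single test case can be driven by a small table of test data. The email-validation rule is deliberately simplified and invented for the example.

```python
import pytest

# Simplified validator, invented for this design-phase example.
def is_valid_email(address: str) -> bool:
    return "@" in address and "." in address.split("@")[-1]

# One test case (the instructions), three rows of test data (the values).
@pytest.mark.parametrize(
    "address, expected",
    [
        ("user@example.com", True),   # happy-path scenario
        ("user@example", False),      # missing top-level domain
        ("userexample.com", False),   # missing @ sign
    ],
)
def test_email_validation(address, expected):
    assert is_valid_email(address) == expected
```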

Step 3: Executing the Test


Test execution refers to performing (and monitoring) the planned and designed tests. This phase begins with setting up the test environment and ends with writing detailed reports on the findings.
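
Setting up and tearing down the test environment is usually automated. Here’s a minimal pytest sketch, assuming a simple in-memory dictionary as a stand-in for a real database or service:

```python
import pytest

# Fixture: builds a clean environment before each test, cleans up after.
@pytest.fixture
def test_db():
    db = {"orders": []}  # set up the environment
    yield db             # hand it to the test
    db.clear()           # tear it down afterwards

# Each test receives a fresh, isolated environment via the fixture.
def test_order_is_recorded(test_db):
    test_db["orders"].append({"id": 1, "total": 25.0})
    assert len(test_db["orders"]) == 1
```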


Step 4: Closing the Test


After completing the testing, testers generate relevant metrics and create a summary report on their efforts. At this point, they have enough information to determine whether the tested software is ready to be released.


High-Quality Testing for High-Quality Software


Think of different types of software testing as individual pieces of a puzzle that come together to form a beautiful picture. Performing software testing hierarchically (from Level 1 to Level 4) ensures no stone is left unturned, and the tested software won’t let anyone down.


With this in mind, it’s easy to conclude that you should only attempt a software development project if you’re prepared to implement effective software testing practices from the start.
