

The larger your database, the higher the possibility of data repetition and inaccuracies that compromise the results you pull from the database. Normalization in DBMS exists to counteract those problems by helping you to create more uniform databases in which redundancies are less likely to occur.
Mastering normalization is a key skill in DBMS for the simple fact that an error-strewn database is of no use to an organization. For example, a retailer whose database holds multiple conflicting entries for phone numbers and email addresses can’t reach its customers as effectively as one with a single, reliable route to each customer. Let’s look at normalization in DBMS and how it helps you to create a more organized database.
The Concept of Normalization
Grab a pack of playing cards and throw them onto the floor. Now, pick up the “Jack of Hearts.” It’s a tough task because the cards are strewn all over the place. Some are facing down and there’s no rhyme, reason, or pattern to how the cards lie, meaning you’re going to have to check every card individually to find the one you want.
That little experiment shows you how critical organization is, even with a small set of “data.” It also highlights the importance of normalization in DBMS. Through normalization, you implement organizational controls using a set of principles designed to achieve the following:
- Eliminate redundancy – Lower (or eliminate) occurrences of data repeating across different tables, or inside individual tables, in your DBMS.
- Minimize data anomalies – Proper organization prevents insertion, update, and deletion anomalies, where adding, changing, or removing data in one place leaves it missing or inconsistent elsewhere.
- Improve data integrity – More accurate data comes from normalization controls. Database users can feel more confident in their results because they know that the controls ensure integrity.
The Process of Normalization
If normalization in DBMS is all about organization, it stands to reason that there would be a set process to follow when normalizing your tables and database:
- Decompose your tables – Break every table down into its various parts, which may lead to you creating several tables out of one. Through decomposition, you separate different datasets, eliminate inconsistencies, and set the stage for creating relationships and dependencies between tables.
- Identify functional dependencies – One attribute in a table may be determined by another. For example, in a retailer’s “Customer” table, the “Customer Name” field is functionally dependent on the “Customer ID” field because each ID corresponds to exactly one customer. Identifying these dependencies ensures you don’t end up with orphaned records (such as a record with a “Customer ID” and no customer attached to it) and shows you where a table needs to be split (see the sketch after this list).
- Apply normalization rules – Once you’ve broken down your table and identified the functional dependencies, you apply the relevant normalization rules. You’ll use Normal Forms to do this, with the six highlighted below each having its own rules, structures, and use cases.
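To make the idea of a functional dependency concrete, here’s a minimal Python sketch (the table, column names, and sample data are hypothetical, not drawn from a real DBMS) that brute-force checks whether one column determines another in a handful of rows:

```python
rows = [
    {"customer_id": 1, "customer_name": "Ada", "city": "Rome"},
    {"customer_id": 2, "customer_name": "Bob", "city": "Milan"},
    {"customer_id": 3, "customer_name": "Ada", "city": "Turin"},
]

def determines(rows, lhs, rhs):
    """Return True if every value of `lhs` maps to exactly one value of `rhs`."""
    seen = {}
    for row in rows:
        key, value = row[lhs], row[rhs]
        if key in seen and seen[key] != value:
            return False  # same lhs value points at two different rhs values
        seen[key] = value
    return True

print(determines(rows, "customer_id", "customer_name"))  # True: each ID has one name
print(determines(rows, "customer_name", "city"))          # False: "Ada" maps to two cities
```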
Normal Forms in DBMS
There isn’t a “single” way to achieve normalization in DBMS because every database (and the tables it contains) is different. Instead, there are six normal forms you may use, with each having its own rules that you need to understand to figure out which to apply.
First Normal Form (1NF)
A relation is in 1NF if none of its attributes holds multiple values. In other words, each attribute in the table can only contain a single (called “atomic”) value.
Example
If a retailer wants to store the details of its customers, it may have attributes in its table like “Customer Name,” “Phone Number,” and “Email Address.” By applying 1NF to this table, you ensure that the attributes that could contain multiple entries (“Phone Number” and “Email Address”) hold only one value per row – for example, by giving each phone number its own row – making contacting that customer much simpler.
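Here’s a rough Python sketch of that idea, using made-up data: the multi-valued “phones” field is split so every row holds a single, atomic phone number, which is one common way to bring a table into 1NF:

```python
unnormalized = [
    {"customer": "Ada Lovelace", "phones": ["555-0101", "555-0102"]},
    {"customer": "Bob Smith", "phones": ["555-0201"]},
]

# 1NF version: one row per (customer, phone) pair, so no field holds a list.
first_normal_form = [
    {"customer": row["customer"], "phone": phone}
    for row in unnormalized
    for phone in row["phones"]
]

for row in first_normal_form:
    print(row)
```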
Second Normal Form (2NF)
A table that’s in 2NF is in 1NF, with the additional condition that none of its non-prime attributes depends on only part of a candidate key (in other words, there are no partial dependencies).
Example
Let’s say an employer wants to create a table that contains information about an employee, the skills they have, and their age. An employee may have multiple skills, leading to multiple records for the same employee in the table, with each denoting a skill while the ID number and age of the employee repeat for each record.
In this table, you’ve achieved 1NF because each attribute has an atomic value. However, the employee’s age depends only on the employee ID, which is just part of the composite key (Employee ID, Skill) – a partial dependency. To achieve 2NF, you’d break this table down into two tables. The first will contain the employee’s ID number and age, with that ID number linking to a second table that lists each of the skills associated with the employee.
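The decomposition might look like the following Python sketch (the data and field names are illustrative): the age is stored once per employee, while the skills live in their own table keyed by the full (employee ID, skill) pair:

```python
employee_skills = [
    {"employee_id": 101, "age": 34, "skill": "SQL"},
    {"employee_id": 101, "age": 34, "skill": "Python"},
    {"employee_id": 102, "age": 29, "skill": "SQL"},
]

# Table 1: one row per employee (employee_id -> age), so age is stored once.
employees = {row["employee_id"]: row["age"] for row in employee_skills}

# Table 2: one row per (employee_id, skill) pair.
skills = [(row["employee_id"], row["skill"]) for row in employee_skills]

print(employees)  # {101: 34, 102: 29}
print(skills)     # [(101, 'SQL'), (101, 'Python'), (102, 'SQL')]
```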
Third Normal Form (3NF)
In 3NF, the table you have must already be in 2NF form, with the added rule that no non-prime attribute may be transitively dependent on a candidate key. A transitive functional dependency occurs when a dependency follows from a chain of two other functional dependencies. For example, C is transitively dependent on A if A determines B, B determines C, but B doesn’t determine A.
Example
Let’s say a school creates a “Students” table with the following attributes:
- Student ID
- Name
- Zip Code
- State
- City
- District
In this case, the “State,” “District,” and “City” attributes all depend on the “Zip Code” attribute. That “Zip Code” attribute in turn depends on the “Student ID” attribute, making “State,” “District,” and “City” all transitively dependent on “Student ID.”
To resolve this problem, you’d create a pair of tables – “Student” and “Student Zip.” The “Student” table contains the “Student ID,” “Name,” and “Zip Code” attributes, with that “Zip Code” attribute being the primary key of a “Student Zip” table that contains the rest of the attributes and links to the “Student” table.
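A rough Python sketch of that split, with made-up sample data, might look like this – location details are stored once per zip code instead of once per student:

```python
students_raw = [
    {"student_id": 1, "name": "Ada", "zip": "10001", "state": "NY", "city": "New York", "district": "Manhattan"},
    {"student_id": 2, "name": "Bob", "zip": "10001", "state": "NY", "city": "New York", "district": "Manhattan"},
]

# "Student" table: student_id -> (name, zip_code).
student = {r["student_id"]: (r["name"], r["zip"]) for r in students_raw}

# "Student Zip" table: zip_code -> (state, city, district), stored once per zip.
student_zip = {r["zip"]: (r["state"], r["city"], r["district"]) for r in students_raw}

print(student)      # {1: ('Ada', '10001'), 2: ('Bob', '10001')}
print(student_zip)  # {'10001': ('NY', 'New York', 'Manhattan')}
```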
Boyce-Codd Normal Form (BCNF)
Often referred to as 3.5NF, BCNF is a stricter version of 3NF. A table satisfies this normalization in DBMS rule if it’s in 3NF and, for every functional dependency between two fields (i.e., A -> B), A is a super key of the table.
Example
Sticking with the school example, every student in a school has multiple classes. The school has a table with the following fields:
- Student ID
- Nationality
- Class
- Class Type
- Number of Students in Class
You have several functional dependencies here:
- Student ID -> Nationality
- Class -> Number of Students in Class, Class Type
As a result, both “Student ID” and “Class” are determinants, but neither is a super key on its own – the table’s only candidate key is the combination of the two – so BCNF is violated. To achieve BCNF normalization, you’d break the above table into three – “Student Nationality,” “Student Class,” and “Class Mapping” – allowing “Student ID” and “Class” to serve as keys in their own tables.
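As a rough Python sketch (with illustrative data, and one reasonable reading of which columns each named table holds), the decomposition could look like this:

```python
enrollment_raw = [
    {"student_id": 1, "nationality": "Italian", "class": "Biology", "class_type": "Lab", "class_size": 30},
    {"student_id": 1, "nationality": "Italian", "class": "Chemistry", "class_type": "Lecture", "class_size": 45},
    {"student_id": 2, "nationality": "French", "class": "Biology", "class_type": "Lab", "class_size": 30},
]

# "Student Nationality": student_id -> nationality (Student ID is the key).
student_nationality = {r["student_id"]: r["nationality"] for r in enrollment_raw}

# "Class Mapping": class -> (class type, number of students) (Class is the key).
class_mapping = {r["class"]: (r["class_type"], r["class_size"]) for r in enrollment_raw}

# "Student Class": which student takes which class.
student_class = [(r["student_id"], r["class"]) for r in enrollment_raw]

print(student_nationality)  # {1: 'Italian', 2: 'French'}
print(class_mapping)        # {'Biology': ('Lab', 30), 'Chemistry': ('Lecture', 45)}
print(student_class)        # [(1, 'Biology'), (1, 'Chemistry'), (2, 'Biology')]
```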
Fourth Normal Form (4NF)
In 4NF, the database must meet the requirements of BCNF and contain no non-trivial multivalued dependencies. It’s mostly discussed in academic circles, as there’s little use for 4NF elsewhere.
Example
Let’s say a college has a table containing the following fields:
- College Course
- Lecturer
- Recommended Book
Each of these attributes is independent of the others, meaning each can change without affecting the others. For example, the college could change the lecturer of a course without altering the recommended reading or the course’s name. A single course can therefore be linked to multiple lecturers and multiple recommended books, independently of one another, creating multivalued dependencies. If a table contains more than one of these types of dependencies, it’s a candidate for 4NF normalization.
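A quick Python sketch with made-up data shows why the split helps: a single combined table has to pair every lecturer with every book, while the 4NF version keeps the two independent facts apart:

```python
from itertools import product

lecturers = ["Dr. Rossi", "Dr. Chen"]
books = ["Book A", "Book B"]

# Unnormalized: the course must pair every lecturer with every book (a cross
# product), so 2 lecturers and 2 books already produce 4 rows.
combined = [("Databases", lec, book) for lec, book in product(lecturers, books)]
print(len(combined))  # 4

# 4NF: one table per independent multivalued fact about the course.
course_lecturer = [("Databases", lec) for lec in lecturers]   # Course ->> Lecturer
course_book = [("Databases", book) for book in books]         # Course ->> Recommended Book
print(course_lecturer)
print(course_book)
```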
Fifth Normal Form (5NF)
If your table is in 4NF, has no join dependencies, and all joining is lossless, it’s in 5NF. Think of this as the final form when it comes to normalization in DBMS, as you’ve broken your table down so much that you’ve made redundancy impossible.
Example
A college may have a table that tells them which lecturers teach certain subjects during which semesters, creating the following attributes:
- Subject
- Lecturer Name
- Semester
Let’s say one of the lecturers teaches both “Physics” and “Math” for “Semester 1,” but doesn’t teach “Math” for Semester 2. That means you need to combine all of the fields in this table to get an accurate dataset, leading to redundancy. Add a third semester to the mix, especially if that semester has no defined courses or lecturers, and you end up with join dependencies.
The 5NF solution is to break this table down into three tables (sketched in code after this list):
- Table 1 – Contains the “Semester” and “Subject” attributes to show which subjects are taught in each semester.
- Table 2 – Contains the “Subject” and “Lecturer Name” attributes to show which lecturers teach a subject.
- Table 3 – Contains the “Semester” and “Lecturer Name” attributes so you can see which lecturers teach during which semesters.
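Here’s a rough Python sketch of that split, using sample data consistent with the example above; rejoining the three projections reproduces the original rows exactly, which is the lossless-join property 5NF relies on:

```python
original = {
    ("Physics", "Dr. Rossi", "Semester 1"),
    ("Math", "Dr. Rossi", "Semester 1"),
    ("Math", "Dr. Chen", "Semester 2"),
}

# The three 5NF projections described in the list above.
semester_subject = {(sem, sub) for sub, _, sem in original}    # Table 1
subject_lecturer = {(sub, lec) for sub, lec, _ in original}    # Table 2
semester_lecturer = {(sem, lec) for _, lec, sem in original}   # Table 3

# Rejoin: keep a (subject, lecturer, semester) triple only when all three
# pairwise tables agree on it.
rejoined = {
    (sub, lec, sem)
    for sem, sub in semester_subject
    for s2, lec in subject_lecturer
    if s2 == sub and (sem, lec) in semester_lecturer
}

print(rejoined == original)  # True: the three-way join loses nothing
```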
Benefits of Normalization in DBMS
Normalization in DBMS is a lot of work, so it helps to know the benefits that make the effort worthwhile:
- Improved database efficiency
- Better data consistency
- Easier database maintenance
- Simpler query processing
- Better access controls, resulting in superior security
Limitations and Trade-Offs of Normalization
Normalization in DBMS does have some drawbacks, though these are trade-offs that you accept for the above benefits:
- Normalized data is spread across more tables, so queries need more joins, which places greater demands on system performance as your database grows.
- Breaking tables down makes the schema more complex to design and understand.
- You have to find a balance between normalization and denormalization to ensure your tables make sense.
Practical Tips for Mastering Normalization Techniques
Getting normalization in DBMS right is hard, especially when you start feeling like you’re dividing tables into so many small tables that you’re losing track of the database. These tips help you apply normalization correctly:
- Understand the database requirements – Your database exists for you to extract data from it, so knowing what you’ll need to extract indicates whether you need to normalize tables or not.
- Document all functional dependencies – Every functional dependency that exists in your database makes the table in which it exists a candidate for normalization. Identify each dependency and document it so you know whether you need to break the table down.
- Use software and tools – You’re not alone when poring through your database. There are plenty of tools available that help you to identify functional dependencies. Many make normalization suggestions, with some even being able to carry out those suggestions for you.
- Review and refine – Every database evolves alongside its users, so continued refining is needed to identify new functional dependencies (and opportunities for normalization).
- Collaborate with other professionals – A different set of eyes on a database may reveal dependencies and normalization opportunities that you don’t see.
Make Normalization Your New Norm
Normalization may seem needlessly complex, but it serves the crucial role of making the data you extract from your database more refined, accurate, and free of repetition. Mastering normalization in DBMS puts you in the perfect position to create the complex databases many organizations need in a Big Data world. Experiment with the different “normal forms” described in this article as each application of the techniques (even for simple tables) helps you get to grips with normalization.
Related posts

From personalization to productivity: AI at the heart of the educational experience.
At its core, teaching is a simple endeavour. The experienced and learned pass on their knowledge and wisdom to new generations. Nothing has changed in that regard. What has changed is how new technologies emerge to facilitate that passing on of knowledge. The printing press, computers, the internet – all have transformed how educators teach and how students learn.
Artificial intelligence (AI) is the next game-changer in the educational space.
Specifically, AI agents have emerged as tools that utilize all of AI’s core strengths, such as data gathering and analysis, pattern identification, and information condensing. Those strengths have been refined, first into simple chatbots capable of providing answers, and now into agents capable of adapting how they learn and adjusting to the environment in which they’re placed. This adaptability, in particular, makes AI agents vital in the educational realm.
The reasons why are simple. AI agents can collect, analyse, and condense massive amounts of educational material across multiple subject areas. More importantly, they can deliver that information to students while observing how the students engage with the material presented. Those observations open the door for tweaks. An AI agent learns alongside its student. Only, the agent’s learning focuses on how it can adapt its delivery to account for a student’s strengths, weaknesses, interests, and existing knowledge.
Think of an AI agent as a personal tutor – one who eschews set lesson plans in favour of an adaptive approach designed and tweaked constantly for each specific student.
In this eBook, the Open Institute of Technology (OPIT) will take you on a journey through the world of AI agents as they pertain to education. You will learn what these agents are, how they work, and what they’re capable of achieving in the educational sector. We also explore best practices and key approaches, focusing on how educators can use AI agents to the benefit of their students. Finally, we will discuss other AI tools that both complement and enhance an AI agent’s capabilities, ensuring you deliver the best possible educational experience to your students.

The Open Institute of Technology (OPIT) began enrolling students in 2023 to help bridge the skills gap between traditional university education and the requirements of the modern workplace. OPIT’s MSc courses aim to help professionals make a greater impact on their workplace through technology.
OPIT’s courses have become popular with business leaders hoping to develop a strong technical foundation to understand technologies, such as artificial intelligence (AI) and cybersecurity, that are shaping their industry. But OPIT is also attracting professionals with strong technical expertise looking to engage more deeply with the strategic side of digital innovation. This is the story of one such student, Obiora Awogu.
Meet Obiora
Obiora Awogu is a cybersecurity expert from Nigeria with a wealth of credentials and a decade of industry experience. Working in a lead data security role, he was considering “what’s next” for his career. He was contemplating earning an MSc, a qualification he did not yet have but one that could open important doors. He discussed the idea with his mentor, who recommended OPIT, where he himself was already enrolled in an MSc program.
Obiora started looking at the program as a box-checking exercise, but quickly realized that it had so much more to offer. He recognized that, as well as being a fully EU-accredited course that could provide new opportunities with companies around the world, the program was designed for people like him, who were ready to go from building to leading.
OPIT’s MSc in Cybersecurity
OPIT’s MSc in Cybersecurity launched in 2024 as a fully online and flexible program ideal for busy professionals like Obiora who want to study without taking a career break.
The course integrates technical and leadership expertise, equipping students to not only implement cybersecurity solutions but also lead cybersecurity initiatives. The curriculum combines technical training with real-world applications, emphasizing hands-on experience and soft skills development alongside hard technical know-how.
The course is led by Tom Vazdar, the Area Chair for Cybersecurity at OPIT, as well as the Chief Security Officer at Erste Bank Croatia and an Advisory Board Member for EC3 European Cybercrime Center. He is representative of the type of faculty OPIT recruits, who are both great teachers and active industry professionals dealing with current challenges daily.
Experts such as Matthew Jelavic, the CEO at CIM Chartered Manager Canada and President of Strategy One Consulting; Mahynour Ahmed, Senior Cloud Security Engineer at Grant Thornton LLP; and Sylvester Kaczmarek, former Chief Scientific Officer at We Space Technologies, join him.
Course content includes:
- Cybersecurity fundamentals and governance
- Network security and intrusion detection
- Legal aspects and compliance
- Cryptography and secure communications
- Data analytics and risk management
- Generative AI cybersecurity
- Business resilience and response strategies
- Behavioral cybersecurity
- Cloud and IoT security
- Secure software development
- Critical thinking and problem-solving
- Leadership and communication in cybersecurity
- AI-driven forensic analysis in cybersecurity
As with all OPIT’s MSc courses, it wraps up with a capstone project and dissertation, which sees students apply their skills in the real world, either with their existing company or through apprenticeship programs. This not only gives students hands-on experience, but also helps them demonstrate their added value when seeking new opportunities.
Obiora’s Experience
Speaking of his experience with OPIT, Obiora said that it went above and beyond what he expected. He was not surprised by the technical content, in which he was already well-versed, but rather the change in perspective that the course gave him. It helped him move from seeing himself as someone who implements cybersecurity solutions to someone who could shape strategy at the highest levels of an organization.
OPIT’s MSc has given Obiora the skills to speak to boards, connect risk with business priorities, and build organizations that don’t just defend against cyber risks but adapt to a changing digital world. He commented that studying at OPIT did not give him answers; instead, it gave him better questions and the tools to lead. Of course, it also ticks the MSc box, and while that might not be the main reason for studying at OPIT, it is certainly a clear benefit.
Obiora has now moved into the Chief Information Security Officer role at MoMo, Payment Service Bank for MTN. There, he is building cyber-resilient financial systems, contributing to public-private partnerships, and mentoring the next generation of cybersecurity experts.
Leading Cybersecurity in Africa
As well as having a significant impact within his own organization, studying at OPIT has helped Obiora develop the skills and confidence needed to become a leader in the cybersecurity industry across Africa.
In March 2025, Obiora was featured on the cover of CIO Africa Magazine and then sat on the “Future of Cybersecurity Careers in the Age of Generative AI” panel for Comercio Ltd. The Lagos Chamber of Commerce and Industry also invited him to speak on cybersecurity in Africa.
Obiora recently presented the keynote speech at the Hackers Secret Conference 2025 on “Code in the Shadows: Harnessing the Human-AI Partnership in Cybersecurity.” In the talk, he explored how AI is revolutionizing incident response – enhancing its speed, precision, and proactivity – and strengthening human-AI collaboration.
An OPIT Success Story
Talking about Obiora’s success, the OPIT Area Chair for Cybersecurity said:
“Obiora is a perfect example of what this program was designed for – experienced professionals ready to scale their impact beyond operations. It’s been inspiring to watch him transform technical excellence into strategic leadership. Africa’s cybersecurity landscape is stronger with people like him at the helm. Bravo, Obiora!”
Learn more about OPIT’s MSc in Cybersecurity and how it can support the next steps of your career.