As computing technology evolved and the concept of linking multiple computers together into a “network” that could share data came into being, it was clear that a model was needed to define and enable those connections. Enter the OSI model.


This model allows various devices and software to “communicate” with one another by creating a set of universal rules and functions. Let’s dig into what the model entails.


History of the OSI Model


In the late 1970s, the continued development of computerized technology saw many companies start to introduce their own systems. These systems stood alone from others. For example, a computer at Retailer A had no way to communicate with a computer at Retailer B, and neither computer could communicate with the various vendors and other organizations within the retail supply chain.


Clearly, some way of connecting these standalone systems was needed, leading researchers from France, the U.S., and the U.K. to split into two groups – the International Organization for Standardization (ISO) and the International Telegraph and Telephone Consultative Committee (CCITT).


In 1983, these two groups merged their work to create “The Basic Reference Model for Open Systems Interconnection (OSI).” This model established industry standards for communication between networked devices, though the path to OSI’s implementation wasn’t as clear as it could have been. The 1980s and 1990s saw the introduction of another model – the TCP/IP model – which competed against the OSI model for supremacy. TCP/IP gained so much traction that it became the cornerstone model for the then-budding internet, leading to the OSI model falling out of favor in many sectors. Despite this, the OSI model is still a valuable reference point for students who want to learn more about networking, and it still has some practical uses in industry.


The OSI Reference Model


The OSI model works by splitting the concept of computers communicating with one another into seven computer network layers (defined below), each offering standardized rules for its specific function. During the rise of the OSI model, these layers worked in concert, allowing systems to communicate as long as they followed the rules.


Though the OSI model has fallen out of favor on a practical level, it still offers several benefits:


  • The OSI model is perfect for teaching network architecture because it defines how computers communicate.
  • OSI is a layered model, with separation between each layer, so one layer doesn’t affect the operation of any other.
  • The OSI model offers flexibility because of the distinctions it makes between layers, with users being able to replace protocols in any layer without worrying about how they’ll impact the other layers.

The 7 Layers of the OSI Model


The OSI reference model is a lot like an onion. It has several layers, each standing alone but each needing to be peeled back to get a result. But where peeling back the layers of an onion gets you a tasty ingredient or treat, peeling them back in the OSI model delivers a better understanding of networking and the protocols that lie behind it.


Each of these seven layers serves a different function.


Layer 1: Physical Layer


Sitting at the lowest level of the OSI model, the physical layer is all about the hows and wherefores of transmitting electrical signals from one device to another. Think of it as the protocols needed for the pins, cables, voltages, and every other component of a physical device if said device wants to communicate with another that uses the OSI model.


Layer 2: Data Link Layer


With the physical layer in place, the challenge shifts to transmitting data between devices. The data link layer defines how node-to-node transfer occurs, allowing for the packaging of data into “frames” and the correction of errors that may happen in the physical layer.


The data link layer has two “sub-layers” of its own:


  • MAC – The Media Access Control sub-layer, which offers multiplexing and flow control to govern a device’s transmissions over an OSI network.
  • LLC – The Logical Link Control sub-layer, which offers error control over the physical media (i.e., the devices) used to transmit data across a connection.
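To make framing and error detection concrete, here’s a minimal sketch of a toy data-link frame in Python. The field layout and the CRC-based trailer are simplified illustrations, not the real Ethernet format (which uses additional fields and its own 32-bit CRC definition):

```python
import zlib

# A toy frame: destination MAC + source MAC + payload + CRC trailer.
def build_frame(dst_mac: bytes, src_mac: bytes, payload: bytes) -> bytes:
    body = dst_mac + src_mac + payload
    crc = zlib.crc32(body).to_bytes(4, "big")  # error-detection trailer
    return body + crc

def check_frame(frame: bytes) -> bool:
    # Recompute the CRC over everything except the trailer and compare.
    body, crc = frame[:-4], frame[-4:]
    return zlib.crc32(body).to_bytes(4, "big") == crc

frame = build_frame(b"\xaa" * 6, b"\xbb" * 6, b"hello")
print(check_frame(frame))  # an intact frame passes the check

# Flip one payload byte to simulate noise on the physical layer.
corrupted = frame[:-5] + b"\x00" + frame[-4:]
print(check_frame(corrupted))  # the corrupted frame fails the check
```

This is the essence of the error correction the data link layer provides: a receiver that detects a bad checksum can discard the frame and request retransmission.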

Layer 3: Network Layer


The network layer is like an intermediary between devices, as it accepts “frames” from the data link layer and sends them on their way to their intended destination. Think of this layer as the postal service of the OSI model.
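The “postal service” works by matching destination addresses against address ranges. Python’s standard `ipaddress` module lets us sketch that decision, with the addresses below chosen purely for illustration:

```python
import ipaddress

# A router deciding whether a destination belongs to a directly
# attached subnet or must be forwarded elsewhere.
local_net = ipaddress.ip_network("192.168.1.0/24")

dst_local = ipaddress.ip_address("192.168.1.42")
dst_remote = ipaddress.ip_address("10.0.0.7")

print(dst_local in local_net)   # deliver locally
print(dst_remote in local_net)  # forward toward another network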



Layer 4: Transport Layer


If the network layer is a delivery person, the transport layer is the van that the delivery person uses to carry their parcels (i.e., data packets) between addresses. This layer regulates the sequencing, sizing, and transferring of data between hosts and systems. TCP (Transmission Control Protocol) is a good example of a transport layer protocol in practical applications.
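You can see TCP’s reliable, ordered byte stream with a minimal round trip over the loopback interface using Python’s standard `socket` module. The echo server and message here are invented for the sketch:

```python
import socket
import threading

# A tiny echo server: accept one connection, send the bytes back.
def run_echo_server(server_sock: socket.socket) -> None:
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=run_echo_server, args=(server,))
t.start()

# The client side: connect, send, and receive the echoed reply.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello, transport layer")
reply = client.recv(1024)
client.close()
t.join()
server.close()
print(reply.decode())
```

Notice that the application never sees sequencing or retransmission; TCP handles those details beneath the stream of bytes.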


Layer 5: Session Layer


When one device wants to communicate with another, it sets up a “session” in which the communication takes place, similar to how your boss may schedule a meeting with you when they want to talk. The session layer regulates how the connections between machines are set up and managed, in addition to providing authorization controls to ensure no unwanted devices can interrupt or “listen in” on the session.


Layer 6: Presentation Layer


Presentation matters when sending data from one system to another. The presentation layer “pretties up” data by formatting and translating it into a syntax that the recipient’s application accepts. Encryption and decryption are a perfect example, as a data packet can be encrypted to be unreadable to anybody who intercepts it, only to be decrypted via the presentation layer so the intended recipient can see what the data packet contains.
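Here’s a sketch of those two presentation-layer jobs in Python: translating text into a wire encoding, then scrambling it. The repeating-key XOR “cipher” below is a teaching toy only; real systems use vetted cryptography such as TLS with AES:

```python
from itertools import cycle

# Toy cipher for illustration -- NOT secure for real use.
def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

message = "Invoice total: 1,250"       # application data
encoded = message.encode("utf-8")      # translate to a wire syntax
ciphertext = xor_bytes(encoded, b"secret")  # unreadable if intercepted

# The receiving presentation layer reverses both steps.
plaintext = xor_bytes(ciphertext, b"secret").decode("utf-8")
print(plaintext)
```

XOR with the same key is its own inverse, which is why the same function both encrypts and decrypts here.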


Layer 7: Application Layer


The application layer is a front end through which the end user can interact with everything that’s going on behind the scenes in the network. It’s usually a piece of software that puts a user-friendly face on a network. For instance, the Google Chrome web browser is an application layer for the entire network of connections that make up the internet.
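What a browser actually hands to the layers below is a plain-text application-layer message. This sketch builds an HTTP/1.1 request without sending it; the host name is just an example:

```python
# Construct the application-layer message a browser would send
# for a simple page fetch. Transmission is the lower layers' job.
host = "example.com"
request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "Connection: close\r\n"
    "\r\n"
)
print(request)
```

Everything else in this article (sessions, transport streams, packets, frames, signals) exists to carry text like this from one machine to another.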


Interactions Between OSI Layers


Though each of the OSI layers is independent (contributing to the flexibility mentioned earlier), they must also interact with one another to make the network functional.


We see this most obviously in the data encapsulation and de-encapsulation that occurs in the model. Encapsulation is the process of adding information to a data packet as it travels, with de-encapsulation being the method used to remove that added data so the end user can read what was originally sent. The previously mentioned encryption and decryption of data is a good example.


That process of encapsulation and de-encapsulation defines how the OSI model works. Each layer adds its own little “flavor” to the transmitted data packet, with each subsequent layer either adding something new or de-encapsulating something previously added so it can read the data. Each of these additions and subtractions is governed by the protocols set within each layer. A perfect network can only exist if these protocols properly govern data transmission, allowing for communication between each layer.
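The wrapping and unwrapping can be sketched in a few lines of Python. The header contents here are invented placeholders; real headers carry addresses, ports, checksums, and more:

```python
# Each layer prepends its own header on the way down
# and strips it again on the way up.
layers = ["transport", "network", "data-link"]

def encapsulate(payload: bytes) -> bytes:
    for layer in layers:
        payload = f"[{layer}]".encode() + payload
    return payload

def de_encapsulate(packet: bytes) -> bytes:
    # Headers come off in the reverse order they were added.
    for layer in reversed(layers):
        header = f"[{layer}]".encode()
        assert packet.startswith(header), f"missing {layer} header"
        packet = packet[len(header):]
    return packet

wire = encapsulate(b"GET /index.html")
print(wire)
recovered = de_encapsulate(wire)
print(recovered.decode())
```

The outermost header belongs to the lowest layer, which is exactly why a receiver must peel the layers in reverse order to recover the original data.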


Real-World Applications of the OSI Model


There’s a reason why the OSI model is often called a “reference” model – though important, it was quickly replaced with other models. As a result, you’ll rarely see the OSI model used as a way to connect devices, with TCP/IP being far more popular. Still, there are several practical applications for the OSI model.


Network Troubleshooting and Diagnostics


Given that some modern computer networks are unfathomably complex, picking out a single error that messes up the whole communication process can feel like navigating a minefield. Every wrong step causes something else to blow up, leading to more problems than you solve. The OSI model’s layered approach offers a way to break down the different aspects of a network to make it easier to identify problems.


Network Design and Implementation


Though the OSI model has few practical purposes, as a theoretical model it’s often seen as the basis for all networking concepts that came after. That makes it an ideal teaching tool for showcasing how networks are designed and implemented. Some even refer to the model when creating networks using other models, with the layered approach helping them understand complex networks.


Enhancing Network Security


The concept of encapsulation and de-encapsulation comes to the fore again here (remember – encryption), as this concept shows us that it’s dangerous to allow a data packet to move through a network with no interactions. The OSI model shows how altering that packet as it goes on its journey makes it easier to protect data from unwanted eyes.



Limitations and Criticisms of the OSI Model


Despite its many uses as a teaching tool, the OSI model has limitations that explain why it sees few practical applications:


  • Complexity – As valuable as the layered approach may be to teaching networks, it’s often too complex to execute in practice.
  • Overlap – The very flexibility that makes OSI great for people who want more control over their networks can come back to bite the model. The failure to implement proper controls and protocols can lead to overlap, as can the layered approach itself. Each of the computer network layers needs the others to work.
  • The Existence of Alternatives – The OSI model walked so other models could run, establishing many fundamental networking concepts that other models executed better in practical terms. Again, the massive network known as the internet is a great example, as it uses the TCP/IP model to reduce complexity and more effectively transmit data.

Use the OSI Reference Model in Computer Networking


Though it has little practical application in today’s world, the OSI model is a theoretical model that played a crucial role in establishing many of the “rules” of networking still used today. Its importance is still recognized by the fact that many computing courses use the OSI model to teach the fundamentals of networks.


Think of learning about the OSI model as being similar to laying the foundations for a house. You’ll get to grips with the basic concepts of how networks work, allowing you to build up your knowledge by incorporating both current networking technology and future advancements to become a networking specialist.
