What does an average day look like for somebody working in cybersecurity?

That isn’t an easy question to answer when you consider the vastness of the field. Somebody who works in cybersecurity needs to stay constantly abreast of industry changes – especially new attacks cooked up by cybercriminals – and help their employers create and tweak their security plans.

However, thanks to Tom Vazdar, who has developed the Open Institute of Technology’s (OPIT’s) Master’s Degree in Enterprise Cybersecurity, we can provide some insight into what your average day may look like.

Who Is Tom Vazdar?

Serving as the Program Chair of OPIT’s upcoming Master’s Degree in Enterprise Cybersecurity, Tom brings a vast amount of practical experience to the table, and his work has spanned the globe. He has served as Chief Security Officer for a major Croatian bank and as Chief Information Officer for a company in the United States’ manufacturing sector.

His practical experience spans other industries – including technology and finance – and he’s currently completing a doctorate while running his own practice. Tom’s specialty is the behavioral aspect of cybersecurity, and his deep understanding of the “culture” that surrounds the field has been shaped by his work developing strategies, policies, and frameworks for his past employers.

The Importance of Trends

The first thing Tom highlights is that a cybersecurity professional has to follow the trends in the industry. As he points out: “We are living in an era where digital transformation is accelerating, and with it, the complexity and frequency of cyber threats are also increasing.” To demonstrate this, he points to a 2023 ISACA report showing that cyber attacks increased by 48% in 2023 compared to 2022. More worryingly, 62% of the organizations that experienced these attacks underreported them – an indication that many simply don’t have the talent to truly understand the threat they face.

As a cybersecurity professional, your role is to provide the expertise such companies are sorely lacking.

Thankfully, many business leaders understand that they need this expertise. Tom points out that 59% of leaders say their cybersecurity teams are understaffed, leading to rising demand for people with the following technical skills:

  • Identity and access management
  • Data protection
  • Cloud computing
  • DevSecOps (development, security, and operations)

Furthermore, Tom says that artificial intelligence (AI) is completely transforming the cybersecurity industry. While AI is often beneficial to professionals in the field – it can enhance threat detection and response – it is also a danger. Malicious entities can use AI to conduct a new wave of attacks, such as data poisoning, for which you need to be prepared as a cybersecurity professional.

Tom’s discussion of these emerging trends highlights one of the most critical aspects of a day in the life of a cybersecurity professional – learning is key. There is no such thing as static knowledge because the industry (and the attacks your company may face) constantly evolves.

An Average Day Broken Down

Now that you understand how important staying on top of the ever-changing trends in cybersecurity is for those in the field, it’s possible to break things down a little further. On an average day, you may find yourself working on any, some, or even all of the following tasks.

Developing and Maintaining a Cybersecurity Strategy

Given that such a large number of business leaders are understaffed and have minimal access to appropriate talent, you’ll often be tasked with creating and maintaining a company’s cybersecurity strategy.

This strategy is not as simple as creating a collection of actions to take in the event of an attack.

Tom emphasizes the importance not only of proactivity but also of integrating the cybersecurity strategy into the wider business strategy. “It becomes part of the mission and vision,” he says. “After all, there are two things that are important to companies – their data and customer trust. If you lose customer trust, you lose your business. If you lose your data, you lose your business.”

As a technically adept professional, you’ll be tasked with building a strategy that grows ever more complex as the threats the company faces become more advanced. New technologies – such as AI and machine learning – will be used against you, and your main task is to ensure the strategy you create can fend off such technologically empowered attacks.

The Simpler Day-to-Day

Now, let’s move away from the complexities of developing an overarching plan and go into more detail about daily responsibilities. A cybersecurity professional is usually tasked with the day-to-day maintenance of systems.

It’s all about control.

Tom says that much of the role involves proactively identifying new protective measures. For instance, software patching is key – outdated software has vulnerabilities that a hacker can exploit. You’ll need to stay up to date on the development of patches for the software your company uses and, crucially, implement those patches as soon as they’re available.
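
To make that routine concrete, here is a minimal, hedged sketch of one small slice of patch monitoring – checking a Python environment for packages with newer releases – using pip’s own outdated-packages report. Real patch management spans operating systems, third-party applications, and dedicated vulnerability-management tooling; this only illustrates the habit of checking regularly.

```python
import json
import subprocess

def outdated_python_packages() -> list[dict]:
    """Return pip's report of installed packages that have newer releases."""
    # 'pip list --outdated --format=json' emits entries such as
    # {"name": "requests", "version": "2.28.0", "latest_version": "2.32.3", ...}
    result = subprocess.run(
        ["python", "-m", "pip", "list", "--outdated", "--format=json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    for pkg in outdated_python_packages():
        print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")
```

Run on a schedule, even a small report like this turns “patch promptly” from good advice into a visible daily task.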

Creating regular backups is also part of this day-to-day work. It’s an area that many businesses neglect – perhaps assuming that nothing bad can happen to them – but a backup will be a lifesaver if a hacker compromises your company’s main data stores.
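
As a rough illustration, the sketch below – which assumes a hypothetical ./data directory to protect and a ./backups destination – creates a timestamped archive and prunes older copies so the job can run unattended, for example from cron. Production backups would add encryption, off-site copies, and regular restore testing.

```python
import tarfile
from datetime import datetime, timezone
from pathlib import Path

SOURCE = Path("./data")      # hypothetical directory to protect
DEST = Path("./backups")     # hypothetical backup destination
KEEP = 7                     # number of recent archives to retain

def create_backup() -> Path:
    """Write a timestamped .tar.gz archive of SOURCE into DEST."""
    DEST.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    archive = DEST / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SOURCE, arcname=SOURCE.name)
    return archive

def prune_old_backups() -> None:
    """Keep only the KEEP most recent archives."""
    archives = sorted(DEST.glob("backup-*.tar.gz"))
    for old in archives[:-KEEP]:
        old.unlink()

if __name__ == "__main__":
    print(f"Created {create_backup()}")
    prune_old_backups()
```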

Tending to Your Ecosystem

It’s not simply your own institution that you must maintain as a cybersecurity professional – everyone who interacts with that institution must also be managed. Vendors, external software developers, and any other part of your supply chain need to be as risk-aware as your business. As Tom puts it: “If they don’t care about vulnerabilities in their system, and they work for you as a company, then you’ll have an issue because their risk suddenly becomes your risk.”

As such, managing the cybersecurity aspect of your company’s relationships with its partners is a vital part of your duties. You may plan alongside those partners, help them improve their practices, or cooperate with them to create strategies that encompass your entire supply chain.

Continued Education

Tom goes on to highlight just how important continued education is to the success of a cybersecurity professional. “It’s always interesting. And if you’re really passionate about it, cybersecurity becomes your lifestyle,” he says. “You want to see what’s new. What are the new attack methods, what are your competitors doing, and what is new on the market.”

He points to a simple example – phishing emails.

These emails – which were traditionally laden with spelling errors that made them easier to spot – are becoming increasingly hard to detect thanks to the use of AI. They’re simply written better. Failing to understand and adapt to that shift makes it harder to educate yourself and the people in your company.

Your average day may also involve educating your colleagues about upcoming threats and new attack methods they need to understand. The phishing example Tom shares applies here. Any email that looks somewhat legitimate is a threat, so continued education of your colleagues is essential to stop that threat from having its intended effect.
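
To make that education concrete, here is a deliberately simple sketch of the kind of warning signs an awareness session might walk through, built around a hypothetical allowlist of trusted sender domains and a few urgency phrases. Real phishing defenses live in mail gateways and, increasingly, ML-based classifiers; this only illustrates the signals worth teaching colleagues to notice.

```python
import re

# Hypothetical allowlist of domains your organization actually corresponds with.
TRUSTED_DOMAINS = {"example.com", "partner-bank.example"}

# Urgency phrasing is a classic social-engineering cue worth teaching people to spot.
URGENCY_PATTERNS = [r"urgent", r"verify your account", r"password expires", r"act now"]

def phishing_signals(sender: str, subject: str, body: str) -> list[str]:
    """Return human-readable warning signs found in a message."""
    signals = []
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain not in TRUSTED_DOMAINS:
        signals.append(f"sender domain '{domain}' is not on the allowlist")
    text = f"{subject} {body}".lower()
    for pattern in URGENCY_PATTERNS:
        if re.search(pattern, text):
            signals.append(f"urgency cue matched: '{pattern}'")
    return signals

if __name__ == "__main__":
    print(phishing_signals(
        sender="it-support@examp1e.com",
        subject="URGENT: verify your account",
        body="Your password expires today. Act now.",
    ))
```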

An Example of a Typical Project

Given how vast the cybersecurity field is, the range of projects you may work on will vary enormously. However, Tom provides an example from his time in the banking industry, when he saw the rise of the Zeus botnet.

In this case, his responsibilities were twofold.

First – finding a way to defend against the botnet’s attacks. That involved researching the malware to figure out how it spread, allowing him to put protective measures in place to prevent that spread. The second task involved creating educational programs, both for employees and for the bank’s clients, to make them aware of the Zeus botnet.

Here, we see the education part of the cybersecurity professional’s “average day” coming into play, complementing the more technical aspects of dealing with malware. We also see supply chain risk at work – each client is part of the bank’s wider ecosystem, meaning they need to understand how to defend themselves just as much as the bank does.

The Qualifications Needed to Work in Cybersecurity

With a multitude of cybersecurity qualifications available – many covering specific niches – it’s tough to find the appropriate one to make you attractive to an employer. That’s where Tom’s work with OPIT comes in. The master’s degree that he’s developing not only focuses on the technical skills a professional needs but places those skills in a business context.

The upcoming course will offer electives in subjects such as AI, cloud security, and IoT security, granting students the flexibility to pursue a specialization within their degree. The overall program is also closely aligned with industry certifications – such as the CISSP – to ensure graduates are as industry-ready as they are academically qualified.

The intention, Tom says, is to help fill the cybersecurity skills gap that some 3 million businesses say they have. The program provides the right blend of technical and managerial knowledge, in addition to allowing students to pursue subjects of particular interest to them.

Ultimately, it doesn’t teach absolutely everything that you could learn about the industry. No course can. But it does equip you with key foundational knowledge aligned with industry certifications that make you more employable. That, combined with your continued education and completion of relevant certifications once you’re employed, means you have an enormous opportunity to build a successful cybersecurity career with OPIT.

So, the qualifications needed for the industry start with a relevant degree. They then blossom out. Professionals focus on courses that meet the specific requirements of their roles so that they learn the cybersecurity techniques that are most effective for their needs.

 

Related posts

Il Sole 24 Ore: Integrating Artificial Intelligence into the Enterprise – Challenges and Opportunities for CEOs and Management
OPIT - Open Institute of Technology
Apr 14, 2025 6 min read

Expert Pierluigi Casale analyzes companies’ adoption of AI, the ethical and regulatory challenges, and the differing approaches of large companies and SMEs

By Gianni Rusconi

Easier said than done: set against the ever-growing collection of critical issues and opportunities related to artificial intelligence, the task facing CEOs and management – adequately integrating this technology into the company – is genuinely difficult. Pierluigi Casale, professor at OPIT (Open Institute of Technology, an academic institution founded two years ago and specialized in Computer Science) and technical consultant to the European Parliament on the implementation and regulation of AI, is among those who contributed to the definition of the AI Act, advising on safety and civil liability. His task, in short, is to ensure that the adoption of artificial intelligence – primarily within the parliamentary committees operating in Brussels – is not only efficient, but also ethical and compliant with regulations. It is clearly not an easy job.

The experience gained over the last 15 years in machine learning, along with roles at organizations such as Europol and at leading technology companies, is what Casale brings to the table as he balances the needs of EU bodies against the pressure exerted by American Big Tech and works to preserve an independent approach to the regulation of artificial intelligence. It is a technology, it is worth remembering, that demands broad and diversified knowledge, ranging from regulatory and application questions to geopolitical issues, and from computational limitations (common to European companies and public institutions) to the challenges of training large language models.

CEOs and AI

When we asked specifically how CEOs and C-suites are “digesting” AI in terms of ethics, safety, and responsibility, Casale did not shy away, framing the topic through his own professional experience. “I have noticed two trends in particular: the first concerns companies that started using artificial intelligence before the AI Act and that today have the need, as well as the obligation, to adapt to the new ethical framework in order to be compliant and avoid sanctions; the second concerns companies, like the Italian ones, that are only now approaching the topic, often through experimental and incomplete projects – literally, “proof of concept” projects (ed.) – that have yet to produce value. In this case, the ethical and regulatory component is integrated into the adoption process.”

In general, according to Casale, there is still much to do even from a purely regulatory perspective: there is not yet a fully coherent vision across the different countries, nor the same speed in implementing the guidance. Spain is setting an example in this regard. With a royal decree of 8 November 2023, it established a dedicated “sandbox” – a regulatory experimentation space for artificial intelligence built around a controlled test environment for the development and pre-marketing phases of certain AI systems – in order to verify compliance with the requirements and obligations set out in the AI Act and to guide companies towards a regulated adoption of the technology.

Read the full article below (in Italian):

Read the article
The Lucky Future: How AI Aims to Change Everything
OPIT - Open Institute of Technology
Apr 10, 2025 7 min read

There is no question that the spread of artificial intelligence (AI) is having a profound impact on nearly every aspect of our lives.

But is an AI-powered future one to be feared, or does AI offer the promise of a “lucky future”?

That “lucky future” prediction comes from Zorina Alliata, Principal AI Strategist at Amazon and AI faculty member at Georgetown University and the Open Institute of Technology (OPIT), in her recent webinar “The Lucky Future: How AI Aims to Change Everything” (February 18, 2025).

However, according to Alliata, such a future depends on how the technology develops and whether strategies can be implemented to mitigate the risks.

How AI Aims to Change Everything

For many people, AI is already changing the way they work. However, more broadly, AI has profoundly impacted how we consume information.

From the curation of a social media feed and the summary answer to a search query from Gemini at the top of your Google results page to the AI-powered chatbot that resolves your customer service issues, AI has quickly and quietly infiltrated nearly every aspect of our lives in the past few years.

While there have been significant concerns recently about the potentially negative impact of AI, Alliata’s “lucky future” prediction takes these fears into account. As she detailed in her webinar, a future with AI will have to take into consideration:

  • Where we are currently with AI and future trajectories
  • The impact AI is having on the job landscape
  • Sustainability concerns and ethical dilemmas
  • The fundamental risks associated with current AI technology

According to Alliata, by addressing these risks, we can craft a future in which AI helps individuals better align their needs with potential opportunities and limitations of the new technology.

Industry Applications of AI

While AI has been in development for decades, Alliata describes a period known as the “AI winter,” during which educators like herself studied AI technology but had not yet arrived at practical applications. Concerns over how to make AI profitable also contributed to this period of uncertainty.

That all changed about 10-15 years ago when machine learning (ML) improved significantly. This development led to a surge in the creation of business applications for AI. Beginning with automation and robotics for repetitive tasks, the technology progressed to data analysis – taking a deep dive into data and finding not only new information but new opportunities as well.

This further developed into generative AI capable of completing creative tasks. Generative AI now produces around one billion words per day, compared to the one trillion produced by humans.

We are now at the stage where AI can complete complex tasks involving multiple steps. In her webinar, Alliata gave the example of a team creating storyboards and user pathways for a new app they wanted to develop. Using photos and rough images, they were able to use AI to generate the code for the app, saving hundreds of hours of manpower.

The next step in AI evolution is Artificial General Intelligence (AGI), a highly autonomous form of AI that could replicate or in some cases exceed human intelligence. While the benefits of such technology may be readily obvious to some, the industry itself is divided not only on whether this form of AI is close at hand or simply unachievable with current tools and technology, but also on whether it should be developed at all.

This unpredictability, according to Alliata, represents both the excitement and the concerns about AI.

The AI Revolution and the Job Market

According to Alliata, the job market is the next area where the AI revolution can profoundly impact our lives.

To date, the AI revolution has not resulted in the widespread layoffs initially feared. Rather than making employees redundant, many jobs have evolved so that people work alongside AI. In fact, AI has also created new jobs, such as AI prompt writer.

However, the prediction is that as AI becomes more sophisticated, it will need less human support, resulting in greater job churn. Alliata shared statistics from various studies predicting that as many as 27% of all jobs are at high risk of becoming redundant due to AI, and that 40% of working hours could be impacted by large language models (LLMs) like ChatGPT.

Furthermore, AI may impact some roles and industries more than others. For example, one study suggests that in high-income countries, 8.5% of jobs held by women were likely to be impacted by potential automation, compared to just 3.9% of jobs held by men.

Is AI Sustainable?

While Alliata shared the many ways in which AI can potentially save businesses time and money, she also highlighted that it is an expensive technology in terms of sustainability.

Conducting AI training and processing puts a heavy strain on processors and data centers, requiring a great deal of energy. According to estimates, ChatGPT alone uses as much electricity per day as 121 U.S. households use in an entire year. Gartner predicts that by 2030, AI could consume 3.5% of the world’s electricity.

To reduce these energy requirements, Alliata highlighted potential paths forward such as hardware optimization (more energy-efficient chips), greater use of renewable energy sources, and algorithm optimization. For example, models that can be adapted to a variety of uses through prompt engineering and parameter-efficient tuning are far less energy-hungry than models trained from scratch.
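
As a rough, library-free illustration of why parameter-efficient tuning saves compute (and therefore energy), the sketch below freezes a toy “pre-trained” weight matrix and trains only a small low-rank adapter on top of it, in the spirit of LoRA-style methods. Everything here – the matrix sizes, data, and learning rate – is invented for the example; the point is simply that only a tiny fraction of the parameters is ever updated.

```python
import numpy as np

rng = np.random.default_rng(0)
d, rank = 64, 4

# Toy stand-in for a pre-trained layer: a frozen d x d weight matrix.
W_frozen = rng.normal(size=(d, d))

# Low-rank adapter: only A and B are trained (2*d*rank parameters instead of d*d).
A = np.zeros((d, rank))
B = rng.normal(scale=0.1, size=(rank, d))

def forward(x):
    return x @ (W_frozen + A @ B)

# Synthetic "new task": the frozen weights plus a small low-rank shift.
W_target = W_frozen + rng.normal(scale=0.1, size=(d, rank)) @ rng.normal(size=(rank, d))
X = rng.normal(size=(256, d))
Y = X @ W_target

def mse():
    return float(np.mean((forward(X) - Y) ** 2))

print("loss before:", round(mse(), 4))
lr = 0.1
for _ in range(300):
    err = forward(X) - Y                 # residual, shape (256, d)
    grad_A = X.T @ err @ B.T / len(X)    # gradients touch only the adapter;
    grad_B = A.T @ X.T @ err / len(X)    # W_frozen is never updated
    A -= lr * grad_A
    B -= lr * grad_B
print("loss after: ", round(mse(), 4))
print("trained parameters:", A.size + B.size, "of", W_frozen.size + A.size + B.size)
```

The same idea, applied to models with billions of parameters, is why adapting an existing model is dramatically cheaper than training one from scratch.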

Risks of Using Generative AI

While Alliata is clearly an advocate for the benefits of AI, she also highlighted the risks associated with using generative AI, particularly LLMs.

  • Uncertainty – While we rely on AI for answers, we aren’t always sure that the answers provided are accurate.
  • Hallucinations – Technology designed to answer questions can make up facts when it does not know the answer.
  • Copyright – LLMs are often trained on copyrighted data without permission from the creators.
  • Bias – LLMs are often trained on biased data, and that bias carries through into the model’s behavior and output.
  • Vulnerability – Users can bypass the original functionality of an LLM and use it for a different purpose.
  • Ethical Risks – AI applications pose significant ethical risks, including the creation of deepfakes, the erosion of human creativity, and the aforementioned risks of unemployment.

Mitigating these risks relies on pillars of responsibility for using AI, including value alignment of the application, accountability, transparency, and explainability.

The last one, according to Alliata, is vital on a human level. Imagine you work for a bank using AI to assess loan applications. If a loan is denied, the explanation you give to the customer can’t simply be “Because the AI said so.” There needs to be firm and explainable data behind the reasoning.
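
A hedged sketch of what “explainable data behind the reasoning” can look like, using a toy logistic-regression loan model whose features and weights are entirely made up for illustration: for a given applicant, each feature’s contribution to the score is listed, so a denial can be explained in plain terms rather than “because the AI said so.” Production systems would rely on audited models and established attribution tooling rather than hand-rolled weights.

```python
import math

# Toy, made-up model: feature names and weights are illustrative only.
FEATURES = ["income_to_debt_ratio", "years_employed", "missed_payments", "credit_utilization"]
WEIGHTS  = [1.8, 0.6, -2.4, -1.5]
BIAS = -0.5

def approval_probability(applicant: dict) -> float:
    """Logistic score of the toy model."""
    z = BIAS + sum(w * applicant[f] for f, w in zip(FEATURES, WEIGHTS))
    return 1 / (1 + math.exp(-z))

def explain(applicant: dict) -> list[tuple[str, float]]:
    """Per-feature contributions to the score, most damaging first."""
    contributions = [(f, w * applicant[f]) for f, w in zip(FEATURES, WEIGHTS)]
    return sorted(contributions, key=lambda item: item[1])

if __name__ == "__main__":
    applicant = {
        "income_to_debt_ratio": 0.4,
        "years_employed": 1.0,
        "missed_payments": 2.0,
        "credit_utilization": 0.9,
    }
    print(f"approval probability: {approval_probability(applicant):.2f}")
    for feature, contribution in explain(applicant):
        print(f"  {feature}: {contribution:+.2f}")
```

Even this crude breakdown lets a human say “missed payments and high credit utilization drove the decision” – the kind of concrete reasoning the paragraph above calls for.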

OPIT’s Master’s in Responsible Artificial Intelligence explores these and other risks and responsibilities inherent in AI.

A Lucky Future

Despite the potential risks, Alliata concludes that AI presents even more opportunities and solutions in the future.

Information overload and decision fatigue are major challenges today. Imagine you want to buy a new car. You have a dozen features you desire, alongside hundreds of options, as well as thousands of websites containing the relevant information. AI can help you cut through the noise and narrow the information down to what you need based on your specific requirements.

Alliata also shared how AI is changing healthcare, allowing patients to understand their health data, make informed choices, and find healthcare professionals who meet their needs.

It is this functionality that can lead to the “lucky future.” Personalized guidance based on an analysis of vast amounts of data means that each person is more likely to make the right decision with the right information at the right time.

Read the article