There’s no doubt about it – artificial intelligence has revolutionized almost every aspect of modern life. Healthcare, finance, and manufacturing are just some of the sectors that have been virtually turned upside down by this powerful new force. Cybersecurity also ranks high on this list.

But as much as AI can benefit cybersecurity, it also presents new challenges. Or – to be more direct – new threats.

To understand just how serious these threats are, we’ve enlisted the help of two prominent figures in the cybersecurity world – Tom Vazdar and Venicia Solomons. Tom is the chair of the Master’s Degree in Enterprise Cybersecurity program at the Open Institute of Technology (OPIT). Venicia, better known as the “Cyber Queen,” runs a wildly successful cybersecurity community that aims to empower women to succeed in the industry.

Together, they held a master class titled “Cyber Threat Landscape 2024: Navigating New Risks.” In this article, you get the chance to hear all about the double-edged sword that is AI in cybersecurity.

How Can Organizations Benefit From Using AI in Cybersecurity?

As with any new invention, AI has primarily been developed to benefit people – in its case, mainly by enhancing efficiency, accuracy, and automation in tasks that would be challenging or impossible for people to perform alone.

However, as AI technology evolves, its potential for both positive and negative impacts becomes more apparent.

But just because the ugly side of AI has started to rear its head more dramatically, it doesn’t mean we should abandon the technology altogether. The key, according to Venicia, is in finding a balance. And according to Tom, this balance lies in treating AI the same way you would cybersecurity in general.

Keep reading to learn what this means.

Implement a Governance Framework

In cybersecurity, there is a governance framework known as the ISO/IEC 27000 series, whose goal is to provide a systematic approach to managing sensitive company information, ensuring it remains secure. A similar framework has recently been created for AI – ISO/IEC 42001.

Now, the trouble lies in the fact that many organizations “don’t even have cybersecurity, not to speak artificial intelligence,” as Tom puts it. But the truth is that they need both if they want any chance of managing the risks and complexities associated with AI technology and, in turn, reaping only its benefits.

Implement an Oversight Mechanism

Fearing the risks of AI in cybersecurity, many organizations have chosen to ban the technology outright within their operations. But by doing so, they also miss out on the significant benefits AI can offer in enhancing cybersecurity defenses.

So, an all-out ban on AI isn’t a solution. A well-thought-out oversight mechanism is.

According to Tom, this control framework should dictate how and when an organization uses cybersecurity and AI, and when these two fields are to come into contact. It should also answer the questions of how an organization governs AI and how it ensures transparency.

With both of these frameworks (governance and oversight), it’s not enough to simply implement new mechanisms. Employees should also be educated and regularly trained to uphold the principles outlined in these frameworks.

Control the AI (Not the Other Way Around!)

When it comes to relying on AI, one principle should be every organization’s guiding light. Control the AI; don’t let the AI control you.

Of course, this includes controlling how the company’s employees use AI when interacting with client data, business secrets, and other sensitive information.

Now, the thing is – people don’t like to be controlled.

But without control, things can go off the rails pretty quickly.

Tom gives just one example of this. In 2022, an improperly trained (and controlled) chatbot gave an Air Canada customer inaccurate information and a non-existent discount. As a result, the customer bought a full-price ticket. A lawsuit ensued, and in 2024, the court ruled in the customer’s favor, ordering Air Canada to pay compensation.

This case alone illustrates one thing perfectly – you must have your AI systems under control. Tom hypothesizes that the system was probably affordable and easy to implement, but it eventually cost Air Canada dearly in terms of financial and reputational damage.

How Can Organizations Protect Themselves Against AI-Driven Cyberthreats?

With well-thought-out measures in place, organizations can reap the full benefits of AI in cybersecurity without worrying about the threats. But this doesn’t make the threats disappear. Even worse, these threats are only going to get better at outsmarting the organization’s defenses.

So, what can organizations do about these threats?

Here’s what Tom and Venicia suggest.

Fight Fire With Fire

So, AI is potentially being used to attack your organization’s security systems? If so, use AI to defend them. Implement your own AI-enhanced threat detection systems.

But beware – this isn’t a one-and-done solution. Tom emphasizes the importance of staying current with the latest cybersecurity threats. More importantly – make sure your systems are up to date with them.

Also, never rely on a single control system. According to our experts, “layered security measures” are the way to go.
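
To make this concrete, here is a minimal, illustrative sketch of what one such layer might look like: an anomaly-based detector trained on login telemetry with scikit-learn’s IsolationForest. The feature set, numbers, and thresholds are assumptions made purely for illustration – a sketch, not a production design or anything our experts prescribe.

```python
# A minimal sketch of one layer in an AI-assisted defense: anomaly detection
# on login telemetry. Feature choices and the contamination rate are
# illustrative assumptions, not a production configuration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per login event: hour of day, failed attempts,
# bytes transferred, and distinct source IPs seen for the account.
normal_logins = np.column_stack([
    rng.normal(13, 3, 500),         # hour of day
    rng.poisson(1, 500),            # failed attempts
    rng.normal(5_000, 1_500, 500),  # bytes transferred
    rng.poisson(1, 500),            # distinct source IPs
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_logins)

# Score a suspicious-looking event: 3 a.m., many failures, unusual volume.
suspicious = np.array([[3, 12, 40_000, 6]])
print(detector.predict(suspicious))  # -1 flags an anomaly, 1 looks normal
```

In a layered setup, a detector like this would sit alongside signature-based tools, access controls, and human review – not replace them.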

Never Stop Learning (and Training)

When it comes to AI in cybersecurity, continuous learning and training are of utmost importance – learning for your employees and training for the AI models. It’s the only way to ensure all system aspects function properly and your employees know how to use each and every one of them.

This approach should also alleviate one of the biggest concerns regarding an increasing AI implementation. Namely, employees fear that they will lose their jobs due to AI. But the truth is, the AI systems need them just as much as they need those systems.

As Tom puts it, “You need to train the AI system so it can protect you.”

That’s why studying to be a cybersecurity professional is a smart career move.

However, you’ll want to find a program that understands the importance of AI in cybersecurity and equips you to handle it properly. Enroll in OPIT’s Enterprise Cybersecurity Master’s program, and that’s exactly what you’ll get.

Join the Bigger Fight

When it comes to cybersecurity, transparency is key. If organizations fail to report cybersecurity incidents promptly and accurately, they not only jeopardize their own security but also that of other organizations and individuals. Transparency builds trust and allows for collaboration in addressing cybersecurity threats collectively.

So, our experts urge you to engage in information sharing and collaborative efforts with other organizations, industry groups, and governmental bodies to stay ahead of threats.

How Has AI Impacted Data Protection and Privacy?

Among the challenges presented by AI, one stands out the most – the potential impact on data privacy and protection. Why? Because there’s a growing fear that personal data might be used to train large AI models.

That’s why European policymakers sprang into action and approved the Artificial Intelligence Act in March 2024.

This regulation, adopted by the European Parliament, aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI. The act is akin to the well-known General Data Protection Regulation (GDPR) passed in 2016 but targets the use of AI exclusively. The good news for those fearful of AI’s potential negative impact is that every requirement imposed by the act is backed by heavy penalties.

But how can organizations assure customers, clients, and partners that their data is fully protected?

According to our experts, the answer is simple – transparency, transparency, and some more transparency!

Any AI system an organization employs must be designed in a way that doesn’t jeopardize anyone’s privacy and freedom. However, it’s not enough to just design the system in such a way. You must also ensure all stakeholders understand this design and how the system operates. This includes providing clear information about the data being collected, how it’s used, and the measures in place to protect it.

Beyond their immediate group of stakeholders, organizations must also ensure that their data isn’t manipulated or used against people. Tom gives an example of what must be avoided at all costs. Let’s say a client applies for a loan at a financial institution. Under no circumstances should that institution use AI to track the client’s personal data and use it against them, resulting in the loan being denied. This hypothetical scenario is a clear violation of privacy and trust.

And according to Tom, “privacy is more important than ever.” The same goes for internal ethical standards organizations must develop.

Keeping Up With Cybersecurity

Like most revolutions, AI has come in fast and left many people (and organizations) scrambling to keep up. However, those who recognize that AI isn’t going anywhere have taken steps to embrace it and fully benefit from it. They see AI for what it truly is – a fundamental shift in how we approach technology and cybersecurity.

Those individuals have also chosen to advance their knowledge in the field by completing highly specialized and comprehensive programs like OPIT’s Enterprise Cybersecurity Master’s program. Fittingly, this is also the program where you get to hear more valuable insights from Tom Vazdar, who developed it himself.

Related posts

IE University: How Corporate Purpose Drives Success in the AI Era
OPIT - Open Institute of Technology
Oct 17, 2024 7 min read

By Francesco Derchi

Purpose is a strategic tool for driving innovation, building competitive advantage, and addressing AI challenges, writes Francesco Derchi.

Since the early 2000s, technology has dominated discussions among scholars and professionals about global development and economic trends. The first wave of research on the internet’s impact on firms and society focused on the enabling potential of new technologies. The concept of the “digital revolution,” as popularized by Nicholas Negroponte, became the new paradigm for broader considerations about the development of the firm’s macro environment and how businesses could leverage it as an asset for creating competitive advantage. The following wave focused on the convergence of different technologies, such as in manufacturing, and included the dynamics of coexistence between humans and machines. On the management side, the major challenges relate to defining effective digital transformation practices that can help organizations migrate to and exploit this new paradigm.

The current technological focus builds on these previous trends, particularly artificial intelligence and, more recently, the emergence of generative AI. The Age of AI is characterized by technology’s power to reshape business and society on a variety of levels. While AI’s pervasive impact is not new for firms, the mainstream adoption of ChatGPT for business purposes, and the response to this ready adoption from big tech players like Microsoft, Google, and more recently Apple, show how AI is reshaping and influencing companies’ strategic priorities.

From a research perspective, AI’s societal impact is inspiring new studies in the field of ethics. Luciano Floridi, now of Yale University, has identified several challenges for AI, ranging from those of global magnitude, like its environmental impact, to security-related concerns, including intellectual property, privacy, transparency, and accountability. In his work, Floridi underlines the importance of philosophy in defining problems and designing solutions – but it is equally important to consider how these challenges can be addressed at the firm level. What are the tools for managers?

Part of the answer may lie in the recent and growing focus of management studies on “corporate purpose” and “brand purpose.” This trend represents an important attempt to deepen our understanding of “why to act” (purpose framing) and “how to act” (purpose formalizing and internalizing), while technology management studies address the “what to act” (purpose impacting) question. Furthermore, studies show that corporate purpose is critical both for digital-native firms and for traditional companies undergoing a digital transformation, serving as an important growth engine through purpose-driven innovation. It is therefore fair to ask: can purpose help address any of the AI challenges previously mentioned?

Purpose concepts are not exclusively “cause-related” like CSR and environmental impact. Other types have emerged, such as “competence” (the function of the product) and “culture” (the intent that drives the business). This broadens the consideration of impact types that can help address specific challenges in the age of AI.

Purpose-driven organizations are not new. Take Tesla’s direction “to accelerate the world’s transition to sustainable energy” – it explicitly addresses environmental challenges while defining a business direction that requires constant innovation and leverages multiple converging technologies. The key is to have the purpose formalized and internalized within the company as a concrete drive for growth.

Due to its characteristics, the Massive Transformative Purpose (MTP) plays a key role in digital transformation. This necessarily ambitious, long-term vision or goal requires firms, particularly those focused on exponential growth, to address emerging accelerating technologies with a purpose-first transformation logic. P&G’s Global Business Services division was able to improve market leadership and gain a competitive advantage over various start-ups and potential disruptors through its “Free up the employee, for free” MTP. This served as a north star for every employee, encouraging them to contribute ideas and best practices to overcome bulky processes and limitations.

My research on MTPs in AI-era firms explores their role in driving innovation to address specific challenges. Results show that the MTP impacts the organization across four dimensions, requiring commitment and synergy from management. Let’s consider these four dimensions by looking at Airbnb:

  1. Internal Impact: The MTP acts as the organization’s genetic code and guiding philosophy. It is key for leveraging employee motivation, with a strong relationship between purpose, organizational culture, and firm values. Airbnb’s culture of belonging highlights this, with its various purpose-shaping practices, starting with culture-fit interviews delivered during the recruitment process.
  2. Brand and Market Influence: The MTP contributes directly to building a strong brand and influencing the market. It allows firms to extend beyond functional and symbolic benefits to make the impact of the company on society visible. This involves addressing market demand coherently and consistently. Airbnb’s “Bélo” symbol visually represents this concept of belonging while their MTP features in campaigns like “Wall and Chain: A Story of Breaking Down Walls.”
  3. Competitive Advantage and Growth: The MTP drives innovation and can lead to superior stock market performance. In digital firms, it’s key to the creation of ecosystems that aggregate leveraged assets and third parties for value creation. Airbnb’s “belong anywhere transformation journey” is a strategic initiative that formalized and internalized the MTP through various touchpoints for all the different ecosystem members. As Leigh Gallagher details in her 2016 Fortune feature about the company, “When travellers leave their homes, they feel alone. They reach their Airbnb, and they feel accepted and taken care of by their host. They then feel safe to be the same kind of person they are when they’re at home.”
  4. Core Organization Identity: The MTP is considered part of the core dimension of the organization. More than a goal or business strategy, it is a strategic issue that generates a sense of direction and purpose that affects every part of the organization: internal, external, personality, and expression. This dimension also involves the role of the founder(s) and their personality in shaping the business. At Airbnb, the MTP is often used as a shortcut to explain the firm’s mission and vision. The founders’ approach is pragmatic: instead of debating differences, time should be spent on execution. At the same time, the personalities of the three founders – Chesky, Gebbia, and Blecharczyk – are the identity of the firm. They were the platform’s first hosts, and their credibility is key to making Airbnb a trustworthy and coherent proposal in a crowded market.

Executives and business leaders in the current AI era should embrace three key principles. Be true: purpose is an essential strategic tool that enables firms to identify and connect with their original selves, decoding their reason for being and embedding it into their identity. Be ambitious: the MTP allows for global impact, confronting major challenges by synthesizing business values and guiding innovation paths to address AI-related issues. Be generous: purpose allows firms to explicitly address environmental and social issues, taking action on values-based challenges such as transparency, respect for intellectual property, and accountability. By following these principles, organizations and their leaders can maintain their direction and continue to advance in the AI era.

Zorina Alliata Of Open Institute of Technology On Five Things You Need To Create A Highly Successful Career In The AI Industry
OPIT - Open Institute of Technology
Sep 19, 2024 13 min read

Gaining hands-on experience through projects, internships, and collaborations is vital for understanding how to apply AI in various industries and domains. Use Kaggle or get a free cloud account and start experimenting. You will have projects to discuss at your next interviews.

By David Leichner, CMO at Cybellum


Artificial Intelligence is now the leading edge of technology, driving unprecedented advancements across sectors. From healthcare to finance, education to the environment, the AI industry is witnessing a skyrocketing demand for professionals. However, the path to creating a successful career in AI is multifaceted and constantly evolving. What does it take, and what does one need, to create a highly successful career in AI?

In this interview series, we are talking to successful AI professionals, AI founders, AI CEOs, educators in the field, AI researchers, HR managers in tech companies, and anyone who holds authority in the realm of Artificial Intelligence to inspire and guide those who are eager to embark on this exciting career path.

As part of this series, we had the pleasure of interviewing Zorina Alliata.

Zorina Alliata is an expert in AI, with over 20 years of experience in tech and over 10 years in AI itself. As an educator, she is passionate about learning, access to education, and creating the career you want. She implores us to learn more about ethics in AI, and not to fear AI but to embrace it.

Thank you so much for joining us in this interview series! Before we dive in, our readers would like to learn a bit about your origin story. Can you share with us a bit about your childhood and how you grew up?

I was born in Romania and grew up during communism, a very dark period in our history. I was a curious child, and my parents, both teachers, encouraged me to learn new things all the time. Unfortunately, under communism, there was not a lot to do for a kid who wanted to learn: there was no TV, very few books – and only ones approved by the state – and generally very few activities outside of school. Being an “intellectual” was a bad thing in the eyes of the government. They preferred people who did not read or think too much. I found great relief in writing; I have been writing stories and poetry since I was about ten years old. My first poem was published in a national literature magazine when I was 16.

Can you share with us the ‘backstory’ of how you decided to pursue a career path in AI?

I studied Computer Science at university. By then, communism had fallen, and we had actually received brand-new PCs at the university and learned several programming languages. The last year, the fifth year of study, was equivalent to a Master’s degree and was spent preparing your thesis. That’s when I learned about neural networks. We had a tiny, 5-node neural network, and we spent the year trying to teach it to recognize the written letter “A”.

We had only a few computers in the lab running Windows NT, so the technology really was not there for such an ambitious project. We did not achieve a lot that year, but I was fascinated by the idea of a neural network learning by itself, without any programming. When I graduated, there were no jobs in AI at all – it was what we now call “the AI winter.” So I went and worked as a programmer, then moved into management and project management. You can imagine my happiness when, about ten years ago, AI came back to life in the form of Machine Learning (ML).

I immediately went and took every class possible to learn about it. I spent that Christmas holiday coding. The paradigm had changed from when I was in college, when we were trying to replicate the entire human brain. ML was focused on solving one specific problem, optimizing one specific output, and that’s where businesses everywhere saw a benefit. I then joined a Data Science team at GEICO, moved to Capital One as a Delivery lead for their Center for Machine Learning, and then went to Amazon in their AI/ML team.

Can you tell our readers about the most interesting projects you are working on now?

While I can’t discuss work projects due to confidentiality, there are some things I can mention! In the last five years, I worked with global companies to establish an AI strategy and to introduce AI and ML into their organizations. Some of my customers included large farming associations, who used ML to predict when to plant their crops for optimal results; water management companies, who used ML for predictive maintenance of their underground pipes; construction companies, which used AI for visual inspections of their buildings and to identify any possible defects; and hospitals, which used digital twin technology to improve patient outcomes and health. It is amazing to see how much AI and ML are already part of our everyday lives, and to recognize some of it in the mundane around us.

None of us are able to achieve success without some help along the way. Is there a particular person who you are grateful for who helped get you to where you are? Can you share a story about that?

When you are young, there are so many people who step up and help you along the way. I have had great luck with several professors who have encouraged me in school, and an uncle who worked in computers who would take me to his office and let me play around with his machines. I now try to give back and mentor several young people, especially women who are trying to get into the field. I volunteer with AnitaB and Zonta, as well as taking on mentees where I work.

As with any career path, the AI industry comes with its own set of challenges. Could you elaborate on some of the significant challenges you faced in your AI career and how you managed to overcome them?

I think one major challenge in AI is the speed of change. I remember that after spending my Christmas holiday learning and coding in R, I joined the Data Science team at GEICO and realized the world had moved on – everyone was now coding in Python. So, I had to learn Python very fast in order to understand what was going on.

It’s the same with research – I try to work on one subject, and four new papers that move the goalposts are published every week. It is very challenging to keep up, but you just have to adapt, continuously learn, and let go of what becomes obsolete.

Ok, let’s now move to the main part of our interview about AI. What are the 3 things that most excite you about the AI industry now? Why?

1. Creativity

Generative AI brought us the ability to create amazing images based on simple text descriptions. Entire videos are now possible, and soon, maybe entire movies. I have been working in AI for several years, and I never thought creative jobs would be the first ones AI could take on. I am amazed at the capacity of an algorithm to create images, and at the artificial creativity we now see for the first time.

2. Abstraction

I think with the success and immediate mainstream adoption of Generative AI, we saw the great appetite out there for automation and abstraction. No one wants to do boring work like summarizing documents; no one wants to read long websites, they just want the gist. If I drive a car, I don’t need to know how the engine works or every equation the engineers used to build it – I just want my car to drive. The same level of abstraction is now expected in AI. There is a lot of opportunity here in creating these abstractions for the future.

3. Opportunity

I like that we are at the beginning of AI, so there is a lot of opportunity to jump in. Most people who are passionate about it can learn all about AI fully online, in places like the Open Institute of Technology. Or they can get experience working on small projects and then apply for jobs. It is great because it gives people access to good jobs and stability in the future.

What are the 3 things that concern you about the AI industry? Why? What should be done to address and alleviate those concerns?

1. Fairness

The large companies that build LLMs put a lot of energy and money into making them fair. But it is not easy. We, as humans, are often not fair ourselves. We even have problems agreeing on what fairness means. So, how can we teach the machines to be fair? I think the responsibility stays with us. We can’t simply say “AI did this bad thing.”

2. Regulation

There are some regulations popping up, but most are not coordinated or discussed widely. There is controversy, such as around the new California bill SB 1047, where scientists take different sides of the debate. We need to find better ways to regulate the use and creation of AI, working together as a society, not just in small groups of politicians.

3. Awareness

I wish everyone understood the basics of AI. There is denial, fear, and hatred created by doomsday misinformation. I wish AI were taught from a young age, through appropriate means, so everyone gets the fundamental principles and understands how to use this great tool in their lives.

For a young person who would like to eventually make a career in AI, which skills and subjects do they need to learn?

I think maybe the right question is: what are you passionate about? Do that, and see how you can use AI to make your job better and more exciting! I think AI will work alongside people in most jobs, as it develops and matures.

But for those who are looking to work in AI, there is a variety of roles to choose from as well. We have technical roles like data scientist or machine learning engineer, which require very specialized knowledge and degrees. People in these roles learn computing, software engineering, programming, data analysis, and data engineering. There are also business roles, for people who understand the technology well but are not writing code. Instead, they define strategies, design solutions for companies, or write implementation plans for AI products and services. There is also a robust AI research domain, where lots of scientists are measuring and analyzing new technology developments.

With Generative AI, new roles have appeared, such as Prompt Engineer. We can now talk with the machines in natural language, so speaking good English is all that’s required to have the right conversation.

With these many possible roles, I think if you work in AI, some basic subjects where you can start are:

  1. Analytics — understand data and how it is stored and governed, and how we get insights from it.
  2. Logic — understand both mathematical and philosophical logic.
  3. Fundamentals of AI — read about the history and philosophy of AI, models of thinking, and major developments.

As you know, there are not that many women in the AI industry. Can you advise what is needed to engage more women in the AI industry?

Engaging more women in the AI industry is absolutely crucial if you want to build any successful AI products. In my twenty-year career, I have seen changes in the tech industry to address this gender discrepancy. For example, we do well in school with STEM programs and similar efforts that encourage girls to code. We have also created mentorship organizations such as AnitaB.org, which allow women to connect and collaborate. One place where I think we still lag behind is the workplace. When I came to the US in my twenties, I was the only woman programmer on my team. Now, I see more women at work, but still not enough. We say we create inclusive work environments, but we still have a long way to go to encourage more women to stay in tech. Policies that support flexible hours and parental leave are necessary, as are other adjustments that account for the different lives women have compared to men. Bias training and challenging stereotypes are also necessary, and many times these are implemented shoddily in organizations.

Ethical AI development is a pressing concern in the industry. How do you approach the ethical implications of AI, and what steps do you believe individuals and organizations should take to ensure responsible and fair AI practices?

Machine Learning and AI learn from data. Unfortunately, a lot of our historical data shows strong biases. For example, for a long time, it was perfectly legal to only offer mortgages to white people. The data shows that. If we use this data to train a new model to enhance the mortgage application process, then the model will learn that mortgages should only be offered to white people. That is a bias we had in the past, but one we do not want to learn and amplify in the future.

Generative AI has introduced a fresh set of risks, the most famous being “hallucinations.” Generative AI creates new content based on chunks of text it finds in its training data, without an understanding of what that content means. It could repeat something it learned from one Reddit user ten years ago that is factually incorrect. Is that piece of information unbiased and fair?

There are many ways we fight for fairness in AI. There are technical tools we can use to offer interpretability and explainability of the actual models used. There are business constraints we can create, such as guardrails or knowledge bases, with which we can steer the AI towards ethical answers. We also advise anyone who builds AI to use a diverse team of builders. If you look around the table and see the same type of guys who went to the same schools, you will get exactly one original idea from them. If you add different genders, ages, tenures, and backgrounds, you will get ten innovative ideas for your product, and you will have addressed biases you’ve never even thought of.
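
To make the kind of technical check Zorina describes a little more tangible, here is a minimal, hypothetical sketch of a disparate-impact test on made-up loan decisions. The data, column names, and the 0.8 threshold (the common “four-fifths rule” heuristic) are illustrative assumptions, not part of the interview or any specific organization’s practice.

```python
# Illustrative fairness check on hypothetical loan decisions: compare
# approval rates across groups and compute a disparate-impact ratio.
# The data and column names are made up; 0.8 is the common
# "four-fifths rule" heuristic, not a legal standard.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [ 1,   1,   1,   0,   1,   0,   0,   0 ],
})

rates = decisions.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias: review the model and its training data.")
```

Checks like this complement, rather than replace, the interpretability tools, guardrails, and diverse teams mentioned above.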
