Zorina Alliata Of Open Institute of Technology On Five Things You Need To Create A Highly Successful Career In The AI Industry

Gaining hands-on experience through projects, internships, and collaborations is vital for understanding how to apply AI in various industries and domains. Use Kaggle or get a free cloud account and start experimenting. You will have projects to discuss at your next interviews.

By David Leichner, CMO at Cybellum

14 min read

Artificial Intelligence is now the leading edge of technology, driving unprecedented advancements across sectors. From healthcare to finance, education to the environment, the AI industry is witnessing skyrocketing demand for professionals. However, the path to a successful career in AI is multifaceted and constantly evolving. What does it take to create a highly successful career in AI?

In this interview series, we are talking to successful AI professionals, AI founders, AI CEOs, educators in the field, AI researchers, HR managers in tech companies, and anyone who holds authority in the realm of Artificial Intelligence to inspire and guide those who are eager to embark on this exciting career path.

As part of this series, we had the pleasure of interviewing Zorina Alliata.

Zorina Alliata is an expert in AI, with over 20 years of experience in tech and over 10 years in AI itself. As an educator, she is passionate about learning, access to education, and creating the career you want. She urges us to learn more about ethics in AI, and to embrace AI rather than fear it.

Thank you so much for joining us in this interview series! Before we dive in, our readers would like to learn a bit about your origin story. Can you share with us a bit about your childhood and how you grew up?

I was born in Romania, and grew up during communism, a very dark period in our history. I was a curious child and my parents, both teachers, encouraged me to learn new things all the time. Unfortunately, under communism, there was not a lot to do for a kid who wanted to learn: there was no TV, very few books and only ones approved by the state, and generally very few activities outside of school. Being an “intellectual” was a bad thing in the eyes of the government. They preferred people who did not read or think too much. I found great relief in writing; I have been writing stories and poetry since I was about ten years old. My first poem was published when I was 16, in a national literature magazine.

Can you share with us the ‘backstory’ of how you decided to pursue a career path in AI?

I studied Computer Science at university. By then, communism had fallen and we had actually received brand-new PCs at the university, and learned several programming languages. The last year, the fifth year of study, was equivalent to a Master’s degree, and was spent preparing your thesis. That’s when I learned about neural networks. We had a tiny, 5-node neural network and we spent the year trying to teach it to recognize the written letter “A”.

We had only a few computers in the lab running Windows NT, so really the technology was not there for such an ambitious project. We did not achieve a lot that year, but I was fascinated by the idea of a neural network learning by itself, without any programming. When I graduated, there were no jobs in AI at all, it was what we now call “the AI winter”. So I went and worked as a programmer, then moved into management and project management. You can imagine my happiness when, about ten years ago, AI came back to life in the form of Machine Learning (ML).
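The kind of experiment described above can be sketched in a few lines today. This is a minimal, modern illustration of a single perceptron learning to recognize a letter drawn on a 3x3 pixel grid; the letter bitmaps and training loop here are invented for illustration, not the original university project.

```python
# 3x3 bitmaps flattened to 9 inputs: a crude "A" vs. other crude shapes.
LETTER_A = [0,1,0, 1,1,1, 1,0,1]
NOT_A = [
    [1,1,1, 1,0,0, 1,1,1],  # crude "C"
    [1,1,1, 0,1,0, 0,1,0],  # crude "T"
    [1,1,1, 1,0,1, 1,1,1],  # crude "O"
]

def predict(weights, bias, pixels):
    """Fire (1) if the weighted pixel sum crosses the threshold."""
    s = sum(w * x for w, x in zip(weights, pixels)) + bias
    return 1 if s > 0 else 0

def train(samples, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge the weights toward each mistake."""
    weights, bias = [0.0] * 9, 0.0
    for _ in range(epochs):
        for pixels, label in samples:
            error = label - predict(weights, bias, pixels)
            if error:
                weights = [w + lr * error * x for w, x in zip(weights, pixels)]
                bias += lr * error
    return weights, bias

samples = [(LETTER_A, 1)] + [(pattern, 0) for pattern in NOT_A]
weights, bias = train(samples)
print(predict(weights, bias, LETTER_A))  # prints 1: the "A" is recognized
```

The network "learns by itself" exactly in the sense described: no rule for what an "A" looks like is programmed in, only a weight-update rule driven by mistakes.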

I immediately went and took every class possible to learn about it. I spent that Christmas holiday coding. The paradigm had changed from when I was in college, when we were trying to replicate the entire human brain. ML was focused on solving one specific problem, optimizing one specific output, and that’s where businesses everywhere saw a benefit. I then joined a Data Science team at GEICO, moved to Capital One as a Delivery lead for their Center for Machine Learning, and then went to Amazon in their AI/ML team.

Can you tell our readers about the most interesting projects you are working on now?

While I can’t discuss work projects due to confidentiality, there are some things I can mention! In the last five years, I worked with global companies to establish an AI strategy and to introduce AI and ML in their organizations. Some of my customers included large farming associations, who used ML to predict when to plant their crops for optimal results; water management companies, who used ML for predictive maintenance of their underground pipes; construction companies, who used AI for visual inspections of their buildings to identify possible defects; and hospitals, who used Digital Twins technology to improve patient outcomes and health. It is amazing to see how much AI and ML are already part of our everyday lives, and to recognize some of it in the mundane around us.

None of us are able to achieve success without some help along the way. Is there a particular person who you are grateful for who helped get you to where you are? Can you share a story about that?

When you are young, there are so many people who step up and help you along the way. I have had great luck with several professors who have encouraged me in school, and an uncle who worked in computers who would take me to his office and let me play around with his machines. I now try to give back and mentor several young people, especially women who are trying to get into the field. I volunteer with AnitaB and Zonta, as well as taking on mentees where I work.

As with any career path, the AI industry comes with its own set of challenges. Could you elaborate on some of the significant challenges you faced in your AI career and how you managed to overcome them?

I think one major challenge in AI is the speed of change. I remember after spending my Christmas holiday learning and coding in R, when I joined the Data Science team at GEICO, I realized the world had moved on and everyone was now coding in Python. So, I had to learn Python very fast, in order to understand what was going on.

It’s the same with research: I try to work on one subject, and four new papers are published every week that move the goalposts. It is very challenging to keep up, but you just have to adapt, continuously learn, and let go of what becomes obsolete.

Ok, let’s now move to the main part of our interview about AI. What are the 3 things that most excite you about the AI industry now? Why?

1. Creativity

Generative AI brought us the ability to create amazing images based on simple text descriptions. Entire videos are now possible, and soon, maybe entire movies. I have been working in AI for several years and I never thought creative jobs would be the first to be taken on by AI. I am amazed at the capacity of an algorithm to create images, and at the artificial creativity we can now observe for the first time.

2. Abstraction

I think with the success and immediate mainstream adoption of Generative AI, we saw the great appetite out there for automation and abstraction. No one wants to do boring work like summarizing documents; no one wants to read long websites, they just want the gist. If I drive a car, I don’t need to know how the engine works and every equation the engineers used to build it; I just want my car to drive. The same level of abstraction is now expected in AI. There is a lot of opportunity here in creating these abstractions for the future.

3. Opportunity

I like that we are at the beginning of AI, so there is a lot of opportunity to jump in. Most people who are passionate about it can learn all about AI fully online, in places like Open Institute of Technology. Or they can get experience working on small projects, and then they can apply for jobs. It is great because it gives people access to good jobs and stability in the future.

What are the 3 things that concern you about the AI industry? Why? What should be done to address and alleviate those concerns?

1. Fairness

The large companies that build LLMs put a lot of energy and money into making them fair. But it is not easy. We, as humans, are often not fair ourselves. We even have problems agreeing on what fairness even means. So, how can we teach machines to be fair? I think the responsibility stays with us. We can’t simply say “AI did this bad thing.”

2. Regulation

There are some regulations popping up but most are not coordinated or discussed widely. There is controversy, such as regarding the new California bill SB1047, where scientists take different sides of the debate. We need to find better ways to regulate the use and creation of AI, working together as a society, not just in small groups of politicians.

3. Awareness

I wish everyone understood the basics of AI. There is denial, fear, and hatred created by doomsday misinformation. I wish AI were taught from a young age, through appropriate means, so everyone gets the fundamental principles and understands how to use this great tool in their lives.

For a young person who would like to eventually make a career in AI, which skills and subjects do they need to learn?

I think maybe the right question is: what are you passionate about? Do that, and see how you can use AI to make your job better and more exciting! I think AI will work alongside people in most jobs, as it develops and matures.

But for those who are looking to work in AI, there is a variety of roles to choose from. We have technical roles like data scientist or machine learning engineer, which require very specialized knowledge and degrees: computing, software engineering, programming, data analysis, data engineering. There are also business roles for people who understand the technology well but are not writing code. Instead, they define strategies, design solutions for companies, or write implementation plans for AI products and services. There is also a robust AI research domain, where many scientists are measuring and analyzing new technology developments.

With Generative AI, new roles have appeared, such as Prompt Engineer. We can now talk with machines in natural language, so speaking good English is all that’s required to have the right conversation.

With this many possible roles, I think some basic subjects where you can start, if you want to work in AI, are:

  1. Analytics — understand data and how it is stored and governed, and how we get insights from it.
  2. Logic — understand both mathematical and philosophical logic.
  3. Fundamentals of AI — read about the history and philosophy of AI, models of thinking, and major developments.

As you know, there are not that many women in the AI industry. Can you advise what is needed to engage more women in the AI industry?

Engaging more women in the AI industry is absolutely crucial if you want to build any successful AI products. In my twenty-year career, I have seen changes in the tech industry to address this gender discrepancy. For example, we do well in schools with STEM programs and similar efforts that encourage girls to code. We have also created mentorship organizations such as AnitaB.org, which allow women to connect and collaborate. One place where I think we still lag behind is the workplace. When I came to the US in my twenties, I was the only woman programmer on my team. Now, I see more women at work, but still not enough. We say we create inclusive work environments, but we still have a long way to go to encourage more women to stay in tech. Policies that support flexible hours and parental leave are necessary, along with other adjustments that account for the different lives women lead compared to men. Bias training and challenging stereotypes are also necessary, and many times these are implemented shoddily in organizations.

Ethical AI development is a pressing concern in the industry. How do you approach the ethical implications of AI, and what steps do you believe individuals and organizations should take to ensure responsible and fair AI practices?

Machine Learning and AI learn from data. Unfortunately, a lot of our historical data shows strong biases. For example, for a long time, it was perfectly legal to offer mortgages only to white applicants. The data shows that. If we use this data to train a new model to enhance the mortgage application process, then the model will learn that mortgages should only be offered to white applicants. That is a bias we had in the past, but one we do not want to learn and amplify in the future.
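The mechanism described above can be made concrete with a toy sketch. The "historical record" below is synthetic and deliberately exaggerated, and the frequency-based "model" is the simplest possible learner; the point is only that anything fit to biased data reproduces the bias.

```python
from collections import defaultdict

# Synthetic (group, approved) pairs reflecting a biased historical record:
# group_a was almost always approved, group_b almost always denied.
history = [("group_a", 1)] * 90 + [("group_a", 0)] * 10 \
        + [("group_b", 1)] * 5  + [("group_b", 0)] * 95

# "Training": estimate the approval rate per group. A model fit on this
# data has nothing to learn except these historical rates.
totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in history:
    totals[group] += 1
    approvals[group] += approved

def predict_approval(group):
    """Approve whenever the historical approval rate exceeds 50%."""
    return approvals[group] / totals[group] > 0.5

print(predict_approval("group_a"))  # True: the historical bias is learned...
print(predict_approval("group_b"))  # False: ...and reproduced going forward
```

Real models use many more features, but if group membership correlates with those features, the same amplification happens less visibly.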

Generative AI has introduced a fresh set of risks, the most famous being “hallucinations.” Generative AI creates new content based on chunks of text found in its training data, without an understanding of what the content means. It could repeat something it learned from one Reddit user ten years ago that is factually incorrect. Is that piece of information unbiased and fair?

There are many ways we fight for fairness in AI. There are technical tools we can use to offer interpretability and explainability of the actual models used. There are business constraints we can create, such as guardrails or knowledge bases, with which we can lead the AI towards ethical answers. We also advise anyone who builds AI to use a diverse team of builders. If you look around the table and you see the same type of guys who went to the same schools, you will get exactly one original idea from them. If you add different genders, ages, tenures, and backgrounds, then you will get ten innovative ideas for your product, and you will have addressed biases you’ve never even thought of.
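The "guardrail" idea mentioned above can be sketched as a policy check that runs over model output before it reaches the user. Production guardrail systems are far richer (classifiers, knowledge-base grounding, human review); the blocked topics and refusal text here are invented purely to show the shape of the pattern.

```python
# Illustrative policy: topics the assistant should decline to answer.
BLOCKED_TOPICS = {"medical diagnosis", "legal advice"}

def apply_guardrail(model_output: str) -> str:
    """Return the model output, or a safe refusal if it violates policy."""
    lowered = model_output.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return "I can't help with that topic. Please consult a professional."
    return model_output

print(apply_guardrail("The capital of France is Paris."))
print(apply_guardrail("Here is a medical diagnosis for your symptoms..."))
```

The key design point is that the check sits outside the model: the business constraint holds regardless of what the model generates.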


Related posts

Il Sole 24 Ore: Integrating Artificial Intelligence into the Enterprise – Challenges and Opportunities for CEOs and Management
OPIT - Open Institute of Technology
Apr 14, 2025 6 min read

Expert Pierluigi Casale analyzes the adoption of AI by companies, the ethical and regulatory challenges and the differentiated approach between large companies and SMEs

By Gianni Rusconi

Easier said than done: to paraphrase the well-known proverb, and to place it in the ever-growing collection of critical issues and opportunities related to artificial intelligence, the task facing CEOs and management of adequately integrating this technology into the company is indeed difficult. Pierluigi Casale, professor at OPIT (Open Institute of Technology, an academic institution founded two years ago and specialized in the field of Computer Science) and technical consultant to the European Parliament for the implementation and regulation of AI, is among those who contributed to the definition of the AI Act, providing advice on aspects of safety and civil liability. His task, in short, is to ensure that the adoption of artificial intelligence (primarily within the parliamentary committees operating in Brussels) is not only efficient, but also ethical and compliant with regulations. And, obviously, his is not an easy task.

The experience gained over the last 15 years in the field of machine learning, and the roles played in organizations such as Europol and in leading technology companies, are what Casale brings to the table to balance the needs of EU bodies against the pressure exerted by American Big Tech and to preserve an independent approach to the regulation of artificial intelligence. A technology, it is worth remembering, that demands broad and diversified knowledge, ranging from the regulatory and application spectrum to geopolitical issues, from computational limitations (common to European companies and public institutions) to the challenges of training large language models.

CEOs and AI

When we specifically asked how CEOs and C-suites are “digesting” AI in terms of ethics, safety and responsibility, Casale did not shy away, framing the topic based on his own professional career. “I have noticed two trends in particular: the first concerns companies that started using artificial intelligence before the AI Act and that today have the need, as well as the obligation, to adapt to the new ethical framework to be compliant and avoid sanctions; the second concerns companies, like the Italian ones, that are only now approaching this topic, often in terms of experimental and incomplete projects (the expression used literally is “proof of concept”, ed.) and without these having produced value. In this case, the ethical and regulatory component is integrated into the adoption process.”

In general, according to Casale, there is still a lot to do even from a purely regulatory perspective, because there is not total coherence of vision among the different countries, nor the same speed in implementing the indications. Spain, in this regard, is setting an example, having established (with a royal decree of 8 November 2023) a dedicated “sandbox”, i.e. a regulatory experimentation space for artificial intelligence: a controlled test environment for the development and pre-marketing phases of certain AI systems, intended to verify compliance with the requirements and obligations set out in the AI Act and to guide companies towards a path of regulated adoption of the technology.

The Lucky Future: How AI Aims to Change Everything
OPIT - Open Institute of Technology
Apr 10, 2025 7 min read

There is no question that the spread of artificial intelligence (AI) is having a profound impact on nearly every aspect of our lives.

But is an AI-powered future one to be feared, or does AI offer the promise of a “lucky future”?

That “lucky future” prediction comes from Zorina Alliata, Principal AI Strategist at Amazon and AI faculty member at Georgetown University and the Open Institute of Technology (OPIT), in her recent webinar “The Lucky Future: How AI Aims to Change Everything” (February 18, 2025).

However, according to Alliata, such a future depends on how the technology develops and whether strategies can be implemented to mitigate the risks.

How AI Aims to Change Everything

For many people, AI is already changing the way they work. However, more broadly, AI has profoundly impacted how we consume information.

From the curation of a social media feed and the summary answer to a search query from Gemini at the top of your Google results page to the AI-powered chatbot that resolves your customer service issues, AI has quickly and quietly infiltrated nearly every aspect of our lives in the past few years.

While there have been significant concerns recently about the possibly negative impact of AI, Alliata’s “lucky future” prediction takes these fears into account. As she detailed in her webinar, a future with AI will have to take into consideration:

  • Where we are currently with AI and future trajectories
  • The impact AI is having on the job landscape
  • Sustainability concerns and ethical dilemmas
  • The fundamental risks associated with current AI technology

According to Alliata, by addressing these risks, we can craft a future in which AI helps individuals better align their needs with potential opportunities and limitations of the new technology.

Industry Applications of AI

While AI has been in development for decades, Alliata describes a period known as the “AI winter,” during which educators like herself studied AI technology but had not yet arrived at practical applications. Concerns over how to make AI profitable also contributed to this period of uncertainty.

That all changed about 10-15 years ago when machine learning (ML) improved significantly. This development led to a surge in the creation of business applications for AI. Beginning with automation and robotics for repetitive tasks, the technology progressed to data analysis – taking a deep dive into data and finding not only new information but new opportunities as well.

This further developed into generative AI capable of completing creative tasks. Generative AI now produces around one billion words per day, compared to the one trillion produced by humans.

We are now at the stage where AI can complete complex tasks involving multiple steps. In her webinar, Alliata gave the example of a team creating storyboards and user pathways for a new app they wanted to develop. Using photos and rough images, they were able to use AI to generate the code for the app, saving hundreds of hours of manpower.

The next step in AI evolution is Artificial General Intelligence (AGI), an extremely autonomous level of AI that can replicate or in some cases exceed human intelligence. While the benefits of such technology may readily be obvious to some, the industry itself is divided as to not only whether this form of AI is close at hand or simply unachievable with current tools and technology, but also whether it should be developed at all.

This unpredictability, according to Alliata, represents both the excitement and the concerns about AI.

The AI Revolution and the Job Market

According to Alliata, the job market is the next area where the AI revolution can profoundly impact our lives.

To date, the AI revolution has not resulted in the widespread layoffs initially feared. Instead of making employees redundant, many jobs have evolved so that people work alongside AI. In fact, AI has also created new jobs, such as AI prompt writer.

However, the prediction is that as AI becomes more sophisticated, it will need less human support, resulting in greater job churn. Alliata shared statistics from various studies predicting that as many as 27% of all jobs are at high risk of becoming redundant due to AI, and that 40% of working hours will be impacted by large language models (LLMs) like ChatGPT.

Furthermore, AI may impact some roles and industries more than others. For example, one study suggests that in high-income countries, 8.5% of jobs held by women were likely to be impacted by potential automation, compared to just 3.9% of jobs held by men.

Is AI Sustainable?

While Alliata shared the many ways in which AI can potentially save businesses time and money, she also highlighted that it is an expensive technology in terms of sustainability.

Conducting AI training and processing puts a heavy strain on processors, requiring a great deal of energy. According to estimates, GPT-3 alone uses as much electricity per day as 121 U.S. households use in an entire year. Gartner predicts that by 2030, AI could consume 3.5% of the world’s electricity.

To reduce the energy requirements, Alliata highlighted potential paths forward in terms of hardware optimization, such as more energy-efficient chips, greater use of renewable energy sources, and algorithm optimization. For example, models that can be applied to a variety of uses based on prompt engineering and parameter-efficient tuning are more energy-efficient than training models from scratch.

Risks of Using Generative AI

While Alliata is clearly an advocate for the benefits of AI, she also highlighted the risks associated with using generative AI, particularly LLMs.

  • Uncertainty – While we rely on AI for answers, we aren’t always sure that the answers provided are accurate.
  • Hallucinations – Technology designed to answer questions can make up facts when it does not know the answer.
  • Copyright – LLMs are often trained on copyrighted data without permission from the creators.
  • Bias – Biased data often trains LLMs, and that bias becomes part of the LLM’s programming and production.
  • Vulnerability – Users can bypass the original functionality of an LLM and use it for a different purpose.
  • Ethical Risks – AI applications pose significant ethical risks, including the creation of deepfakes, the erosion of human creativity, and the aforementioned risks of unemployment.

Mitigating these risks relies on pillars of responsibility for using AI, including value alignment of the application, accountability, transparency, and explainability.

The last one, according to Alliata, is vital on a human level. Imagine you work for a bank using AI to assess loan applications. If a loan is denied, the explanation you give to the customer can’t simply be “Because the AI said so.” There needs to be firm and explainable data behind the reasoning.
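The loan example above can be illustrated with a transparent scoring model: when the model is a simple weighted sum, each feature's contribution to the decision can be reported directly. The feature names, weights, and threshold below are invented for illustration; real credit models and explainability tooling are far more involved.

```python
# Invented weights for a toy linear loan-scoring model.
WEIGHTS = {"income": 0.5, "debt_ratio": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0  # approve when the score clears this bar

def explain_decision(applicant):
    """Return the decision plus each feature's contribution to the score."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return score >= THRESHOLD, contributions

approved, contributions = explain_decision(
    {"income": 1.2, "debt_ratio": 1.5, "years_employed": 2.0}
)
# score = 0.6 - 1.2 + 0.6 = 0.0, below the threshold: denied.
# The contribution breakdown lets the bank say *why*: debt_ratio
# pulled the score down the most, not "because the AI said so."
print(approved, min(contributions, key=contributions.get))
```

For opaque models, techniques in the same spirit (attributing the output to input features) provide the "firm and explainable data" the passage calls for.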

OPIT’s Master’s in Responsible Artificial Intelligence explores these risks and the responsibilities inherent in AI, among other topics.

A Lucky Future

Despite the potential risks, Alliata concludes that AI presents even more opportunities and solutions in the future.

Information overload and decision fatigue are major challenges today. Imagine you want to buy a new car. You have a dozen features you desire, alongside hundreds of options, as well as thousands of websites containing the relevant information. AI can help you cut through the noise and narrow the information down to what you need based on your specific requirements.

Alliata also shared how AI is changing healthcare, allowing patients to understand their health data, make informed choices, and find healthcare professionals who meet their needs.

It is this functionality that can lead to the “lucky future.” Personalized guidance based on an analysis of vast amounts of data means that each person is more likely to make the right decision with the right information at the right time.
