The Magazine

Data Science & AI

Dive deep into data-driven technologies: Machine Learning, Reinforcement Learning, Data Mining, Big Data, NLP & more. Stay updated.

The Educator: OPIT – Open Institute of Technology launches AI agent to support students and staff
OPIT - Open Institute of Technology
July 03, 2025

OPIT – Open Institute of Technology, a global online educational institution, has launched its very own AI agent: OPIT AI Copilot. The institution is amongst the first in Europe to introduce a custom AI assistant for students and faculty.

Developed by an in-house team of faculty, engineers, and researchers, OPIT AI Copilot has been trained on OPIT’s entire educational archive developed over the past three years, including 131 courses, around 3,500 hours of video content, and 320 certified assessments, amongst other content.

As a result, OPIT AI Copilot can provide responses that adapt in real time to the student’s progress, offering direct links to referenced sources within the virtual learning environment.

It can also “see” exactly where the student is in their course modules, avoids revealing information from unreleased modules, and provides consistent guidance for a fully integrated learning experience. During exams, it switches to “anti-cheating” mode, detecting the exam period and automatically transitioning from a study assistant to a basic research tool, disabling direct answers on exam topics.

The AI assistant operates and interacts 24/7, bridging time zones for a community of 350 students from over 80 countries, many of whom are working professionals. This is crucial for those balancing online study with work and personal commitments.

OPIT AI Copilot also supports faculty and staff by grading assignments and generating educational materials, freeing up resources for teaching. It offers professors and tutors self-assessment tools and feedback rubrics that cut correction time by up to 30%.

OPIT AI Copilot was unveiled during the event “AI Agents and the Future of Higher Education” hosted at Microsoft Italy in Milan, bringing together representatives from some of the world’s most prestigious academic institutions to discuss the impact of AI in education. This featured talks from OPIT Rector Francesco Profumo and founder and director Riccardo Ocleppo, as well as Danielle Barrios O’Neill from Royal College of Art and Francisco Machín from IE University.

Through live demos and panel discussions, the event explored how the technological revolution is redefining study, teaching, and interaction between students, educators, and institutions, opening new possibilities for the future of university education.

“We’re in the midst of a deep transformation, where AI is no longer just a tool: it’s an environment, a context that radically changes how we learn, teach, and create. But we must be cautious: it’s not a shortcut. It’s a cultural, ethical, and pedagogical challenge, and to meet it we need the courage to shift perspectives, rethink traditional models, and build solid bridges between human and artificial intelligence,” says Professor Profumo.

“We want to put technology at the service of higher education. We’re ready to develop solutions not only for our own students, but also to share with other global institutions that are eager to innovate the learning experience, to face a future in education that’s fast approaching,” says Ocleppo.

A mobile app is already scheduled for release this autumn, alongside features for downloading exercises, summaries, and concept maps.

Read the full article below:


Read the article
Il Sole 24 Ore: From OPIT, an ‘AI agent’ for students and teachers
OPIT - Open Institute of Technology
July 02, 2025

At its core is a teaching archive of 131 courses, 3,500 hours of video, and 1,800 live sessions

The Open Institute of Technology – a global academic institution offering Bachelor’s and Master’s degrees – has launched OPIT AI Copilot, which aims to use artificial intelligence to revolutionize the learning and teaching experience. Trained on the entire educational archive developed over the last three years (131 courses, 3,500 hours of asynchronous video, 1,800 live sessions per year, and more), the assistant “sees” the student’s progress across course modules, avoids revealing content from modules not yet released, and accompanies them along the way. In addition to tutoring students, OPIT AI Copilot supports teachers and staff by grading assignments and generating teaching materials, freeing up resources for teaching.

Read the full article below:

Read the article
Agenda Digitale: The Five Pillars of the Cloud According to NIST – A Compass for Businesses and Public Administrations
OPIT - Open Institute of Technology
June 26, 2025

By Lokesh Vij, Professor of Cloud Computing Infrastructure, Cloud Development, Cloud Computing Automation and Ops, and Cloud Data Stacks at OPIT – Open Institute of Technology

NIST identifies five key characteristics of cloud computing: on-demand self-service, broad network access, resource pooling, rapid elasticity, and metered service. These pillars explain the success of a global cloud market worth $912 billion in 2025.

In less than twenty years, the cloud has gone from a curiosity to an indispensable infrastructure. According to Precedence Research, the global market will reach $912 billion in 2025 and exceed $5.1 trillion by 2034. In Europe, spending for 2025 is expected to approach $202 billion. Behind this success are five characteristics identified by NIST (the National Institute of Standards and Technology): on-demand self-service, broad network access, resource pooling, rapid elasticity, and metered service.

Understanding them means understanding why the cloud is the engine of digital transformation.

On-demand self-service: instant provisioning

The journey through the five pillars starts with the ability to put IT in the hands of users.

Without instant provisioning, the other benefits of the cloud remain potential. Users can turn resources on and off with a click or via API, without tickets or waiting. Provisioning a VM, database, or Kubernetes cluster takes seconds, not weeks, reducing time to market and encouraging continuous experimentation. A DevOps team that releases microservices multiple times a day, or a fintech that tests dozens of credit-scoring models in parallel, benefits from this immediacy. In OPIT labs, students create complete Kubernetes environments in two minutes, run load tests, and tear them down as soon as they’re done, paying only for the minutes actually used.

Similarly, a biomedical research group can temporarily allocate hundreds of GPUs to train a deep-learning model and release them immediately afterwards, without tying up capital in hardware that will age rapidly. This flexibility allows the user to adapt resources to their needs in real time. There are no hard and fast constraints: you can activate a single machine and deactivate it when it is no longer needed, or start dozens of extra instances for a limited time and then release them. You only pay for what you actually use, without waste.
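The start-and-release lifecycle described above can be sketched as a toy model in Python. This is purely illustrative (the class and rate are invented for the example, not any provider's real API): the point is that a resource accrues cost only for the minutes it actually runs.

```python
class OnDemandResource:
    """Toy model of cloud self-service: start and stop a resource at will,
    paying only for the minutes it actually runs. Names and rates are
    illustrative, not a real provider API."""

    def __init__(self, name: str, rate_per_minute: float):
        self.name = name
        self.rate_per_minute = rate_per_minute
        self.minutes_used = 0
        self.running = False

    def start(self) -> None:
        self.running = True

    def run_for(self, minutes: int) -> None:
        # Only a running resource consumes billable minutes.
        if self.running:
            self.minutes_used += minutes

    def stop(self) -> None:
        self.running = False

    @property
    def cost(self) -> float:
        return self.minutes_used * self.rate_per_minute


# A GPU allocated for a two-hour training run, then released immediately.
gpu = OnDemandResource("gpu-cluster", rate_per_minute=0.05)
gpu.start()
gpu.run_for(120)
gpu.stop()
gpu.run_for(60)   # no longer running: these minutes cost nothing
print(gpu.cost)   # 120 minutes * 0.05 per minute
```

In a real provider, the start and stop calls would be API requests or console clicks; the billing principle is the same.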

Broad network access: applications that follow the user everywhere

Once access to resources is made instantaneous, it is necessary to ensure that these resources are accessible from any location and device, maintaining a uniform user experience. The cloud lives on the network and guarantees ubiquity and independence from the device.

A web app based on HTTP/S can be used from a laptop, tablet, or smartphone, without the user knowing where the containers are running. Geographic transparency allows for multi-channel strategies: you start a purchase on your phone and complete it on your desktop without interruptions. For public administration, this means providing digital identities everywhere; for the private sector, it means offering 24/7 customer service.

Broad access moves security from the physical perimeter to the digital identity and introduces zero-trust architecture, where every request is authenticated and authorized regardless of the user’s location.

All you need is a network connection to use the resources: from the office, from home or on the move, from computers and mobile devices. Access is independent of the platform used and occurs via standard web protocols and interfaces, ensuring interoperability.

Shared Resource Pools: The Economy of Scale of Multi-Tenancy

Ubiquitous access would be prohibitive without a sustainable economic model. This is where infrastructure sharing comes in.

The cloud provider’s infrastructure aggregates and shares computational resources among multiple users according to a multi-tenant model. The economies of scale of hyperscale data centers reduce costs and emissions, putting cutting-edge technologies within the reach of startups and SMBs.

Pooling centralizes patching, security, and capacity planning, freeing IT teams from repetitive tasks and reducing the company’s carbon footprint. Providers reinvest energy savings in next-generation hardware and immersion cooling research programs, amplifying the collective benefit.

Rapid Elasticity: Scaling at the Speed of Business

Sharing resources is only effective if their allocation follows business demand in real time. With elasticity, the infrastructure expands or reduces resources in minutes following the load. The system behaves like a rubber band: if more power or more instances are needed to deal with a traffic spike, it automatically scales in real time; when demand drops, the additional resources are deactivated just as quickly.

This flexibility seems to offer unlimited resources. In practice, a company no longer has to buy excess servers to cover peaks in demand (which would remain unused during periods of low activity), but can obtain additional capacity from the cloud only when needed. The economic advantage is considerable: large initial investments are avoided and only the capacity actually used during peak periods is paid for.

In the OPIT cloud automation lab, students simulate a streaming platform that creates new Kubernetes pods as viewers increase and deletes them when the audience drops: a concrete example of balancing user experience and cost control. The effect is twofold: the user does not suffer slowdowns and the company avoids tying up capital in underutilized servers.
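The scaling rule in a lab exercise like that one can be sketched as a simple proportional autoscaler. This is an illustrative sketch under assumed numbers (500 viewers per pod, a ceiling of 50 pods); real platforms such as Kubernetes scale on richer signals like CPU usage and custom metrics.

```python
import math


def desired_pods(viewers: int, viewers_per_pod: int = 500,
                 min_pods: int = 1, max_pods: int = 50) -> int:
    """Pods needed for the current audience, clamped to a safe range.
    The capacity figure (500 viewers per pod) is an assumed value."""
    needed = math.ceil(viewers / viewers_per_pod)
    return max(min_pods, min(max_pods, needed))


# Audience ramps up during a premiere, then drops off again at night.
for viewers in [40, 2_600, 30_000, 120]:
    print(viewers, "viewers ->", desired_pods(viewers), "pods")
```

A very large audience hits the `max_pods` ceiling, and a tiny one never drops below `min_pods`, which mirrors how production autoscalers bound both cost and availability.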

Metered Service: Transparency and Cost Governance

The dynamic scaling generated by elasticity requires precise visibility into consumption and expenses: without measurement there is no governance. Metering makes every second of CPU, every gigabyte, and every API call visible. Every consumption parameter is tracked and made available in transparent reports.

This data enables pay-per-use pricing, i.e. charges proportional to actual usage. For the customer, this translates into variable costs: you only pay for the resources actually consumed. Transparency helps you plan your budget: thanks to real-time data, it is easier to optimize expenses, for example by turning off unused resources. This eliminates unnecessary fixed costs, encouraging efficient use of resources.
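Conceptually, a pay-per-use bill is just an aggregation over metered usage records. A minimal sketch (the resource names and rates below are made up for illustration):

```python
def total_bill(usage_records) -> float:
    """Sum metered consumption. Each record is a tuple of
    (resource_name, units_consumed, rate_per_unit)."""
    return round(sum(units * rate for _, units, rate in usage_records), 2)


# Hypothetical usage for one billing period.
usage = [
    ("vm-minutes",   480, 0.0008),   # a VM run for 8 hours
    ("storage-gb",    25, 0.02),     # 25 GB stored this period
    ("api-calls",  9_000, 0.00001),  # 9,000 metered API calls
]
print(total_bill(usage))  # 480*0.0008 + 25*0.02 + 9000*0.00001
```

Real providers break the same idea down per service and per region, but the governance benefit is identical: every line of the bill traces back to a measured consumption record.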

The systemic value of the five pillars

When the five pillars work together, the effect is multiplicative. Self-service and elasticity enable rapid response to workload changes, increasing or decreasing resources in real time, and fuel continuous experimentation; ubiquitous access and pooling provide global scalability; measurement ensures economic and environmental sustainability.

It is no surprise that the Italian market will grow from $12.4 billion in 2025 to $31.7 billion in 2030, a CAGR of 20.6%. Manufacturers and retailers are migrating mission-critical loads to cloud-native platforms, gaining real-time data insights and reducing time to value.

From the laboratory to the business strategy

From theory to practice: the NIST pillars become a compass for the digital transformation of companies and Public Administration. In the classroom, we start with concrete exercises – such as the stress test of a video platform – to demonstrate the real impact of the five pillars on performance, costs and environmental KPIs.

The same approach can guide CIOs and innovators: if processes, governance and culture embody self-service, ubiquity, pooling, elasticity and measurement, the organization is ready to capture the full value of the cloud. Otherwise, it is necessary to recalibrate the strategy by investing in training, pilot projects and partnerships with providers. The NIST pillars thus confirm themselves not only as a classification model, but as the toolbox with which to build data-driven and sustainable enterprises.

Read the full article below (in Italian):

Read the article
ChatGPT Action Figures & Responsible Artificial Intelligence
OPIT - Open Institute of Technology
June 23, 2025

You’ve probably seen two of the most recent popular social media trends. The first is creating and posting an action figure version of yourself, complete with personalized accessories, from a yoga mat to your favorite musical instrument. The second is the Studio Ghibli trend, which creates an image of you in the style of a character from one of the animation studio’s popular films.

Both of these are possible thanks to OpenAI’s GPT-4o-powered image generator. But what are you risking when you upload a picture to generate this kind of content? More than you might imagine, according to Tom Vazdar, chair of cybersecurity at the Open Institute of Technology (OPIT), in a recent interview with Wired. Let’s take a closer look at the risks and how this issue ties into the issue of responsible artificial intelligence.

Uploading Your Image

To get a personalized image of yourself back from ChatGPT, you need to upload an actual photo, or potentially multiple images, and tell ChatGPT what you want. But in addition to using your image to generate content for you, OpenAI could also be using your willingly submitted image to help train its AI model. Vazdar, who is also CEO and AI & Cybersecurity Strategist at Riskoria and a board member for the Croatian AI Association, says that this kind of content is “a gold mine for training generative models,” but you have limited power over how that image is integrated into their training strategy.

Plus, you are uploading much more than just an image of yourself. Vazdar reminds us that we are handing over “an entire bundle of metadata.” This includes the EXIF data attached to the image, such as exactly when and where the photo was taken. And your photo may contain more than you imagine, with the background – including people, landmarks, and objects – also tied to that time and place.

In addition to this, OpenAI also collects data about the device that you are using to engage with the platform, and, according to Vazdar, “There’s also behavioral data, such as what you typed, what kind of image you asked for, how you interacted with the interface and the frequency of those actions.”

After all that, OpenAI knows a lot about you, and soon, so could their AI model, because it is studying you.

How OpenAI Uses Your Data

OpenAI claims that they did not orchestrate these social media trends simply to get training data for their AI, and that’s almost certainly true. But they also aren’t denying that access to that freely uploaded data is a bonus. As Vazdar points out, “This trend, whether by design or a convenient opportunity, is providing the company with massive volumes of fresh, high-quality facial data from diverse age groups, ethnicities, and geographies.”

OpenAI isn’t the only company using your data to train its AI. Meta recently updated its privacy policy to allow the company to use your personal information on Meta-related services, such as Facebook, Instagram, and WhatsApp, to train its AI. While it is possible to opt out, Meta isn’t advertising that fact or making it easy, which means that most users are sharing their data by default.

You can also control what happens with your data when using ChatGPT. Again, while not well publicized, you can use ChatGPT’s self-service tools to access, export, and delete your personal information, and opt out of having your content used to improve OpenAI’s model. Nevertheless, even if you choose these options, it is still worth it to strip data like location and time from images before uploading them and to consider the privacy of any images, including people and objects in the background, before sharing.
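As a concrete illustration of that last piece of advice: in a JPEG file, EXIF metadata (including GPS coordinates and timestamps) lives in APP1 segments, which can be dropped before upload. Below is a minimal sketch using only the Python standard library; real photos can also carry metadata in other segments such as XMP, so dedicated tools like exiftool or the Pillow library are more thorough in practice.

```python
def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Return a copy of a JPEG with its APP1 (EXIF) segments removed.
    Minimal sketch: walks the JPEG marker segments and drops APP1."""
    if not jpeg_bytes.startswith(b"\xff\xd8"):
        raise ValueError("not a JPEG")
    out = bytearray(b"\xff\xd8")     # keep the SOI marker
    i = 2
    while i < len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            raise ValueError("corrupt segment marker")
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:           # start of scan: image data follows,
            out += jpeg_bytes[i:]    # so copy the rest verbatim and stop
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker != 0xE1:           # keep every segment except APP1 (EXIF)
            out += jpeg_bytes[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

Typical use before sharing would be something like `clean = strip_exif(open("photo.jpg", "rb").read())`, writing `clean` back out as the file you actually upload.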

Are Data Protection Laws Keeping Up?

OpenAI and Meta need to provide these kinds of opt-outs due to data protection laws, such as GDPR in the EU and the UK. GDPR gives you the right to access or delete your data, and the use of biometric data requires your explicit consent. However, your photo only becomes biometric data when it is processed using a specific technical measure that allows for the unique identification of an individual.

But just because ChatGPT is not using this technology doesn’t mean it can’t learn a lot about you from your images.

AI and Ethics Concerns

But you might wonder, “Isn’t it a good thing that AI is being trained using a diverse range of photos?” After all, there have been widespread reports in the past of AI struggling to recognize black faces because models have been trained mostly on white faces. Similarly, there have been reports of bias within AI due to the information it receives. Doesn’t sharing from a wide range of users help combat that? Yes, but there is so much more that could be done with that data without your knowledge or consent.

One of the biggest risks is that the data can be manipulated for marketing purposes, not just to get you to buy products, but also potentially to manipulate behavior. Take, for instance, the Cambridge Analytica scandal, which saw AI used to manipulate voters and the proliferation of deepfakes sharing false news.

Vazdar believes that AI should be used to promote human freedom and autonomy, not threaten it. It should be something that benefits humanity in the broadest possible sense, and not just those with the power to develop and profit from AI.

Responsible Artificial Intelligence

OPIT’s Master’s in Responsible AI combines technical expertise with a focus on the ethical implications of AI, diving into questions such as this one. Focusing on real-world applications, the course considers sustainable AI, environmental impact, ethical considerations, and social responsibility.

Completed over three or four 13-week terms, it starts with a foundation in technical artificial intelligence and then moves on to advanced AI applications. Students finish with a Capstone project, which sees them apply what they have learned to real-world problems.

Read the article
Juggling Work and Study: Interview With OPIT Student Karina
OPIT - Open Institute of Technology
June 05, 2025

During the Open Institute of Technology’s (OPIT’s) 2025 Graduation Day, we conducted interviews with many recent graduates to understand why they chose OPIT, how they felt about the course, and what advice they might give to others considering studying at OPIT.

Karina is an experienced FinTech professional with a background as an integration manager, ERP specialist, and business analyst. She was interested in learning AI applications to expand her career possibilities, and she chose OPIT’s MSc in Applied Data Science & AI.

In the interview, Karina discussed why she chose OPIT over other courses of study, the main challenges she faced when completing the course while working full-time, and the kind of support she received from OPIT and other students.

Why Study at OPIT?

Karina explained that she was interested in enhancing her AI skills to take advantage of a major emerging technology in the FinTech field. She said that she was looking for a course that was affordable and that she could manage alongside her current demanding job. Karina noted that she did not have the luxury to take time off to become a full-time student.

She was principally looking at courses in the United States and the United Kingdom. She found that comprehensive courses were expensive, costing upwards of $50,000, and did not always offer flexible study options. Meanwhile, flexible courses that she could complete while working offered excellent individual modules, but didn’t always add up to a coherent whole. This was something that set OPIT apart.

Karina admits that she was initially skeptical when she encountered OPIT because, at the time, it was still very new. OPIT only started offering courses in September 2023, so 2025 saw its first cohort of graduates.

Nevertheless, Karina was interested in OPIT’s affordable study options and the flexibility of fully remote learning and part-time options. She said that when she looked into the course, she realized that it aligned very closely with what she was looking for.

In particular, Karina noted that she was always wary of further study because of the level of mathematics required in most computer science courses. She appreciated that OPIT’s course focused on understanding the underlying core principles and the potential applications, rather than the fine programming and mathematical details. This made the course more applicable to her professional life.

OPIT’s MSc in Applied Data Science & AI

The course Karina took was OPIT’s MSc in Applied Data Science & AI. It is a three- to four-term course (each term lasting 13 weeks), which can take between one and two years to complete, depending on the pace you choose and whether you take the 90 or 120 ECTS option. As well as part-time, there are also regular and fast-track options.

The course is fully online and completed in English, with an accessible tuition fee of €2,250 per term, which is €6,750 for the 90 ECTS course and €9,000 for the 120 ECTS course. Payment plans and scholarships are available, and discounts are offered if you pay the full amount upfront.

It matches foundational tech modules with business application modules to build a strong foundation. It then ends with a term-long research project culminating in a thesis. Internships with industry partners are encouraged and facilitated by OPIT, or professionals can work on projects within their own companies.

Entry requirements include a bachelor’s degree or equivalency in any field, including non-tech fields, and English proficiency to a B2 level.

Faculty members include Pierluigi Casale, a former Data Science and AI Innovation Officer for the European Parliament and Principal Data Scientist at TomTom; Paco Awissi, former VP at PSL Group and an instructor at McGill University; and Marzi Bakhshandeh, a Senior Product Manager at ING.

Challenges and Support

Karina shared that her biggest challenge while studying at OPIT was time management and juggling the heavy learning schedule with her hectic job. She admitted that when balancing the two, there were times when her social life suffered, but it was doable. The key to her success was organization, time management, and the support of the rest of the cohort.

According to Karina, the cohort WhatsApp group was often a lifeline that helped keep her focused and optimistic during challenging times. Sharing challenges with others in the same boat and seeing the example of her peers often helped.

The OPIT Cohort

OPIT has a wide and varied cohort with over 300 students studying remotely from 78 countries around the world. Around 80% of OPIT’s students are already working professionals who are currently employed at top companies in a variety of industries. This includes global tech firms such as Accenture, Cisco, and Broadcom, FinTech companies like UBS, PwC, Deloitte, and the First Bank of Nigeria, and innovative startups and enterprises like Dynatrace, Leonardo, and the Pharo Foundation.

Study Methods

This cohort meets in OPIT’s online classrooms, powered by the Canvas Learning Management System (LMS). One of the world’s leading teaching and learning platforms, Canvas acts as a virtual hub for all of OPIT’s academic activities, including live lectures and discussion boards. OPIT also uses the same portal to conduct continuous assessments and prepare students for final exams.

If you want to collaborate with other students, there is a collaboration tab where you can set up workrooms, and also an official Slack platform. Students tend to use WhatsApp for other informal communications.

If students need additional support, they can book an appointment with the course coordinator through Canvas to get advice on managing their workload and balancing their commitments. Students also get access to experienced career advisor Mike McCulloch, who can provide expert guidance.

A Supportive Environment

These services and resources create a supportive environment for OPIT students, which Karina says helped her throughout her course of study. Karina suggests organization and leaning into help from the community are the best ways to succeed when studying with OPIT.

Read the article
Agenda Digitale: AI Ethics Starts with Data – The Role of Training
OPIT - Open Institute of Technology
May 20, 2025

By Riccardo Ocleppo, Founder and Director of OPIT – Open Institute of Technology

AI ethics requires ongoing commitment. Organizations must integrate guidelines and a corporate culture geared towards responsibility and inclusiveness, preventing negative consequences for individuals and society.

In the world of artificial intelligence, concerns about algorithmic bias are coming to the forefront, calling for a collective effort to promote ethical practices in the development and use of AI.

This implies the need to understand the multiple causes and potential consequences of the biases themselves, identify concrete solutions and recognize the key role of academic institutions in this process.

Bias in AI is a form of injustice, often systemic, that can be embedded in algorithms. Its origins are many, but the main culprit is almost always the data set used to train the models. If this data reflects inequalities or prejudices present in society, the risk is that AI will absorb and reproduce them, consolidating these distortions.

But bias can also manifest itself in the opposite direction. This is what happened some time ago with Google Gemini. The generative AI system developed by Google, in an attempt to ensure greater inclusivity, ended up generating content and images completely disconnected from the reality it was supposed to represent.

Further complicating the picture is the very nature of AI models, which are often characterized by complex algorithms and opaque decision-making processes. This complexity makes it difficult to identify, and therefore correct, biases inherent in the systems.

Ethical Data Management to Reduce Bias in AI

Adopting good data management practices is essential to address these issues. The first step is to ensure that the datasets used for training are diverse and representative. This means actively seeking data that includes a wide variety of demographic, cultural, and social contexts, so as to avoid AI exclusively reproducing existing and potentially biased models.

Alongside data diversification, it is equally important to test models on different demographic groups. Only in this way can latent biases that would otherwise remain invisible be highlighted. Furthermore, promoting transparency in algorithms and decision-making processes is crucial. Transparency allows for critical control and makes all actors involved in the design and use of AI accountable.

Strategies for ethical and responsible artificial intelligence

Building ethical AI is not an isolated action, but an ongoing journey that requires constant attention and updating. This commitment is divided into several fundamental steps. First, ethical guidelines must be defined. Organizations must clearly establish the ethical standards to follow in the development and use of AI, inspired by fundamental values such as fairness, responsibility, and transparency. These principles serve as a compass to guide all projects.

It is also essential to include a plurality of perspectives in the development of AI. Multidisciplinary teams, composed of technologists, ethicists, sociologists, and representatives of the communities potentially involved, can help prevent and correct biases thanks to their variety of approaches. Last but not least, organizations must promote an ethical culture: in addition to establishing rules and composing diverse teams, it is essential to cultivate a corporate culture that places ethics at the center of every project. Only by integrating these values into the DNA of the organization can we ensure that ethics is a founding element of AI development.

The consequences of biased artificial intelligence

Ignoring the problem of bias can have serious and unpredictable consequences, with profound impacts on different areas of our lives. From the reinforcement of social inequalities to the loss of trust in AI-based systems, the risk is to fuel skepticism and resistance towards technological innovation. AI, if distorted, can negatively influence crucial decisions in sectors such as healthcare, employment and justice. Think, for example, of loan selection algorithms that unfairly penalize certain categories, or facial recognition software that incorrectly identifies people, with possible legal consequences. These are just some of the situations in which an unethical use of AI can worsen existing inequalities.

University training and research to counter bias in AI

Universities and higher education institutions have a crucial responsibility to address bias and promote ethical practices in AI development. Ethics must certainly be integrated into educational curricula. By including ethics modules in AI and computer science courses, universities can provide new generations of developers with the tools to recognize and address bias, contributing to more equitable and inclusive design. Universities can also be protagonists through research.

Academic institutions, with their autonomy and expertise, can explore the complexities of bias in depth, developing innovative solutions for detecting and mitigating bias. Since the topic of bias is multidimensional in nature, a collaborative approach is needed, thus fostering interdisciplinary collaboration. Universities can create spaces where computer scientists, ethicists, lawyers, and social scientists work together, offering more comprehensive and innovative solutions.

But that’s not all. As places of critical thinking and debate, universities can foster dialogue between developers, policy makers, and citizens through events, workshops, and conferences. This engagement is essential to raise awareness and promote responsible use of AI.

In this direction, several universities have already activated degree courses in artificial intelligence that combine advanced technical skills (in areas such as machine learning, computer vision and natural language processing) with training that is attentive to ethical and human implications.

Academic Opportunities for an Equitable AI Future

More and more universities around the world – including Yale and Oxford – are also creating research departments dedicated to AI and ethics.

The path to ethical AI is complex, but it also represents an opportunity to build a future where technology truly serves the common good.

By recognizing the root causes of bias, adopting responsible data practices, and engaging in ongoing and vigilant development, we can reduce the unintended effects of biased algorithms. In this process, academic institutions – thanks to their expertise and authority – are at the forefront, helping to shape a more equitable and inclusive digital age.

TechFinancials: Are We Raising AI Correctly?
OPIT - Open Institute of Technology
May 20, 2025


By Zorina Alliata

Artificial intelligence (AI) used to be the stuff of science fiction. Stories about rogue machines and robot uprisings were once a source of amusement, not anxiety. But over recent years, AI has quietly embedded itself in our daily lives, from the algorithms behind social media feeds to the voice assistants managing our calendars.

This quiet takeover has become something far louder: fear.

Headlines around AI are often alarmist. Statements such as “AI will take your job”, “AI will end education”, or “AI is dangerous and unregulated” are thrown around regularly. These narratives feed on uncertainty and fuel distrust.

But it doesn’t have to be this way. The hyper-fixation on the never-ending negative aspects of AI is the wrong approach to take. What if AI isn’t the villain? What if, at this stage, it’s simply a child?

AI, in many ways, is still learning. It mimics human behaviour, absorbs language, and forms patterns based on what it sees. Its current capabilities, however powerful they may seem, are not equivalent to human intelligence. It has limitations. It makes mistakes. It can even be manipulated and misled. It reflects our world, flaws and all. In that sense, AI is less an omnipotent force and more like a toddler trying to find its way.

And, like any child, it needs guidance.

This is especially evident in education. The emergence of AI tools such as ChatGPT has caused a stir in higher education institutions and universities, sparking fears about plagiarism and the erosion of critical thinking. Some institutions have responded with strict bans, while others have embraced cautious integration. The panic is understandable, but is it misplaced?

Rather than jumping to conclusions, educators should consider shifting the conversation. AI can, in fact, become an ally in learning. Instead of assuming students will cheat, we can teach them to use AI responsibly. Most educators can already recognise the signs of AI-generated work: excessive use of numbered lists, repetitive language, and poor comparison skills. So why not use this as a teaching opportunity?

Encouraging students to engage with AI critically, understanding what it’s good at, where it falls short, and how to improve its output, can strengthen their own analytical skills. It invites them to become more active participants in their learning, not passive consumers of machine-generated answers. Teaching young people how to work with AI is arguably more important than shielding them from it.

Outside the classroom, AI’s impact on the workforce is another growing concern. Stories about AI replacing jobs often dominate the news cycle. But these conversations often ignore a key point: AI is not autonomous. AI needs human designers, engineers, analysts, and ethicists to guide it. For every job that AI may eliminate, others will emerge to support and direct it.

More importantly, there are many things AI simply cannot do. It doesn’t understand nuance, morality or emotion. It can’t make ethical decisions without human input. These aren’t minor gaps; they’re fundamental. That’s why we must stop imagining AI as an unstoppable force and start thinking about how to raise it responsibly.

When considering how to raise our AI child responsibly, we need to acknowledge the issue of algorithmic bias. Critics often point out that AI reproduces prejudices and errors, and whilst this is true, the source of that bias is us. It is important to remember that AI learns from historical data created by us, much of which reflects deeply ingrained societal inequalities.

Take, for example, mortgage lending in the US, where decades of discriminatory practices have skewed the data. Unless we intervene, AI trained on this information will inevitably reflect those same biases.

That’s not a reason to reject AI. It’s a reason to be more involved in its development, like any good parent. The responsibility lies with us.

Parenting is not about control for control’s sake; it’s about nurturing growth while setting boundaries. AI, like a child, needs feedback, accountability, and care. It will grow, but how it grows is up to us.

It’s tempting to view technology as something that happens to us, rather than something we can shape. But AI doesn’t exist outside of society, it’s a product of our values, decisions, and input. If we treat it as a monster, it may become one. If we treat it as a mirror, it will reflect what we show it. And if we treat it as a child, we may be able to raise it into something better.

So instead of fearmongering, let’s ask ourselves a better question: Are we raising AI correctly?

Wired: Think Twice Before Creating That ChatGPT Action Figure
OPIT - Open Institute of Technology
May 12, 2025

Source:

  • Wired, published on May 1, 2025

People are using ChatGPT’s new image generator to take part in viral social media trends. But using it also puts your privacy at risk—unless you take a few simple steps to protect yourself.

By Kate O’Flaherty

At the start of April, an influx of action figures started appearing on social media sites including LinkedIn and X. Each figure depicted the person who had created it with uncanny accuracy, complete with personalized accessories such as reusable coffee cups, yoga mats, and headphones.

All this is possible because of OpenAI’s new GPT-4o-powered image generator, which supercharges ChatGPT’s ability to edit pictures, render text, and more. OpenAI’s ChatGPT image generator can also create pictures in the style of Japanese animated film company Studio Ghibli—a trend that quickly went viral, too.

The images are fun and easy to make—all you need is a free ChatGPT account and a photo. Yet to create an action figure or Studio Ghibli-style image, you also need to hand over a lot of data to OpenAI, which could be used to train its models.

Hidden Data

The data you are giving away when you use an AI image editor is often hidden. Every time you upload an image to ChatGPT, you’re potentially handing over “an entire bundle of metadata,” says Tom Vazdar, area chair for cybersecurity at Open Institute of Technology. “That includes the EXIF data attached to the image file, such as the time the photo was taken and the GPS coordinates of where it was shot.”
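The EXIF metadata Vazdar describes lives inside the image file itself, in a tagged segment of the JPEG, so you can check for it before uploading. As a minimal sketch (the function name and byte-level details are illustrative, not taken from the article), the following Python scans a JPEG's marker segments for the APP1 block where Exif data, including timestamps and GPS coordinates, is stored:

```python
import struct

def has_exif(data: bytes) -> bool:
    """Return True if the JPEG byte stream contains an APP1 Exif segment,
    the block where camera timestamps and GPS coordinates are stored."""
    if data[:2] != b"\xff\xd8":          # every JPEG starts with the SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:              # not a valid marker; stop scanning
            break
        marker = data[i + 1]
        if marker in (0xD8, 0xD9):       # SOI/EOI markers carry no payload
            break
        # Each remaining segment carries a 2-byte big-endian length field
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        # APP1 (0xE1) segments holding Exif data begin with the "Exif\0\0" signature
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length                  # jump to the next marker
    return False
```

In practice, re-exporting a photo through a screenshot or a "remove metadata" option in a photo editor strips this segment, which is one simple way to limit what an upload reveals.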

OpenAI also collects data about the device you’re using to access the platform. That means your device type, operating system, browser version, and unique identifiers, says Vazdar. “And because platforms like ChatGPT operate conversationally, there’s also behavioral data, such as what you typed, what kind of images you asked for, how you interacted with the interface and the frequency of those actions.”

It’s not just your face. If you upload a high-resolution photo, you’re giving OpenAI whatever else is in the image, too—the background, other people, things in your room and anything readable such as documents or badges, says Camden Woollven, group head of AI product marketing at risk management firm GRC International Group.

This type of voluntarily provided, consent-backed data is “a gold mine for training generative models,” especially multimodal ones that rely on visual inputs, says Vazdar.

OpenAI denies it is orchestrating viral photo trends as a ploy to collect user data, yet the firm certainly gains an advantage from it. OpenAI doesn’t need to scrape the web for your face if you’re happily uploading it yourself, Vazdar points out. “This trend, whether by design or a convenient opportunity, is providing the company with massive volumes of fresh, high-quality facial data from diverse age groups, ethnicities, and geographies.”

OpenAI says it does not actively seek out personal information to train models—and it doesn’t use public data on the internet to build profiles about people to advertise to them or sell their data, an OpenAI spokesperson tells WIRED. However, under OpenAI’s current privacy policy, images submitted through ChatGPT can be retained and used to improve its models.

Any data, prompts, or requests you share helps teach the algorithm—and personalized information helps fine-tune it further, says Jake Moore, global cybersecurity adviser at security outfit ESET, who created his own action figure to demonstrate the privacy risks of the trend on LinkedIn.

Uncanny Likeness

In some markets, your photos are protected by regulation. In the UK and EU, data-protection regulations, including the GDPR, offer strong protections, including the right to access or delete your data. At the same time, the use of biometric data requires explicit consent.

However, photographs become biometric data only when processed through a specific technical means allowing the unique identification of a specific individual, says Melissa Hall, senior associate at law firm MFMac. Processing an image to create a cartoon version of the subject in the original photograph is “unlikely to meet this definition,” she says.

Meanwhile, in the US, privacy protections vary. “California and Illinois are leading with stronger data protection laws, but there is no standard position across all US states,” says Annalisa Checchi, a partner at IP law firm Ionic Legal. And OpenAI’s privacy policy doesn’t contain an explicit carve-out for likeness or biometric data, which “creates a grey area for stylized facial uploads,” Checchi says.

The risks include your image or likeness being retained, potentially used to train future models, or combined with other data for profiling, says Checchi. “While these platforms often prioritize safety, the long-term use of your likeness is still poorly understood—and hard to retract once uploaded.”

OpenAI says its users’ privacy and security is a top priority. The firm wants its AI models to learn about the world, not private individuals, and it actively minimizes the collection of personal information, an OpenAI spokesperson tells WIRED.

Meanwhile, users have control over how their data is used, with self-service tools to access, export, or delete personal information. You can also opt out of having content used to improve models, according to OpenAI.

ChatGPT Free, Plus, and Pro users can control whether they contribute to future model improvements in their data controls settings. OpenAI does not train on ChatGPT Team, Enterprise, and Edu customer data by default, according to the company.
