What Counts as a “Prohibited Practice”? AI Applications That Cross the Line

Artificial intelligence isn’t just clever code anymore. It’s in your phone, your car, your favorite streaming app. It reads patterns, predicts needs, even writes text. But like any powerful tool, AI can go wrong fast when it starts deciding things it shouldn’t.

So, what counts as a “prohibited practice”? AI applications that cross the line are those that don’t just make mistakes — they break trust. They blur the border between innovation and intrusion. Some collect personal data without consent. Others judge people unfairly. A few go as far as shaping behavior for profit.

Governments, especially in Europe, are drawing clear lines. The EU's AI Act, for instance, bans certain systems outright. But beyond the legal jargon, there's a moral question underneath: how far is too far when machines start understanding us, or pretending to?

Let’s look at the main types of AI systems that push beyond ethics and into prohibited territory.

Social Scoring Systems

Turning reputation into a number

Imagine a society where a single score decides if you can travel, rent an apartment, or even swipe right. That's social scoring, and it's already happening in some places. These systems track what you buy, say, or post, then use that data to rank you.

It sounds efficient on paper: reward the “good,” penalize the “bad.” But real life doesn’t work that neatly. Humans are messy. We argue, experiment, and sometimes make mistakes. A scoring system sees only data points, not context.

Worse, it can punish people unfairly. Miss a payment? Lose points. Criticize an official? Lose more. Before long, freedom feels conditional. Social scoring turns personal life into a leaderboard — one that no one asked to join.
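To make the problem concrete, here is a toy sketch in Python. The events, point values, and starting score are all invented for illustration, not taken from any real system. Each life event becomes a bare number, and whatever context surrounded it simply vanishes:

```python
# Hypothetical rule-based social score. Every event is reduced to a fixed
# point value; the circumstances behind it never enter the calculation.
RULES = {
    "missed_payment": -50,   # no distinction between hardship and neglect
    "on_time_payment": +10,
    "critical_post": -30,    # penalises speech itself, not actual risk
    "volunteer_hours": +20,
}

def social_score(events, start=500):
    """Sum fixed point values over a person's logged events."""
    score = start
    for event in events:
        score += RULES.get(event, 0)  # unlisted events are silently ignored
    return score

# One missed payment plus one critical post, whatever the reasons:
print(social_score(["missed_payment", "critical_post"]))  # prints 420
```

The score comes out the same whether the missed payment came from illness or indifference. That blindness to context is exactly what regulators object to.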

That’s why it’s on the prohibited list. It undermines equality and privacy at once. AI should inform decisions, not become the judge, jury, and warden.

Exploitative Systems Targeting Vulnerabilities

When algorithms push people over the edge

AI doesn’t need to shout to manipulate. It studies behavior quietly — clicks, pauses, purchases — and then nudges you toward another choice. For most of us, it’s harmless. For vulnerable people, it can be devastating.

Take gambling platforms that track emotional data. When they sense frustration, they send a “special offer.” The timing feels personal, but it’s calculated. The algorithm knows you’re at your weakest moment.

Or consider kids’ apps that make it easy to spend money — bright colors, fake rewards, endless loops. These aren’t bugs. They’re business models.

Regulators see this kind of AI as predatory. It exploits emotion and psychology rather than serving needs. The line is clear: once a system learns to profit from distress, it stops being technology and starts being manipulation.

Ethical design means protecting the vulnerable — not squeezing them for clicks.

Biometric Categorisation Systems Based on Sensitive Characteristics

The danger of classifying people by how they look

Biometric data feels futuristic — face scans, voiceprints, iris patterns. When used properly, it’s handy. But when it starts categorizing people by race, gender, or sexuality, it turns sinister fast.

Some AI systems claim to “detect” character traits or political leanings just from facial structure. That’s junk science wrapped in expensive software. It revives old prejudices under new names.

Worse still, it often happens without consent. People get scanned while walking through airports, stores, or offices, unaware they’re part of an experiment.

The EU calls such systems prohibited for good reason. They strip away individuality and replace it with stereotypes. No machine should sort humans into boxes based on appearance.

Technology should see beyond faces, not fixate on them. We’ve come too far as a society to let algorithms reopen those wounds.

Real-time Remote Biometric Identification in Public Spaces

Always watching, rarely protecting

Walk into a stadium, a subway, or a shopping mall. Somewhere above, a camera might already know who you are. Real-time biometric surveillance is sold as “public safety.” It promises faster responses and fewer crimes. The catch? It watches everyone, all the time.

When you can be identified in seconds, anonymity vanishes. Maybe you haven’t done anything wrong, but that doesn’t matter. You’re part of a system that treats everyone like a potential suspect.

History shows where that road leads. Once governments deploy such systems, they rarely pull them back. Mission creep sets in. What starts as crime prevention quietly becomes political control.

That’s why lawmakers are cautious. Real-time biometric tracking erodes public trust faster than it builds security. It chills expression. It changes how people move, speak, and even think in public spaces.

A free society can’t thrive under permanent surveillance.

Emotion Recognition in the Workplace and Educational Institutions

When a camera claims to read your feelings

There’s something deeply unsettling about an algorithm telling you how you feel. Yet companies and schools are trying it. Emotion recognition tools promise to “measure engagement” or “enhance productivity.” What they actually do is create pressure and mistrust.

In offices, cameras scan faces to check who looks “motivated.” A raised eyebrow might be logged as disinterest. A yawn could count as poor performance. People start performing for the machine, not their manager.

In classrooms, it’s worse. A quiet student gets marked as “unfocused.” A child looking away for a moment triggers an alert. The software can’t tell the difference between boredom and grief.

Scientists say emotion recognition is shaky at best. Human feelings don’t fit neatly into charts. That’s why using it in workplaces or schools is seen as a prohibited practice. It turns people into metrics and chips away at trust.

True empathy can’t be automated. It takes a conversation, not a camera.

Predictive Policing Based on Profiling

When numbers replace nuance

Predictive policing sounds efficient: feed old crime data into an algorithm, and it tells you where to send patrols. But data reflects bias. If one area has been over-policed in the past, the system flags it again and again.

The result? A feedback loop. More police, more arrests, more “evidence” that the area is dangerous. It doesn’t fix inequality; it reinforces it.
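The feedback loop is easy to demonstrate with a deliberately simple, fully deterministic simulation. All numbers here are invented, and this models no real deployment. Both areas share the same underlying incident rate, yet the initial patrol imbalance never goes away:

```python
# Deterministic toy sketch of the predictive-policing feedback loop
# (an illustration, not any real system). Two areas have the SAME
# underlying incident rate, but area A starts with more patrols.
true_rate = 0.25                  # identical in both areas
patrols = {"A": 8.0, "B": 2.0}    # 10 patrol units, unevenly placed

for year in range(5):
    # Recorded crime tracks patrol presence, not the underlying rate:
    # officers can only record what they are there to see.
    recorded = {area: true_rate * patrols[area] for area in patrols}
    # The "data-driven" step reallocates all 10 units by recorded crime.
    total = sum(recorded.values())
    patrols = {area: 10 * recorded[area] / total for area in patrols}

print(patrols)  # prints {'A': 8.0, 'B': 2.0}: the initial bias never decays
```

Nothing amplifies here, but nothing corrects either. The recorded data can never reveal that the two areas were identical to begin with.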

Some systems even generate “risk scores” for individuals. Imagine being flagged as a potential criminal based on where you live or who you know. That’s not protection — that’s profiling.

Many regulators now call predictive policing a prohibited practice, especially when it uses personal data. It replaces human judgment with pattern recognition, and patterns can be poisonous.

Technology should help find truth, not manufacture it. Once algorithms decide who looks suspicious, fairness fades into fiction.

AI Systems That Manipulate Human Behaviour

The invisible hand that shapes your mind

Ever notice how social media keeps showing you exactly what makes you angry? That's not an accident. It's engagement engineering. AI learns which emotions hold your attention, then feeds you more of the same.

These systems don’t yell. They whisper. They shape opinions, polarize groups, and nudge choices subtly over time. People think they’re acting freely when they’re really being steered.
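Here is a stripped-down sketch of that dynamic, with an invented three-emotion "user model" standing in for real engagement data. The loop tries each kind of content once, then locks onto whatever held attention longest:

```python
# Toy "engagement engineering" loop (hypothetical numbers, no real platform).
# The system learns which emotional register keeps a user watching, then
# serves almost nothing else.
watch_time = {"anger": 0.0, "joy": 0.0, "calm": 0.0}   # total seconds watched
counts = {emotion: 0 for emotion in watch_time}         # items served so far

def user_reaction(emotion):
    # Assumed user model: outrage holds attention longest.
    return {"anger": 30, "joy": 12, "calm": 5}[emotion]

feed = []
for step in range(10):
    untried = [e for e in watch_time if counts[e] == 0]
    if untried:
        pick = untried[0]  # explore each emotional register once
    else:
        # ...then exploit: serve the best average watch time, forever
        pick = max(watch_time, key=lambda e: watch_time[e] / counts[e])
    watch_time[pick] += user_reaction(pick)
    counts[pick] += 1
    feed.append(pick)

print(feed)  # after three exploratory picks, the feed is all "anger"
```

Nothing in the loop is malicious; it just maximizes a metric. That is the point: the manipulation falls out of the objective, not out of anyone's intent.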

Political campaigns have used similar tactics. Ads tailored to fears or biases appear only to select audiences. That level of micro-manipulation used to be impossible. Now it’s business as usual.

AI that manipulates behavior crosses a moral and legal line. It undermines free will, distorts truth, and destabilizes communities. Regulators want transparency — users deserve to know when they’re being influenced.

At its best, AI should guide, not govern. Once machines start playing puppet master, democracy itself begins to tremble.

A Moment of Reflection

A story from a few years ago still haunts teachers in one California school. A new classroom tool claimed it could track engagement by analyzing students’ faces. One day, it marked a quiet boy as “disinterested.” His teacher later found out he’d lost a family member the week before.

That’s what happens when empathy gets outsourced to code. The system wasn’t evil; it was clueless. It couldn’t read pain, only pixels.

This isn’t a story about technology failing. It’s about humans forgetting what technology can’t do.

Conclusion

What counts as a “prohibited practice”? AI applications that cross the line don’t just make errors; they commit ethical breaches. They strip away privacy, autonomy, or dignity in the name of progress.

Social scoring. Biometric profiling. Emotional surveillance. Predictive policing. Each shows what happens when innovation skips introspection.

Rules like the EU AI Act are a start, but real change depends on conscience, not just compliance. Engineers, executives, and lawmakers all share a duty to ask the hard questions: Who benefits? Who gets hurt? Who’s accountable?

Technology should serve people, not the other way around. Once machines begin deciding what makes us trustworthy, emotional, or criminal, we lose more than privacy — we lose humanity’s core privilege: choice.

So maybe the real test of progress isn’t how smart AI gets, but how wise we stay.

Frequently Asked Questions


How can organizations avoid prohibited AI practices?
By conducting risk assessments, auditing AI systems, and following local and international laws.

Can a banned AI application ever be made acceptable?
Sometimes. If redesigned with transparency and fairness, certain systems can serve ethical purposes.

What is the key takeaway for building responsible AI?
It’s about balance: innovation with accountability, creativity with conscience, and progress with protection.

Why does the EU AI Act prohibit these practices?
To protect human rights, ensure transparency, and promote trustworthy innovation across industries.

About the author

Julia Kim

Contributor

Julia Kim is an innovative mobile application specialist with 15 years of experience developing user-centered design frameworks, accessibility integration strategies, and cross-platform development methodologies for diverse user populations. Julia has transformed how organizations approach app development through her inclusive design principles and created several groundbreaking approaches to universal usability. She's dedicated to ensuring digital experiences work for everyone regardless of ability and believes that accessibility drives innovation that benefits all users. Julia's human-centered methods guide development teams, product managers, and design professionals creating mobile experiences that truly serve their entire audience.
