When artificial intelligence can interpret facial expressions, voice tone, and behavioral cues to infer a person’s emotional state, some fundamental questions arise:
- Where is the line between personalized services and intrusive profiling?
- How far can companies go without compromising fundamental rights?
At the International Conference “Perspectives of Banking and Financial Law” (Faculty of Law, ASE), our colleague Mihai Cărăbaș explored this topic in depth, highlighting:
✓ How institutions use AI to evaluate the emotional state of users or the performance of employees
✓ When “personalization” becomes prohibited profiling
✓ The risks of using biometric and behavioral data as filters for access to services, whether in the banking or the social sector
From a legal standpoint, the GDPR and the EU AI Act are the fundamental pillars regulating these technologies. As professionals, we have a responsibility to understand them deeply and integrate them ethically into our work.
A common-sense conclusion?
Artificial intelligence should continue to evolve, but always with human oversight or human alternatives in place. Technology’s role should not be to punish or make final judgments, but to support, to help us learn, and to bring clarity, without replacing human discernment.
If you’d like to watch the full conference, I’ve shared the link in the first comment.
The specific discussion on profiling and emotional AI begins at 7:10.