AI literacy: From abstract concept to strategic priority

07 Oct, 2025

The rapid advancements in artificial intelligence (AI) over recent years have been astounding, driving one of the most significant and profound technological transformations in recent history. What was once confined to laboratories and niche specialties has become part of the daily reality of nearly every organization, from productivity software to management, marketing, operational, and customer support tools.

From Concept to Legal Obligation

With the entry into force of the European Union's AI Act on August 1, 2024, AI literacy transitioned from a mere “good practice” to a legally binding obligation. While many provisions of the AI Act take effect in stages (until the regulatory framework is fully applicable across the EU by 2027), the specific AI literacy obligation set out in Article 4 has applied since February 2, 2025. This means that AI literacy is already a mandatory requirement for organizations that use, develop, or provide AI systems.

The regulation links AI literacy to the protection of fundamental rights, safety, transparency, and democratic oversight. In practical terms, organizations cannot simply “adopt AI”. They must ensure that those dealing with AI systems have knowledge proportionate to their role, namely, understanding risks to fundamental rights and safety, correctly interpreting outputs, applying appropriate protective measures, and knowing the technology's limits. The goal is not to turn everyone into technical experts but rather to establish a practical baseline of literacy that enables the conscious, responsible, and compliant use of technology.

Why Is AI Literacy Also a Strategic Necessity?

While AI literacy is now a legal requirement, it has also become a strategic necessity.

Legally speaking, the AI literacy obligation is already in force, and organizations should prepare for increased supervision by European and national authorities.

From a strategic perspective, market trust has become critical, and this is especially true in the current economic climate. Clients, partners, and investors expect assurances that AI is used ethically, safely, and effectively. The ability to demonstrate responsible AI practices is not just a competitive advantage; it is a decisive criterion for maintaining business relationships and accessing financing or new markets.

From Theory to Practice: First Steps That Can Be Taken

This is where AI literacy stops being an abstract concept and becomes a concrete practice. Without wanting to turn this article into a training manual, we thought it would be useful to share a few practical steps that can help an organization begin this journey:

1. Map where AI is already present

Many organizations already use tools with AI functionalities in day-to-day tasks, including recruitment software, digital marketing campaigns, customer-support chatbots, and even compliance solutions, often without these uses having been discussed or formally approved by management. Identifying them is essential to understanding the associated risks and opportunities.

2. Build a common language

Not everyone needs to be an expert, but there must be a shared baseline of concepts. What does “hallucination” mean? What constitutes a “high-risk” system? When should an output be validated manually? Having a common, organization-wide AI language prevents misunderstandings, facilitates collaboration, and reduces risk.

3. Define internal principles for use

AI literacy is not limited to theoretical knowledge; it also involves the ability to apply that knowledge consciously in day-to-day work. And simple rules make a difference. For example: AI can support processes, but it should not replace human decision-making in critical matters. Sensitive data should not be entered into external tools without prior validation. And outputs should always be verified before use. These principles serve as practical, accessible guides for all employees.

4. Train leaders and decision-makers, involving Legal and Compliance

Soon, almost all organizations will use AI systems, often integrated into everyday software. This means that leaders and decision-makers must be prepared to understand risks, validate results, and assume responsibility. Yet it is often at this level that significant gaps remain, with decision-makers still making critical decisions without adequate AI knowledge, exposing organizations to legal, reputational, and operational risks.

Against this backdrop, Legal and Compliance functions play a central role, translating abstract principles into clear policies, creating appropriate validation processes, and ensuring AI use is aligned not only with the AI Act but also with the organization’s values and strategy.

5. Build a culture of responsible use

AI literacy should not be treated as a stand-alone project. Like cybersecurity, it should be embedded into governance, compliance, and risk-management practices. This requires continuous awareness, role-specific training, and reinforcement of ethical values in the use of technology.

Conclusion

AI literacy is no longer optional. Since February 2025, it has been a legal obligation, and it is also indispensable for earning market trust and ensuring the responsible use of technology.

Many organizations in Portugal and elsewhere are still taking their first steps in this area. The most important thing is to start: investing in training, awareness, and practical governance will allow AI to become not only a driver of innovation but also a source of trust, competitiveness, and sustainable growth.


Contact

This article was prepared by Anne Vogdt, of counsel at Proença de Carvalho. For more information or support in the field of artificial intelligence, including developing internal policies, training, and legal guidance, please visit her profile at proencadecarvalho.com or contact her at amv@proencadecarvalho.com.