Guidance for Intercloud9 customers on the European Artificial Intelligence (AI) Act

Customers using CRM and other digital technologies should be aware of the European Union's Artificial Intelligence (AI) Act.

The European Union’s AI Act is being implemented in phases, with key dates as follows:

  • August 1, 2024: The AI Act entered into force.
  • February 2, 2025: Prohibitions on certain AI systems and requirements on AI literacy start to apply.
  • August 2, 2025: Additional rules become applicable, including those for notified bodies, general-purpose AI models, governance, confidentiality, and penalties.
  • August 2, 2026: The majority of the AI Act’s provisions, including obligations for high-risk AI systems, come into effect.

See: Artificial Intelligence Act EU

Whilst the UK is developing its own AI regulatory framework (expected to be less prescriptive than the EU’s), membership organisations may choose to align with the stricter EU rules to avoid compliance challenges.

The UK government has signalled a preference for a pro-innovation approach to AI regulation, emphasising flexibility and sector-specific guidance over rigid rules. This contrasts with the EU’s more stringent, risk-based framework.

We suggest membership organisations consider adopting EU-compliant practices to mitigate risks when deploying AI applications. Compliance with EU regulation will be a requirement for any membership body delivering services in Europe, and the AI Act could set a de facto global standard, much as the GDPR did, influencing expectations for AI systems worldwide.

The European AI Act seeks to regulate the development, deployment, and use of artificial intelligence in the European Union. Here are the main points:

1. Classification of AI Systems

The AI Act categorises AI systems into four risk levels – unacceptable risk, high risk, limited risk, and minimal risk.

Many CRM tools and digital technologies leveraging AI for customer profiling, predictive analytics, or decision-making could fall into the high-risk category, particularly if they impact fundamental rights (e.g., decisions about creditworthiness or hiring).

Action: Assess whether your AI-driven CRM tools are classified as high-risk under the Act.
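As an illustration only, such an assessment could begin with a simple triage of your AI use cases against the areas most often flagged as potentially high-risk. The tags below are illustrative assumptions, not the Act's official taxonomy, and triage is no substitute for a formal legal review:

```python
# Hypothetical triage helper: flag CRM/AI use cases that may warrant a formal
# high-risk assessment under the AI Act. The tag names are assumptions for
# illustration, not an official classification.

# Areas commonly cited as potentially high-risk (e.g. credit, hiring)
POTENTIALLY_HIGH_RISK = {
    "creditworthiness_scoring",
    "recruitment_screening",
    "biometric_categorisation",
    "education_admission",
}

def triage(use_cases: set[str]) -> list[str]:
    """Return the subset of use cases that warrant a formal high-risk assessment."""
    return sorted(use_cases & POTENTIALLY_HIGH_RISK)

flags = triage({"email_marketing", "recruitment_screening"})
# "email_marketing" is not flagged; "recruitment_screening" is
```

Anything the triage flags should then go through a proper assessment against the Act's actual criteria with legal advice.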

2. Compliance with High-Risk AI Requirements

High-risk AI systems must meet stringent requirements, including:

  • Risk Management Systems – regularly identify, assess, and mitigate risks.
  • Data Governance – ensure datasets used for training and operation are accurate, complete, and unbiased.
  • Transparency and Explainability – provide users with clear information on how the AI system operates and its limitations.
  • Human Oversight – implement mechanisms for human monitoring and control over AI-driven decisions.
  • Robustness and Security – safeguard against errors, cyberattacks, and misuse.

Action: Work with technology partners to ensure that their CRM and other integrated systems comply with these requirements, or adapt internal systems accordingly.

3. Prohibited Practices

Certain AI practices are banned under the Act, such as systems that manipulate human behaviour to cause harm or exploit vulnerabilities. For membership organisations, this means avoiding manipulative or deceptive AI-driven marketing or profiling techniques.

Action: Review marketing and customer engagement strategies to ensure they do not rely on prohibited AI practices.

4. Transparency Obligations

AI systems interacting with users (e.g., chatbots in CRM) must disclose that users are interacting with AI. If the AI involves emotion recognition [see note 1] or biometric categorisation [see note 2], businesses must inform users explicitly.

Action: Ensure transparency in AI-powered customer interactions and provide clear disclosures.
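In practice, a disclosure can be as simple as prefixing the first AI-generated reply in a chat session. A minimal sketch, assuming a hypothetical CRM chatbot integration (the function and message text are illustrative, not a prescribed wording):

```python
# Minimal sketch of an AI-disclosure wrapper for a CRM chatbot reply.
# The disclosure text and function name are illustrative assumptions;
# the point is that users are clearly told they are interacting with AI.

AI_DISCLOSURE = "You are chatting with an AI assistant."

def wrap_reply(reply: str, first_message: bool) -> str:
    """Prefix the first reply in a session with the AI disclosure."""
    if first_message:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply
```

The same principle applies to e-mails or other AI-generated customer communications: the disclosure should appear up front, not buried in a footer.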

5. Accountability and Auditing

Membership organisations must maintain documentation demonstrating compliance with the AI Act and may be subject to audits. This includes ensuring third-party AI vendors meet regulatory standards.

Action: Establish a compliance framework, document AI system usage, and work with vendors to ensure accountability.
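One practical starting point is an internal register of AI systems in use. The record below is a hypothetical sketch of the kind of fields such a register might capture; the field names are assumptions, not a format prescribed by the Act:

```python
# A hypothetical minimal "AI system register" entry for compliance
# documentation. Field names are illustrative assumptions about what an
# internal record might usefully capture.

from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str              # e.g. "CRM churn predictor"
    vendor: str            # third-party supplier, if any
    risk_category: str     # "high", "limited", "minimal" -- per your assessment
    human_oversight: str   # who reviews the system's outputs
    last_reviewed: str     # ISO date of the last compliance review

record = AISystemRecord(
    name="CRM churn predictor",
    vendor="ExampleVendor Ltd",
    risk_category="limited",
    human_oversight="Membership services team",
    last_reviewed="2025-01-15",
)
```

Keeping such records current also makes it far easier to respond to audits and to hold third-party vendors to account.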

6. Penalties for Non-Compliance

Non-compliance with the AI Act can result in significant fines (up to €35 million or 7% of annual global turnover, whichever is higher, for the most serious infringements such as prohibited practices; lower caps apply to other violations).

Action: Whilst these penalties will not apply to UK membership organisations unless their AI systems are placed on the EU market or affect individuals in the EU, it is recommended to prioritise compliance to avoid financial and reputational risks because of point (7) below.
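The "whichever is higher" structure means the turnover-based limb dominates for larger organisations. A quick illustrative sketch (the cap and percentage are passed in as placeholders; check the figures that apply to the infringement in question, and note this is not legal advice):

```python
# Illustrative sketch of a "whichever is higher" fine ceiling.
# cap_eur and turnover_pct are placeholders -- use the figures that
# apply to the infringement category at the relevant time.

def max_fine(cap_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Fine ceiling: the greater of a fixed cap and a share of global turnover."""
    return max(cap_eur, turnover_pct * global_turnover_eur)

# For a firm with €2bn turnover, a 6% limb (€120m) exceeds a €30m cap:
ceiling = max_fine(30_000_000, 0.06, 2_000_000_000)
```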

7. Cross-Border Implications

The AI Act applies to businesses operating outside the EU if their AI systems affect individuals within the EU.

Action: If your membership organisation operates internationally, ensure compliance with the Act for any AI systems used in the EU market.

Practical Steps for your Membership Organisation

  1. Audit Current AI Systems – review existing CRM and digital tools to identify potential regulatory risks.
  2. Engage Experts – collaborate with legal and technical teams to interpret and implement the AI Act’s requirements.
  3. Monitor Updates – the AI Act is still evolving. Stay updated on final provisions and implementation timelines.
  4. Partner with Compliant Technology Providers – ensure providers of CRM and AI technologies adhere to the AI Act.

By proactively addressing these areas, you can ensure compliance and build trust with your members.

If you would like more detailed information or personalised support and guidance, please contact support@intercloud9.co.uk.

Notes:
  1. Emotion recognition in AI refers to identifying and interpreting human emotions from data such as facial expressions, voice tone, body language, physiological signals, or text. Of these, text is the most likely avenue for membership organisations, because AI can use natural language processing (NLP) to detect emotion in written communications such as e-mails.
  2. Biometric categorisation in AI refers to assigning individuals to categories (for example by age, sex, or ethnicity) on the basis of their biometric data, such as facial features, voice, or gait.
