Frequently Asked Questions
Frequently asked questions regarding AI capabilities and the data used by these capabilities.
This page answers frequently asked questions, specifically about data handling, when using the capabilities provided by Matrix42 Intelligence. Technical documentation and related FAQs can be found in the dedicated areas for the respective capabilities.
I have received the Data Processing Agreement amendment, but I don’t want to sign it / my company does not comply. Can I still use the Matrix42 Intelligence cloud-based capabilities?
In this case, you will not be able to use the Matrix42 cloud-based AI capabilities, as these require further data processing using cloud services, and Matrix42 needs to comply with the conditions mandated by the party offering those services.
I have already signed a Data Processing Agreement with Matrix42. Was there any change?
The amendment to the existing Data Processing Agreement adds an additional subprocessor and the conditions that apply to it.
Which LLM/technology is used in relation to GenAI capabilities?
Matrix42 Intelligence provides these capabilities using Azure OpenAI. The Large Language Models (LLMs) currently used depend on the use case, e.g. GPT-3.5 Turbo or GPT-4o mini.
How is access to data controlled?
Access to data is controlled by the consuming solution, such as Matrix42 Enterprise, using its role-based security concept.
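As a hedged illustration only, the sketch below shows how such a role-based check might gate which records an AI capability is allowed to read; the role names, classes, and the `visible_articles` helper are hypothetical and not the actual Matrix42 Enterprise implementation.

```python
# Illustrative sketch only -- role names and helpers are hypothetical,
# not the actual Matrix42 Enterprise security implementation.
from dataclasses import dataclass, field


@dataclass
class User:
    name: str
    roles: set[str] = field(default_factory=set)


@dataclass
class KnowledgeArticle:
    title: str
    body: str
    required_role: str  # role needed to read this article


def visible_articles(user: User, articles: list[KnowledgeArticle]) -> list[KnowledgeArticle]:
    """Return only the articles the user's roles permit, before any AI processing."""
    return [a for a in articles if a.required_role in user.roles]


# Example: the AI capability only ever sees data the calling user is allowed to read.
agent = User("jdoe", roles={"ServiceDesk.Read"})
articles = [
    KnowledgeArticle("VPN setup", "…", required_role="ServiceDesk.Read"),
    KnowledgeArticle("HR salary bands", "…", required_role="HR.Confidential"),
]
print([a.title for a in visible_articles(agent, articles)])  # -> ['VPN setup']
```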
Which Azure Region is used?
The Azure region used depends on the data operation performed and the capability used.
| Capability | Data operation | Azure Region |
|---|---|---|
| Search Index | Persist | North Europe (Ireland) |
| GenAI - LLM | Compute | Sweden Central |
| ACOM storage | Persist | West Europe (Netherlands) |
| API Services | Compute | West Europe (Netherlands) |
| Logs | Persist | West Europe (Netherlands) |
What are the specifics on ‘Microsoft Abuse Monitoring’?
Matrix42 Intelligence uses Microsoft's Abuse Monitoring with the standard content filters. Matrix42 is committed to the ethical use of AI and therefore uses this functionality.
Which data sources are currently used?
At the moment, the capabilities provided via Matrix42 Intelligence rely on data provided in our Matrix42 Enterprise Service Management solution.
Will the uploaded data of customers be mixed?
The underlying cloud platform, M42Next, is implemented with multi-tenant capabilities that keep each customer's data separated per tenant.
At the moment, no cross-client data is used to train AI models.
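Purely as an illustrative sketch of the tenant-separation idea (the class and method names are assumptions, not the M42Next implementation), the snippet below scopes every read to a single tenant identifier so that one customer's records never appear in another customer's result set.

```python
# Hypothetical sketch of tenant scoping -- not the M42Next implementation.
from collections import defaultdict


class TenantScopedStore:
    """Keeps each tenant's records in a separate bucket and only answers
    queries for the tenant that was named explicitly."""

    def __init__(self) -> None:
        self._data: dict[str, list[dict]] = defaultdict(list)

    def add(self, tenant_id: str, record: dict) -> None:
        self._data[tenant_id].append(record)

    def query(self, tenant_id: str) -> list[dict]:
        # Only the requesting tenant's bucket is visible; other tenants'
        # data is never mixed into the result.
        return list(self._data[tenant_id])


store = TenantScopedStore()
store.add("customer-a", {"ticket": "Printer offline"})
store.add("customer-b", {"ticket": "VPN issue"})
print(store.query("customer-a"))  # -> [{'ticket': 'Printer offline'}]
```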
Where can I obtain information on data processing/GDPR/information security?
Matrix42 Intelligence uses data that was previously uploaded (ingested) from the Matrix42 Enterprise solution. Details can be found in the dedicated help areas of the respective capabilities.
Topics related to GDPR and overall data processing can be found in our Data Processing Agreements or any amendments.
I am concerned about our data being used for model training; how do you address data handling?
As the European Choice in Service Management, we place the highest value on data protection, data security, and transparency. Our contractual terms (including the DPA) allow data to be used solely for improving and further developing our services, but always in strict compliance with the GDPR and the principles of privacy by design and data minimization.
Specifically, this means:
- Anonymization / Data minimization: Before data is used for internal development or training purposes, it is anonymized or pseudonymized so that we cannot draw any conclusions about identifiable individuals or customer-specific content (a minimal sketch of this idea follows after this list). Upon request, we can explicitly document this in the contract.
- Training of models only with anonymized data: We use only anonymized data to train general models. Model improvements are made exclusively on the basis of previously anonymized or aggregated information. Therefore, there is no mixing between tenants and no unwanted flow of knowledge into models that are available to other customers. All development also takes place exclusively within our own isolated environment.
- No disclosure to third parties: There is no disclosure of data to third parties or any other external parties for training or other purposes. Any “commercial use” or other transfer is explicitly excluded.
- Why is storage for training purposes still mentioned? The reference in the documents does not mean that we use raw data or personal content for model training. It simply means that, as is common in the industry, we may use certain anonymized telemetry data, usage metrics, or aggregated information to improve features, detect errors, or optimize models within the platform. Even anonymization itself is considered a form of processing under the GDPR; therefore, we require a clear contractual basis for this, or the corresponding wording in the DPA or its addendum.
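As referenced in the first point above, here is a minimal sketch of the pseudonymization and data-minimization idea; the field names, salt handling, and `minimize` helper are illustrative assumptions, not the actual Matrix42 processing pipeline.

```python
# Minimal pseudonymization sketch -- field names and salt handling are
# illustrative assumptions, not the actual Matrix42 processing pipeline.
import hashlib
import os

SALT = os.environ.get("PSEUDONYMIZATION_SALT", "example-salt").encode()


def pseudonymize(value: str) -> str:
    """Replace an identifier with a salted, irreversible hash."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:16]


def minimize(record: dict) -> dict:
    """Keep only the fields needed for feature improvement and
    pseudonymize anything that could identify a person."""
    return {
        "user": pseudonymize(record["user_email"]),  # no raw e-mail is kept
        "feature": record["feature"],
        "duration_ms": record["duration_ms"],
        # free-text content is dropped entirely (data minimization)
    }


raw = {"user_email": "jane.doe@example.com", "feature": "AI Assist",
       "duration_ms": 420, "comment": "contains personal details"}
print(minimize(raw))
```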
What is the classification of Matrix42 regarding the EU AI Act?
Matrix42 is a provider of AI systems, but not of foundation (GPAI) models. This means that:
- Matrix42 develops AI-powered products and systems (e.g. AI Knowledge, AI Actions, AI Workflow, AI Assist) using existing foundation or open-source models such as Mistral, Phi-4, Azure OpenAI, or Llama 3.
- Matrix42 does not publish or distribute those base models as standalone products.
It should be noted that the documentation obligations concerning model training (datasets, weights, methodology) rest with the original model providers (e.g., Microsoft, Meta, Mistral). Matrix42's responsibility is to ensure compliance, transparency, and safe integration when using these models inside Matrix42 solutions.
| Role | Description |
|---|---|
| Provider (AI System Supplier) | Matrix42 develops and releases AI-based systems to the market, including AI Knowledge, AI Actions, AI Workflow, and AI Assist. This is considered the primary role. |
| Deployer (AI System User / Implementer) | Matrix42 also uses AI internally, for example in development, marketing, or operations, and therefore has to fulfil the obligations of a deployer. |
Have the solution and its capabilities been assessed from a risk perspective in accordance with the EU AI Act?
Matrix42 has performed a risk classification of provided capabilities, aligned with the EU AI Act.
| AI capability | Description of AI capability | Potential impact of error | Risk classification | Validation points |
|---|---|---|---|---|
| IT Asset Management (Asset Discovery / Dependency Mapping / Predictions) | Detects assets, maps dependencies, and predicts lifecycle trends. | Incorrect data could lead to wrong IT or financial decisions. | Limited Risk | AI assists decision-making; no autonomous actions on infrastructure. |
| Semantic Search / Recommendations | Performs semantic search, document recommendations, and content relevance ranking. | Misleading or irrelevant suggestions. | Limited Risk | Must show content sources and enable user feedback (“Was this helpful?”). |
| Workflow Automation & Escalation AI | Automates escalation decisions and routing in IT/business processes. | Missed critical incidents or wrong action execution. | Potentially High Risk | Autonomous decision-making; include human override (“Stop / Approve”) and detailed logging of actions. |
| Remote Assistance / FastViewer with AI Support | AI suggests diagnostics or next steps during remote sessions. | Incorrect diagnosis could cause user or system errors. | Limited to Moderate Risk | Ensure human agent approval for all suggested actions. |
| Self-Service Portal + Knowledge Discovery | Provides automated answers, FAQs, and self-help guidance. | Inaccurate responses or poor UX; no direct operational impact. | Limited Risk | Clearly label AI-generated answers; restrict sensitive queries. |
| Predictive Maintenance / Anomaly Detection | Detects anomalies and predicts failures across devices or systems. | Missed warning could result in operational disruption. | Limited Risk | Functions as monitoring support, not autonomous control. |
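To illustrate the human-override validation point listed for the potentially high-risk workflow automation capability, the sketch below gates an AI-suggested action behind an explicit approval step and logs both the suggestion and the outcome; the function names and log messages are hypothetical and not part of the Matrix42 product.

```python
# Hypothetical human-in-the-loop approval gate -- names are illustrative,
# not the actual Matrix42 workflow engine.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-action-audit")


def execute_with_approval(action: str, approve) -> bool:
    """Run an AI-suggested action only after an explicit human decision,
    and keep a detailed log of both the suggestion and the outcome."""
    log.info("AI suggested action: %s", action)
    if approve(action):  # human clicks 'Approve'
        log.info("Action approved and executed: %s", action)
        return True
    log.info("Action stopped by human override: %s", action)
    return False


# Example: the operator declines the escalation, so nothing is executed.
executed = execute_with_approval(
    "Escalate incident INC-1234 to priority 1",
    approve=lambda a: False,
)
print("executed:", executed)
```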