Artificial Intelligence and Machine Learning Statement
At Phrase, artificial intelligence (AI) plays an integral role in our services. Our commitment to leveraging cutting-edge AI technologies ensures that we deliver innovative, efficient, and personalized solutions tailored to our customers’ needs. By integrating AI into our services, we provide our customers with enhanced value, improved experiences, and transformative results that keep them ahead in a rapidly evolving digital world.
This Artificial Intelligence and Machine Learning Statement (“Statement”) outlines our commitment to responsibly using AI-enabled solutions and content to train our AI models, with a focus on transparency, security, and ethical AI practices. Our mission is to enhance our customer experience by delivering seamless, insightful, and rewarding technology solutions.
We recognize that trust is the foundation of our relationship with customers, and we are committed to keeping them fully informed about how their content is used. Our approach emphasizes ethical practices, strong data security, and compliance with applicable regulations, ensuring our AI-enabled solutions are not only powerful and efficient but also respect the privacy and confidentiality of Customer Content.
This Statement is intended to ensure Phrase’s compliance with the transparency obligations set forth in the European Union Artificial Intelligence Act (AI Act). Although the AI Act will be fully enforceable as of 2 August 2026, Phrase is committed to creating and using its AI systems in a manner that is safe, ethical, and therefore aligned with the AI Act and other applicable legal regulations.
1. Definitions
“AI Act” means Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).
“AI system” means, according to the AI Act, a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
“Customer Content” means any content that Phrase’s customer or its users upload to, create, or translate (including the translated content) within the Phrase solutions.
“GDPR” means Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
“QPS” means Quality Performance Score.
“LQA” means Language Quality Assessment.
“MQM” means Multidimensional Quality Metrics.
“MT” means machine translation.
“OpenAI” means OpenAI, LLC.
2. Introduction
The Phrase Localization Platform represents an AI system within the meaning of the AI Act and Phrase, as its provider, is responsible for the compliance of the Phrase Localization Platform with the AI Act.
None of the AI-enabled solutions used within the Phrase Localization Platform engages in any practices prohibited under the AI Act, such as manipulating users, performing social scoring, or exploiting users’ vulnerabilities. Further, none of the AI-enabled solutions used within the Phrase Localization Platform qualifies as a high-risk AI system under the AI Act, as none of them is used for activities such as critical infrastructure management, biometric identification, or other high-stakes applications that would require special oversight in accordance with the AI Act.
For years, customer content has been essential in improving translation models – long before AI-driven solutions became the norm. Historically, translation engines, including statistical and rule-based machine translation models, have relied on customer translation memories (TMs) and linguistic assets to enhance translation accuracy, fluency, and overall quality. This long-standing practice has been fundamental to the development and continuous refinement of translation technology.
3. AI-enabled solutions at Phrase
Below, we provide a detailed overview of the individual AI features that form an integral part of the Phrase Localization Platform. In line with our commitment to transparency, we aim to offer clear and comprehensive information about how these features function, the processes involved in training our AI models, and how Customer Content is utilized in these processes.
3.1 Are we engaging Phrase for a service or technology solution that uses artificial intelligence or machine learning?
We offer AI-enabled solutions that provide customers with full control over their activation, whether applied across all projects or selectively for specific ones.
Our dedicated AI research team, working alongside our development teams, reflects our strong commitment to continuously advancing our AI and machine learning (ML) capabilities. The key solutions outlined below represent either features currently available or those actively under development.
We welcome the opportunity to engage in a follow-up discussion to gain deeper insights into your specific use cases and any concerns you may have. Your feedback is invaluable in helping us refine our solutions and prioritize considerations for future releases.
3.2 What is the AI/ML solution intended to do and how will customers use the AI/ML solution? (e.g., what decisions or actions will the AI/ML solution drive?)
In the section below we list the individual AI-enabled solutions at Phrase. In general, our goal is to provide solutions that optimize the localization process to facilitate content globalization at scale. We aim to offer both fully and partially automated solutions that can drive commercial optimizations but also tooling that can improve the usability and human experience of our platform. This includes automated translation, as well as mechanisms such as Phrase QPS which help identify and route the lowest quality content for human review.
The following is a summary table of our AI-enabled features; more detailed explanations are included below:
(a) Text-generating AI: The following table details AI features which output or generate textual content or translation:
| Feature Name | Feature Summary | Trained with Customer Content? |
| --- | --- | --- |
| Auto Adapt | Automated post-editing and adaptation of text | NO |
| AI Actions | A subset of automated editing operations in the Strings Editor platform | NO |
| Phrase NextMT | Machine translation; automated translation of content via Phrase’s MT engine | NO |
| Phrase Next GenMT | Machine translation; automated translation of content via Phrase-proprietary model interaction with OpenAI | NO |
| Phrase CustomAI (Custom NextMT) | Training and evaluation of customer-specific, personalized MT engines | YES |
| Phrase CustomAI (Linguistic Asset Curation) | Uses automated tooling to curate linguistic assets and datasets for custom MT training | YES (Uses QPS) |
(b) Other AI features: The following table details AI features which do not generate customer content or text:
| Feature Name | Feature Summary | Trained with Customer Content? |
| --- | --- | --- |
| MT Autoselect | Automatically selects the most appropriate MT engine for a job | YES |
| Phrase QPS | Automatically scores translated content for accuracy (0-100) | YES |
| Auto LQA | Provides full or partial automation of LQA (Language Quality Assessment); evaluates translation quality | YES |
| Non-Translatables | Automatic detection of non-translatables | YES |
| Automated Linguist Selection | Automatic linguist recommendations based on previous similar jobs | YES |
A. Phrase Language AI, MT Autoselect
Our MT Autoselect is an AI-based mechanism for selecting the most appropriate MT engine for a particular job. Customers can use it with or without a defined list of preferred engines in order to maximize output translation quality and minimize post-edit cost downstream.
What is the goal of this feature? We want to enable customers to access a range of MT engines and dynamically switch between them, where one might produce higher quality output than another on Customer Content.
Example: A customer has assigned three MT engines in their MT profile and wants to make sure they get the engine with the best quality output. The system looks at their content, reviews recent post-edit data on the three MT engines and recommends one of those three engines as the best candidate for translation.
How does it work? We scan Customer Content for keywords that help the AI identify the domain of the content. We also assess the quality of recent customer output for that domain across different MT engines. The system then determines which MT engine achieves the best quality according to data.
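For illustration, the selection step described above can be sketched as follows. This is a minimal, hypothetical example: the engine names, the post-edit-distance figures, and the `autoselect` function are assumptions for demonstration, not Phrase’s actual implementation.

```python
# Hypothetical sketch of MT Autoselect: recommend the engine with the
# lowest average recent post-edit distance for the detected domain.
# All data values below are mocked for illustration.
recent_edit_distance = {
    # (engine, domain) -> average post-edit distance; lower means
    # linguists had to change less of the raw MT output.
    ("EngineA", "legal"): 0.18,
    ("EngineB", "legal"): 0.09,
    ("EngineC", "legal"): 0.22,
}

def autoselect(engines, domain):
    """Recommend the engine that required the least post-editing
    on recent content in this domain."""
    return min(engines, key=lambda e: recent_edit_distance[(e, domain)])

best = autoselect(["EngineA", "EngineB", "EngineC"], "legal")
```

Under these mocked figures, the engine with the lowest post-edit distance in the detected domain is recommended.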
How do we use Customer Content? The AI briefly scans Customer Content in order to identify the domain. It also reviews other Customer Content on a regular basis to evaluate the amount of post-editing that was done, which helps to establish the performance of each MT engine.
How do we protect Customer Content? No Customer Content is retained or shared during this process whatsoever.
B. QPS, Quality Performance Score
Phrase QPS is a quality estimation system that predicts the MQM score that might be given to a particular piece of content in an LQA process. It outputs a score between 0-100 that helps customers to plan resources, post-edit workload and other processes by providing quality transparency at scale.
What is the goal of this feature? We aim to provide a scalable solution for broad insight into translation quality. This is particularly important in scenarios where machine translation is preferred and quality is not guaranteed. Specifically, customers can use this feature in automated workflows to decide where to route content and ensure that the lowest quality content is captured and sent for post-editing or other appropriate workflow steps.
Example: A customer has a limited budget for post-editing and relies on machine translation for a majority of their content. Using Phrase QPS, they can automatically identify the worst scoring content and route it for human post-editing.
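The routing workflow in this example can be sketched as follows. This is a hypothetical illustration: the `qps_score` function here is a mocked stand-in for the real Phrase QPS model (which is internal to Phrase), and the threshold value is an assumption.

```python
# Hypothetical sketch of QPS-based routing: segments scoring below a
# customer-chosen threshold are sent for human post-editing.
QPS_THRESHOLD = 60  # assumed cut-off; real thresholds are customer-defined

def qps_score(segment: str) -> int:
    # Mocked stand-in for the QPS model (outputs 0-100, higher is better).
    mock_scores = {"Guten Tag": 92, "Das ist falsch": 35}
    return mock_scores.get(segment, 75)

def route(segments):
    """Split translated segments into auto-accepted and human-review queues."""
    accepted, review = [], []
    for seg in segments:
        (accepted if qps_score(seg) >= QPS_THRESHOLD else review).append(seg)
    return accepted, review

accepted, review = route(["Guten Tag", "Das ist falsch"])
```

The lowest-scoring content ends up in the review queue, which is the behavior the automated workflows described above rely on.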
How does it work? The AI behind Phrase QPS is an internal, proprietary model trained on both publicly available datasets and Customer Content. Specifically, we train it with examples of prior LQA assessments that have been generated on the Phrase Platform, as well as examples of MT post-edits that have been generated on the Phrase Platform. This content includes the original content, the resulting translation and the LQA assessments of that translation. We show the AI multiple examples of translations and the MQM scores that resulted from that translation such that it learns to predict MQM scores on new content. When the trained AI is used in practice, we supply it with both the source and the translated content and the system outputs a score between 0-100.
How do we use Customer Content? We use Customer Content to train and periodically retrain the AI, which outputs an estimate of the Multidimensional Quality Metrics (MQM) score that the translation would receive were it sent through human LQA for review. Customer Content is passed through the model to generate a score for that content.
How do we protect Customer Content? Whilst the model is trained with Customer Content, its only function is to output a score between 0-100; no Customer Content is retrievable from the model by anyone. Customer Content that is used to train QPS is also not retained; we implement a regular deletion of training data in the Phrase AI Research Team to ensure compliance with both GDPR and our privacy notice and data retention policy.
C. Auto LQA
Auto LQA allows customers to gain deeper insight into quality issues through automated Language Quality Assessment. This can be used independently of human review or in concert with human LQA through Validation mode. The system provides LQA output in line with MQM in a similar fashion to the human process.
What is the goal of this feature? Where QPS provides a quality score as an estimate of MQM, Auto LQA is intended to provide more granular information on translation quality issues. It is designed to replicate the human LQA process in that it identifies quality issues and assigns error categories and severities to those errors. It also provides a minimal explanation of the detected errors. It can be used either as a means of providing broader quality feedback in complement to human LQA or in ‘Validation Mode’, in which automated annotations are presented to a human linguist for confirmation and review. In this way the feature is intended to provide quality-of-life improvement to the human linguists and optimize and scale the LQA process.
Example: When a subset of content with quality issues is identified by QPS, a customer can use Auto LQA to prefill LQA annotations at scale for review by a human linguist, who can confirm or correct the detected errors. The customer can review reports about the quality issues found and take action to remedy the affected translations.
How does it work? The system currently uses OpenAI’s ChatGPT to evaluate translated content. The newest production model is an instance of ChatGPT that has been fine-tuned specifically for the task using Customer Content.
How do we use Customer Content? The customer’s content for translation and the translated output from the machine translation engine are passed through Auto LQA via OpenAI. The output of the system is an evaluation in line with MQM that consists of error categories and severities and an explanatory comment from the system.
In addition, we use Customer Content resulting from previous human LQA assessments, including customer source content and translated output in order to fine-tune an instance of ChatGPT specifically for the evaluation task, with the goal of improving the accuracy and utility of the output.
How do we protect Customer Content? We maintain an enterprise-level agreement with OpenAI that ensures that Customer Content is not stored with OpenAI or used in the training of OpenAI’s own models and products. Customer Content used in fine-tuning of the ChatGPT instance behind Auto LQA is not retained by Phrase; similar to our treatment of content used in training of QPS we implement a regular deletion of training data in the AI Research Team.
D. Auto Adapt
Auto Adapt is an automated post-editing and content adaptation solution that adjusts text to ensure consistency in areas such as terminology, formality level, and tone of voice. Customers can provide additional instructions to make customized adaptations of the text, such as style changes.
What is the goal of this feature? Auto Adapt is intended to offer an automated alternative to post-editing that better ensures consistency in style and formality and adjustment to customer-specified style.
Example: A customer has translated content using a neural, segment-level MT system. The result has some inconsistencies in style and tone of voice and some linguistic errors. The customer runs the content through Auto Adapt in order to automatically iron out inconsistencies and improve the overall quality of the text.
How does it work? The system currently uses multiple OpenAI models to adjust translated content, using internally managed model interactions and configurations provided by the customer.
How do we use Customer Content? Customer’s content (either monolingual text or both the text for translation and the translated output) is passed through Auto Adapt via OpenAI. The output of the system is a revised version of the original text. Auto Adapt is not fine-tuned or trained on Customer Content.
How do we protect Customer Content? We maintain an enterprise-level agreement with OpenAI that ensures that Customer Content is not stored with OpenAI or used in the training of OpenAI’s own models and products.
E. AI Actions
AI Actions is a set of features within the Phrase Strings Editor that enables automated adjustment of text, either to refine it or to adjust the tone of the output.
What is the goal of this feature? AI Actions allows the editor working in the Phrase Platform to either automatically refine the translated output or to adjust the tone of such output. Specifically, the features allow editors to automatically rephrase the text, improve grammar, shorten the text, or adjust its tone to reflect a business, academic, casual, or technical style.
Example: An editor is working on a translation for a UI element that can only contain text of a certain length. The editor can use the Shorten action to reduce the length of the output to suit.
How does it work? The system currently uses OpenAI’s ChatGPT models to modify and adapt the text in the requested manner. Behind each action is a prompt that is sent to OpenAI with an instruction to generate the appropriate output. These prompts are internal to Phrase and not accessible by the customer or third parties other than OpenAI.
How do we use Customer Content? AI Actions relies solely on internal prompts to ChatGPT models. Beyond the textual data that is sent to OpenAI through use of the feature, no Customer Content is used; in particular, no Customer Content is used in fine-tuning the ChatGPT model.
How do we protect Customer Content? We maintain an enterprise-level agreement with OpenAI that ensures that Customer Content is not stored with OpenAI or used to train OpenAI’s own models and products.
F. Identification of Non-translatables
An AI system that is tasked with identifying segments that contain items that should not be translated such as company/product names.
What is the goal of this feature? This feature is intended to optimize translation workflows by identifying text that does not require translation and can therefore be skipped. This allows customers to avoid corrective post-editing, particularly of machine translated content.
How does it work? The AI is trained with Customer Content and learns to recognize text items that are non-translatable. It then outputs the likelihood (represented by a number between 0 and 1) that the segment in its entirety is non-translatable.
Example: A customer has some content in which a product name appears; they don’t want to pass this to machine translation as it may result in unnecessary translation of the item and subsequent post-edit cost in reinstating the product names. They use this AI feature to capture items that are non-translatable and block them from machine translation.
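The blocking behavior in this example can be sketched as follows. This is a hypothetical illustration: the `nt_likelihood` function mocks the trained detector (which is internal to Phrase), and the threshold value is an assumption.

```python
# Hypothetical sketch of non-translatable filtering: segments whose
# non-translatable likelihood exceeds a threshold are excluded from MT.
NT_THRESHOLD = 0.8  # assumed cut-off for illustration

def nt_likelihood(segment: str) -> float:
    # Mocked stand-in for the trained classifier (real output is 0-1).
    return 0.95 if segment == "PhraseApp v3.2" else 0.1

def filter_for_mt(segments):
    """Return only the segments that should be sent to machine translation."""
    return [s for s in segments if nt_likelihood(s) < NT_THRESHOLD]

to_translate = filter_for_mt(["PhraseApp v3.2", "Click here to continue"])
```

Segments flagged as likely non-translatable (here, the product name) never reach the MT engine, avoiding the post-edit cost described above.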
How do we use Customer Content? Customer Content is used in the training and periodic retraining of the AI feature. The AI reviews Customer Content and outputs a number that represents the likelihood that the content contains non-translatables, which can be used to flag content. Customer Content is never retained by the AI, and the model is not capable of outputting anything other than this likelihood score.
How do we protect Customer Content? Whilst Customer Content is used to train this AI model, the system cannot retain or output that content. It is not possible to retrieve Customer Content from the model. Similar to our treatment of content used in training of QPS we implement a regular deletion of training data in the Phrase AI Research Team.
G. Phrase NextMT, Phrase’s in-house translation engine
Phrase NextMT is a machine translation engine, which can be used to automatically translate content in a number of languages. It is similar in nature to alternative third party MT engines save that it uses Customer Content at translation time to optimize the quality of its output. It is built and maintained internally at Phrase.
What is the goal of the feature? Customers can improve the quality and speed of localization projects by providing linguists with machine-translated content optimized for post-editing by professional translators. It is also feasible to use Phrase NextMT to automatically translate larger volumes of content at speed.
Example: A customer has a large volume of content to translate on a limited budget; they can use Phrase NextMT to translate the entirety of that content and route it to humans for post-editing.
How does it work? The Phrase NextMT engine is tailored to professional translations, including support for tag placement, advanced glossary integration (including morphological inflection), and translation memory adaptation (fuzzy matches). We train the Phrase NextMT engine with publicly available data for a range of languages. We then use the customer’s translation memory to improve the translation, e.g. in order to better align with the customer’s content style or branding.
How do we use Customer Content? Phrase NextMT is trained using publicly available and appropriately licensed datasets, and a limited number of commercially obtained proprietary datasets. We do not use any Customer Content, content or translations in the training or retraining of this AI model.
Customers can however leverage their translation memories and other linguistic assets during the translation process to improve the quality of the output. This content is not retained in any way and is also not used for training of the AI model.
How do we protect Customer Content? Aside from the exclusion of Customer Content in training, customer assets (translation memories, glossaries etc) used in the translation process are only accessible by the individual customer. No Customer Content is retained by the AI model or is accessible by other customers.
H. Phrase Next GenMT, Phrase’s LLM-based translation engine
Phrase Next GenMT is an LLM-based, higher quality alternative to Phrase NextMT. It is based on and leverages OpenAI’s ChatGPT models to provide high quality, fluent output.
What is the goal of this feature? Phrase Next GenMT is the best performing MT engine developed by Phrase; the goal is to provide automated translation at the highest possible quality. In a similar fashion to Phrase NextMT we allow customers to leverage translation memories and other assets to optimize the quality of the output translation. This can then be used as a standalone translation solution or in concert with human post-editing for high quality results.
Example: A customer has a volume of content for translation on a limited budget and with high quality expectations; Phrase Next GenMT is used to produce a first translation, which is then sent for human post-editing. The customer increases quality and stylistic alignment by providing access to their translation memory.
How does it work? Phrase Next GenMT interacts with OpenAI’s ChatGPT models through a prompt developed internally at Phrase. It also optionally provides ChatGPT with ‘few-shot’ examples of similar translations retrieved from the customer’s translation memory.
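The few-shot mechanism can be sketched as follows. This is a hypothetical illustration only: the prompt wording, the `build_prompt` function, and the example translations are assumptions for demonstration, not Phrase’s actual internal prompt.

```python
# Hypothetical sketch of few-shot prompting: translation-memory matches
# are prepended to the translation request as style examples.
def build_prompt(source: str, tm_matches: list[tuple[str, str]]) -> str:
    """Assemble a prompt from TM fuzzy matches plus the new source text."""
    examples = "\n".join(
        f"Source: {src}\nTranslation: {tgt}" for src, tgt in tm_matches
    )
    return (
        "Translate the source text, following the style of the examples.\n"
        f"{examples}\n"
        f"Source: {source}\nTranslation:"
    )

prompt = build_prompt(
    "Add a new key",
    [("Delete a key", "Schlüssel löschen"), ("Edit a key", "Schlüssel bearbeiten")],
)
```

Because the examples come from the customer’s own translation memory, the model’s output tends to follow the customer’s established terminology and style.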
How do we use Customer Content? Currently, we rely solely on a combination of prompting and few-shot example retrieval to achieve high quality translation. We do not otherwise use Customer Content in the training or fine-tuning of Next GenMT. The content for translation together with any applicable examples from the translation memory are sent to OpenAI and a translation is returned.
How do we protect Customer Content? We do not use Customer Content in the training or fine-tuning of Next GenMT. When a customer requests use of this feature we send content and examples to OpenAI in order to generate the translation. We maintain enterprise-level agreements with OpenAI that no Customer Content is retained or used by OpenAI in the training or development of any of their models or products.
I. Phrase CustomAI, Dataset creation and custom engine training
Customers have the ability to create custom training data from their translation memories and use them to generate custom Phrase NextMT engines for specific use cases. Customers are able to then utilize a custom-trained “customer-only Phrase NextMT engine” that no other customers will be able to access.
What is the goal of this feature? In certain circumstances it can be beneficial to a customer to train a personalized MT engine for a specific use case. We provide tooling to enable customers to create datasets from their own content and train their own individualized instances of Phrase NextMT. We also provide analytics that demonstrate the success of model training and the utility of the resulting engine.
Example: A customer with translation memories has a specific use case where general purpose MT does not provide satisfactory quality. The customer can use CustomAI to clean and filter their translation memories, create a dataset from that content and automatically train their own instance of Phrase NextMT that is specialized to their requirements.
How do we use Customer Content? During the dataset creation and cleaning process and subsequent training of the Phrase NextMT engine, the system has access to the customer’s translation memory. Once trained, access to the resulting engine and datasets is restricted such that they can be accessed and used only by the relevant customer. No other Customer Content is used in the training of these custom MT engines.
How do we protect Customer Content? Customer Content and the custom-trained engine is uniquely accessible to that customer. No other Customer Content is used in the training of custom NextMT engines.
J. Phrase CustomAI; Linguistic Asset Curation
Automated tooling in Custom AI can be used to filter and clean customer translation memories and other assets.
What is the goal of this feature? Our linguistic asset curation tooling is intended to allow customers to filter and optimize their linguistic assets for use in other areas of the localization process such as custom NextMT model training or as ‘few-shot’ examples to improve the quality of output of Phrase Next GenMT.
Example: A customer has generated a large translation memory and wants to clean it, either to improve the efficacy of a translation workflow that skips the translation step for content similar to material already translated, or to optimize the quality of output from Phrase Next GenMT. They use the curation tooling to remove repetitions and low-quality translations, resulting in a refined translation memory.
How does it work? Our asset curation tooling uses a combination of basic, rule-based (non-AI) filters, Phrase QPS (detailed above), and other tools. Phrase QPS is used here, for example, to score every item in the translation memory and remove the lowest-scoring entries (for example, the bottom 10%).
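The bottom-percentile filtering step can be sketched as follows. This is a hypothetical illustration: the scores are mocked (the real QPS model is internal to Phrase), and the `curate` function is an assumption for demonstration.

```python
# Hypothetical sketch of QPS-based TM curation: rank entries by score
# and drop the lowest-scoring fraction (e.g. the bottom 10%).
def curate(tm_entries, scores, drop_fraction=0.10):
    """Keep translation-memory entries above the bottom `drop_fraction`."""
    ranked = sorted(zip(scores, tm_entries))        # lowest score first
    n_drop = int(len(ranked) * drop_fraction)
    return [entry for _, entry in ranked[n_drop:]]

entries = [f"segment-{i}" for i in range(10)]
scores = [55, 90, 72, 88, 30, 95, 60, 81, 77, 66]  # mocked QPS scores
kept = curate(entries, scores)
```

With ten entries and a 10% cut, only the single lowest-scoring segment is removed, yielding the refined translation memory described above.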
How do we use Customer Content? Beyond Phrase QPS (which is discussed above) no other AI features are used in our asset curation tooling. Customer Content is isolated per customer and the tooling does not retain or share Customer Content. The result of filtering is again isolated and accessible only by the customer.
How do we protect Customer Content? The results of asset curation and related AI-based filtering are only accessible by the customer.
K. Automated linguist selection
Our automated linguist selection allows customers to automate part of the decision of project assignment by recommending a linguist with past experience working on similar documents.
What is the goal of this feature? This AI-based feature allows customers to improve the quality and speed of localization projects by optimizing linguist assignments and reducing repetitive project management decisions.
Example: A customer has a new project for translation of content from a specialized domain. The AI reviews the input content and identifies the domain automatically. The feature reviews similar projects and outputs a recommendation for assignment to a linguist with the most experience from projects in a similar domain.
How does it work? The AI is trained to categorize documents by content type and, given a new piece of content, to recognize the general category to which the new project belongs. In this manner the feature retrieves the list of linguists who last worked on similar content and produces a recommendation for assignment.
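The categorize-then-match logic can be sketched as follows. This is a hypothetical illustration: `classify_domain` mocks the trained document classifier, and the job records and linguist names are invented for demonstration.

```python
# Hypothetical sketch of automated linguist selection: classify the new
# job's domain, then recommend linguists who worked in that domain before.
past_jobs = [
    {"linguist": "L1", "domain": "medical"},
    {"linguist": "L2", "domain": "gaming"},
    {"linguist": "L1", "domain": "medical"},
]

def classify_domain(text: str) -> str:
    # Mocked stand-in for the trained content-type classifier.
    return "medical" if "patient" in text.lower() else "general"

def recommend(text: str):
    """List linguists whose past assignments match the detected domain."""
    domain = classify_domain(text)
    return [job["linguist"] for job in past_jobs if job["domain"] == domain]

rec = recommend("Patient dosage instructions")
```

A linguist who appears repeatedly for the detected domain naturally ranks as the strongest recommendation.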
How do we use Customer Content? Customer Content is used in the training of this AI to enable it to categorize the domain of input content. The system reviews new content and produces a recommended list of linguists based on prior work on a similar domain.
How do we protect Customer Content? Whilst Customer Content is used in the training of the AI model, the model is not capable of generating Customer Content. No Customer Content can be retrieved from the model. Similar to our treatment of content used in training of Phrase QPS, we implement a regular deletion of training data in the Phrase AI Research Team.
3.3 Will Phrase custom develop AI/ML models exclusively for customers or will they provide generic AI/ML models?
We offer both. Customers may utilize any of the 30+ generic engines as well as additional custom MT models. Custom MT models are tailored for the particular customer, using Customer Content. Please refer to Sections G and I in Section 3.2 for more details.
MT engines integrated via customer’s own API key
We have no control over custom model training of MT engines provided by third parties with whom you maintain a direct relationship, meaning that you use your own API key to integrate the MT engine with the Phrase Localization Platform.
4. General Principles of AI Governance
AI governance encompasses the frameworks, policies, and practices that guide the ethical and responsible development, deployment, and management of AI-enabled solutions. At Phrase, we recognize that trust in AI-enabled solutions is built on a foundation of transparency, accountability, and respect for the rights and reasonable expectations of all stakeholders.
4.1 Ethical Considerations and Privacy
In developing and deploying our AI-enabled solutions, we are guided by principles that respect individual rights, promote fairness, and preserve the confidentiality of the Customer Content.
Phrase ensures that the amount of data used for training of its AI models is minimized to the extent strictly necessary for the purpose of enhancing the AI-enabled solutions and delivering quality services to customers. In this connection, Phrase also strictly limits the retention of training data. Where possible, Phrase uses anonymized, pseudonymized, or aggregated data for training of its AI models. Phrase verifies that the outputs of its AI models are relevant and do not reveal personal data contained in the training data.
We ensure that any Customer Content used for training of our AI models (as described in more detail in Section 3 above) that may include personal data is processed in accordance with the GDPR and other applicable data protection laws. For more details on data processing at Phrase, please see our Privacy Notice.
4.2 Human Oversight and Control
A dedicated AI Research Team at Phrase is responsible for conducting regular quality assurance checks on the outputs of our AI-enabled solutions. The whole procedure is overseen by the Legal Team to ensure compliance with applicable laws. These checks ensure that the outputs meet proper standards of accuracy, fairness, and compliance with both internal policies and applicable legal regulations.
To support transparency and continuous improvement, users and employees may flag potential issues for further review by our AI Research Team. Once flagged, these issues undergo a structured review process, allowing our team to identify root causes, address any shortcomings, and implement necessary adjustments to improve the system’s performance and trustworthiness.
By fostering an open feedback loop, we aim to empower all stakeholders to contribute to the reliability and ethical use of our AI-enabled solutions.
4.3 AI Literacy
In accordance with the AI Act, at Phrase we are committed to ensuring that all our personnel involved in the development, deployment, and supervision of AI systems possess an adequate understanding of AI concepts, risks, and best practices. Phrase’s personnel are obliged to participate in training programs covering fundamental AI concepts and principles of the AI Act and compliance therewith. This ensures our teams have the knowledge and skill sets required to implement and oversee AI-enabled solutions responsibly.
As the AI Act and other relevant regulations evolve, we are committed to updating our training materials, policies, and documentation to reflect the most current standards.
Conclusion
We reserve the right to revise this Statement as necessary, recognizing that understanding and interpretation of the AI Act will continue to evolve over time.
If you have any questions or requests regarding this Statement, please contact us at privacy@phrase.com.
Last updated: 13 February 2025