AI Acceptable Use Policy

Revision: 1-1 Revision Date: 07/11/2025

1. INTRODUCTION

1.1 About the AI System

Applied offers AI-powered solutions to promote fairer, more consistent, and more efficient hiring decisions. Applied's solutions include the following:

Suggesting skills-based and competency questions and scoring rubrics. Hiring teams provide inputs such as job descriptions, key job attributes, and responsibilities. Using OpenAI (ChatGPT), Applied analyses these inputs to suggest skills to assess throughout the hiring process, tailored sift and interview questions to evaluate applicants' fit for the role, and scoring rubrics to ensure fair and consistent evaluation and shortlisting.

Parsing CVs/Résumés. Using Anthropic (Claude) via Amazon Bedrock, Applied transforms applicants' CVs/résumés upon upload into structured data for easier, structured and anonymous review. Applicants can review and edit the parsed content before submitting it to hiring teams. For roles with CV parsing enabled, hiring teams can view candidates' CVs in a completely anonymous, structured format to consistently and efficiently identify key attributes across CVs.

Assisting hiring teams in analysing and scoring candidate responses. Using Anthropic (Claude) via Amazon Bedrock, Applied offers an AI screening assistant that helps hiring teams review and score candidates' written responses to skills, work, and competency questions after they have reviewed an initial sample of responses and provided scoring rubrics.

Giving examples of AI-generated responses. Using OpenAI (ChatGPT), Applied generates example LLM responses to skills, work, and competency questions. Hiring teams can review these examples while scoring candidates' written responses to flag potential AI use.

Detecting when AI may have been used by applicants when they submit text responses. Applied's AI Text Detection Tool uses an artificial intelligence model developed by Pangram Labs, trained to classify text as either human-written or AI-generated. Applied runs applicants' written responses to skills, work, and competency questions through this model and flags responses that are highly likely to be AI-generated: those with a predicted AI likelihood of 0.95 or above on a scale from 0.0 (human) to 1.0 (AI).

1.2 Purpose of this policy

This Acceptable Use Policy ("AUP") governs the use of AI Solutions ("System", "AI System", or "AI Solutions") by our business customers ("Customers") and their Authorised Users ("Users", as defined in the terms of use of Be Applied Ltd's platform ("Applied")). It outlines acceptable and prohibited uses, security requirements, and user responsibilities to ensure the safe, ethical, legal, and compliant operation of our AI system, protecting both our company and our users from potential risks associated with AI use, including, but not limited to, misuse, data breaches, and legal liabilities.

It aims to:

  • Establish boundaries between acceptable and prohibited uses of the AI system.
  • Provide security and compliance guidelines for users to mitigate risks and ensure responsible usage.
  • Detail user responsibilities and accountability when integrating the AI system into their workflows.

2. SYSTEMS INFORMATION

2.1 Technical Specifications


The technical specifications are organised according to Applied's AI solutions and their respective AI service providers:

Suggesting skills-based questions and scoring rubrics:

  • Base model: Anthropic’s Claude Sonnet, via Amazon Bedrock
  • Fine-tuning dataset type: none
  • Model version: anthropic.claude-3-sonnet-20240229-v1:0
  • Last update: February 2024
  • Supported Input types: job description (string) and/or key attributes (string), and responsibilities (string)
  • Output capabilities: 
    • For questions: a JSON array with up to 5 suggested questions per question type (e.g. sift, interview question)
    • For scoring rubrics: 1-star, 3-star, and 5-star answer characteristics for each question, helping hiring teams distinguish between poor, average, and excellent responses.
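The output shape described above can be illustrated with a short sketch. The field names below are assumptions for illustration only; the policy specifies only that the output is a JSON array of suggested questions, each with 1-, 3-, and 5-star answer characteristics.

```python
# Hypothetical illustration of the questions-and-rubrics output shape.
# Field names ("question_type", "rubric", etc.) are illustrative assumptions,
# not Applied's actual schema.
import json

suggested = [
    {
        "question_type": "sift",
        "question": "Describe how you would prioritise competing deadlines.",
        "rubric": {
            "1_star": "Vague answer with no concrete prioritisation method.",
            "3_star": "Mentions a method but gives little reasoning about trade-offs.",
            "5_star": "Clear method, trade-offs weighed, concrete example given.",
        },
    },
]

# The structure round-trips as valid JSON, as the spec describes.
parsed = json.loads(json.dumps(suggested))
assert parsed[0]["rubric"]["5_star"].startswith("Clear")
```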

Parsing CVs/résumés:

  • Base model: Anthropic’s Claude Sonnet, via Amazon Bedrock
  • Fine-tuning dataset type: none
  • Model version: anthropic.claude-3-sonnet-20240229-v1:0
  • Last update: February 2024
  • Supported Input types: prompt instructions, and a PDF version of the CV/résumé
  • Output capabilities: structured JSON CV/résumé data

Assisting hiring teams in analysing and scoring candidate text responses (AI screening assistant):

  • Base model: Anthropic’s Claude Sonnet, via Amazon Bedrock
  • Fine-tuning dataset type: none (few-shot prompting is used instead of fine-tuning)
  • Model version: anthropic.claude-3-sonnet-20240229-v1:0
  • Last update: February 2024
  • Supported Input types: text
  • Output capabilities: text, score classification 

Giving examples of AI-generated responses:

  • Base model: OpenAI’s ChatGPT
  • Fine-tuning dataset type: none
  • Model version: gpt-3.5-turbo
  • Last update: February 2024
  • Supported Input types: text
  • Output capabilities: text

Detecting when AI may have been used by applicants when they submit text responses:

  • Base Model: Pangram’s transformer-based neural network, trained to distinguish text written by large language models from text written by humans.
  • Fine-tuning Dataset Type: The Pangram model is trained on a diverse set of 1 million documents labelled as human- or AI-generated, and is then refined using hard negative mining.
  • Model Version: N/A - Pangram doesn’t expose this
  • Last Update: N/A - Pangram doesn’t expose this
  • Supported Input Types: JSON object with the following fields:
    • text (string): the input text to classify
    • return_ai_sentences (boolean, optional; default False): if True, a list of the most indicative AI sentences is returned
  • Output Capabilities: JSON object with the following fields:
    • ai_likelihood (float): the classification of the text, on a scale from 0.0 (human) to 1.0 (AI)
    • text (string): the classified text
    • prediction (string): a string describing the classification of the ai_likelihood (e.g. “very highly likely human”)
    • ai_sentences (list): if return_ai_sentences was True, a list of the most indicative AI sentences
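As a minimal sketch, the flagging rule from section 1.1 can be applied to the documented output object. The JSON field names and the 0.95 threshold come from this policy; the function name and constant are illustrative, not part of Pangram's or Applied's API.

```python
# Sketch: applying Applied's flagging rule (AI likelihood >= 0.95, per §1.1)
# to the documented Pangram response shape. Illustrative only.
AI_FLAG_THRESHOLD = 0.95  # responses at or above this likelihood are flagged

def should_flag(detection_response: dict) -> bool:
    """Return True when a response is highly likely to be AI-generated."""
    likelihood = detection_response["ai_likelihood"]  # 0.0 (human) .. 1.0 (AI)
    return likelihood >= AI_FLAG_THRESHOLD

# Example using the documented output fields:
sample = {
    "ai_likelihood": 0.97,
    "text": "An applicant's written answer...",
    "prediction": "very highly likely AI",
}
assert should_flag(sample) is True
assert should_flag({"ai_likelihood": 0.40, "text": "", "prediction": "likely human"}) is False
```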

2.2 Intended use cases

Within the scope of hiring decisions, these are the general intended use cases of each of the AI Solutions offered by Applied:

Suggesting skills-based questions and scoring rubrics:

When the User is setting up a job vacancy via Applied, they can automatically identify relevant skills based on inputs such as the job description. The User can then use these skills to tag different hiring stages and assessment tools, ensuring they are assessed throughout the hiring process.

Users can also generate questions tailored to the required job responsibilities and skills, and job applicants respond to these questions when applying for the job, as an additional task that occurs later in the hiring process or throughout the interview stages. To score responses in a structured and consistent manner, Users can also generate scoring rubrics that indicate how hiring team members should score each answer on a 1-5 scale.


Parsing CVs/résumés:

When a User sets up a job vacancy via Applied, they can enable Applied’s Chunked CV review flow, which uses candidate-parsed CVs/résumés. CVs/résumés are broken down into sections so that Users can choose what's important to them and decide what to score and how to score it. Users can generate and share scoring rubrics across hiring team members to ensure a consistent, structured review of parsed and chunked CVs/résumés.

Assisting hiring teams in analysing and scoring candidate text responses (AI screening assistant):

The main use case is to assist Users in reviewing high-volume roles (those with 100 or more applicants) by helping them efficiently obtain scores and analyses of candidate responses to work samples and scenario questions that assess specific skills or work-related challenges they would face on the job. These types of questions work well with Applied’s screening assistant because the variability in answers is lower than that of past-experience or motivation questions. Lower variability allows Applied’s AI screening assistant to be more consistent in its scoring patterns, adhere more closely to the scoring rubrics, and use calibration inputs more effectively, ensuring more accurate results.

Before using the AI screening assistant, Users should also complete an initial calibration process by scoring a minimum number of answers per question to ensure more accurate results.

The AI screening assistant can then score the remaining responses, provide insights, and classify them according to the pre-set scoring rubric and sample scoring. With these insights, Applied recommends that Users review high-performing answers before advancing applicants to the next hiring rounds, balancing the AI screening assistant's insights with the User’s own insights about top performers.


Giving examples of AI-generated responses:

When hiring teams go through Applied’s anonymous review flow and score applicants’ responses to job, skills-based, and competency questions, they can compare Applied’s reference AI responses with candidates’ responses and take action in line with the User’s policies and guidelines on AI use.


Detecting when AI may have been used by applicants when they submit text responses:

Applied automates the detection of highly likely AI-generated text responses to skills, work, and competency questions, such as those that hiring teams ask through Applied’s Sift Questions feature. This provides more accurate and consistent information about applicants’ AI use, which the User can then compare with their AI use policies and guidelines.

Text responses flagged by Applied as likely AI-generated include those copied from AI tools, as well as other specific scenarios. Applied shares these different scenarios with Users through the communication channels listed in this AUP. This transparency enables Users to understand why a response may be flagged and to take an informed action in line with their internal AI policies and guidelines. Applied will provide updates to Users as it gains more insights into AI-generated text patterns.

2.3 System limitations

In general, the system is not suitable for:

  • Mission-critical insights requiring 100% precision.
  • Use in sensitive domains (e.g., medical diagnosis) without expert oversight.

Below are other limitations for each of the AI solutions offered by Applied’s AI System:

Assisting hiring teams in analysing and scoring candidate text responses:

The AI screening assistant provides less accurate results for responses to the following types of questions: motivation questions (e.g. why do you want to work at [Customer]?), past behavioural questions (e.g. tell me about a time when you…?), and CV-type questions (e.g. tell me about your experience as [role] and what have been your main achievements…?).

These questions tend to show high variability in answers, and the AI screener has not been fine-tuned or trained on this type of question and could inadvertently score answers in ways that are not intended by hiring teams.

Giving examples of AI-generated responses:

Applied provides a single instance of an AI response to a given question, generated only by OpenAI ChatGPT. It may therefore fail to reflect AI-generated answers produced with alternative prompts or with a different large language model, as Users cannot supply their own prompt or choose another model.

Detecting when AI may have been used by applicants when they submit text responses:

Certain types of inputs or scenarios may provide inconclusive or less accurate AI detection results, including the following:

  • If a given text response contains fewer than 50 words, then the model can’t conclude if an answer is AI-generated or not. 
  • If a given text response exceeds 400 words, then the results could be less accurate.
  • The AI detection solution doesn’t detect the originality of the ideas or topics discussed by applicants in any given response. 
  • The AI detection solution doesn’t detect plagiarism, meaning it will not flag a text response copied from webpages, books, news articles, or other sources.
  • Candidate responses submitted before 7 November 2025, when this feature is released, are not processed through the AI text detection tool.
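The word-count limitations above can be expressed as a simple pre-check. This is an illustrative sketch: the function and labels are hypothetical, while the 50- and 400-word bounds come from this section.

```python
# Illustrative pre-check mirroring the word-count limitations above.
# Labels and function name are hypothetical; the 50- and 400-word
# bounds are from this policy.
def detection_reliability(response_text: str) -> str:
    """Classify how reliable AI detection is expected to be for a response."""
    word_count = len(response_text.split())
    if word_count < 50:
        return "inconclusive"      # too short for the model to conclude
    if word_count > 400:
        return "reduced-accuracy"  # results could be less accurate
    return "normal"

assert detection_reliability("too short") == "inconclusive"
assert detection_reliability(" ".join(["word"] * 200)) == "normal"
assert detection_reliability(" ".join(["word"] * 500)) == "reduced-accuracy"
```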

3. ACCEPTABLE USES

3.1 Authorised Applications

Users may employ Applied’s AI System to run fairer, more consistent, more efficient, and personalised hiring processes, always with the User’s human involvement and oversight.

These are authorised applications:

  • To build or set up job vacancies that are skills-based and use best practices, while reducing time to launch jobs. By starting with a draft version of AI-generated questions, skills, and scoring rubrics suggested by Applied, the User can save time identifying potential assessment options and then update them according to specific hiring needs and practices.
  • To make more informed decisions on how to allocate applications to additional review and analysis by the User’s hiring teams:
    • In the case of AI assistance in screening and scoring responses, Users can use the scores and insights from Applied’s AI screening assistant to understand which applicants are likely to receive low, moderate, good, strong, or great scores. Users can then apply an additional layer of review and analysis to top performers and conduct spot checks on candidates at each level before making any shortlisting and hiring decisions.
    • In the case of AI detection of candidate responses, Users can use the information about highly likely AI answers to decide which candidates should undergo a more thorough, anonymous review and scoring process, and which should be analysed in line with each User’s AI policies and guidelines.
  • To gain extra insights before making shortlisting decisions in a way that is objective, fair, consistent, and supported by human oversight. All the outputs the AI solutions provide are additional data points that Users can take into account when making informed hiring decisions or following up with applicants about their responses.

For any of these potential applications, Applied encourages Users to: 

  • Perform human oversight and verification of the outputs provided by Applied’s AI solutions.
  • Share AI guidelines and policies with job applicants upfront so they know what to expect from a hiring process that uses any of Applied’s AI solutions.

3.2 Access Methods

  • API Integration: The AI system supports robust API-based access for flexible integrations. More information can be provided upon request. Users must adhere to authentication protocols such as API key usage and IP whitelisting.

  • Platform Interface: Users can interact directly with the AI solutions through Applied:
    • Skills, competency and work questions, and scoring rubrics. Users with job admin and hiring manager permissions can generate skills, questions and scoring rubrics while setting up a role.
    • CV/résumé parsing. While applying for a job, applicants can upload their CV/résumé, edit the parsed file, and confirm that the structured data reflects the information in the original file. Hiring teams can score CVs/Résumés in a chunked, structured format by using the review flow for this file type. 
    • Assisting hiring teams in analysing and scoring candidate text responses. Users with organisation admin permissions can enable this AI solution via the settings section of each User’s account. Once this feature is enabled, Users with job admin permission can request the AI screening assistant for high-volume roles, triggering an Applied internal process to ensure the minimum requirements for using this feature are met before enabling it. After the AI screening assistant analyses and scores questions, Users with job admin permissions can see the results on each role’s candidate management page.
    • Getting examples of AI responses. Hiring teams who score applicant responses via their personalised scoring links can see an example of an AI response, along with the scoring rubrics provided by the hiring team. 
    • AI detection of text responses. Users with organisation admin permissions can enable this AI solution via the settings section of each User’s account. Once this feature is enabled, any User with job admin access can see AI flags for each answer that is highly likely to be AI-generated on each role’s candidate management page and on each applicant’s profile.
  • Authentication Requirements: Access is granted only after a secure onboarding process that includes two-factor authentication and periodic credential updates.

4. PROHIBITED USES

4.1 Strictly Prohibited Activities

  • Any illegal activity that violates local or international laws or unauthorised purposes, including but not limited to, infringing intellectual property rights, distributing malware, or engaging in fraudulent activities.
  • The creation of harmful or misleading content, such as phishing scams, disinformation, or offensive material.
  • Processing or generating outputs using unapproved inputs (e.g., sensitive personal data).
  • Making hiring decisions solely based on automated AI solutions, without meaningful human judgement and oversight.

4.2 Data Usage Restrictions

  • Personal data must be provided and processed in compliance with data protection and AI regulations, including GDPR and the EU AI Act. This means implementing appropriate safeguards to protect applicants' and hiring managers' personal data, ensuring transparency in how AI is applied, and guiding hiring teams on the compliant use of Applied’s AI solutions, while emphasising the importance of maintaining meaningful human oversight in all hiring decisions.
  • Users may not upload the following types of data:
    • Health data (unless anonymised and explicitly approved).
    • Financial or credit card data.
    • Any data classified as restricted under applicable regulations.
  • Data retention periods must align with contractual agreements.

5. USER RESPONSIBILITIES

5.1 Security Requirements

  • API key management: Most AI solutions use Amazon Bedrock, so authorisation is handled via IAM policies. For AI solutions using OpenAI (where Applied isn’t sending any personally identifiable information), API keys are stored and encrypted in the AWS Parameter Store and injected into Applied’s platform during the deploy step on Applied’s CI platform.
  • Access control measures: The entire application uses email and password access. Emails and passwords are encrypted in the database and in transit. Each User’s access to features on the Applied platform is governed by this, and their authorisation to access anything is checked on every API request.
  • Security best practices: Applied follows the same security practices as defined in the terms and conditions and privacy policy agreed upon with Users before they start using Applied’s platform.

5.2 Monitoring and Reporting

  • Usage monitoring requirements: Users must monitor their usage of AI to ensure compliance with this AUP.
  • Incident reporting procedures: Any security incidents, including unauthorised access, must be reported to Applied immediately, and no later than three (3) working days (72 hours), via email at hello@beapplied.com.
  • Performance monitoring obligations: If Users notice the AI solutions are not performing as intended, they are required to report it to Applied.

6. COMPLIANCE REQUIREMENTS

6.1 Data Protection

Applied ensures compliance with GDPR and other applicable data privacy and AI regulations, such as the EU AI Act. Users are responsible for:

  • Ensuring lawful data input and adherence to regional data privacy laws.
  • Deleting data processed by the AI system promptly if required.

6.2 Industry-Specific Regulations

Users operating in regulated industries must follow sector-specific compliance requirements, including:

  • Participating in audits if required by regulatory bodies.

7. SERVICE LEVEL EXPECTATIONS

7.1 System Availability

Applied’s AI solutions are designed to provide a minimum uptime of 99.9%, excluding scheduled maintenance. 

7.2 Support Services

  • Technical support availability: Monday to Friday, 9 am to 5 pm GMT.
  • Issue resolution timeframes:
    • For issues categorised by Applied as P0, resolution should be expected within a maximum of 6 hours.
    • For issues categorised by Applied as P1, resolution should be expected within a maximum of 48 hours.
    • For issues categorised as P2 or lower priority by Applied, resolution depends on the problem and its impact on Users.

Contact procedures: Contact support via hello@beapplied.com.

8. VIOLATION CONSEQUENCES

A User’s violation of this Policy will be considered a material breach of the Terms of Service and/or any other agreement governing the Customer’s use of the services.

The User is solely responsible for ensuring that all AI solutions provided by Applied are implemented and used in a lawful, ethical, and compliant manner. This includes adhering to applicable data protection laws, ensuring that end users do not misuse or manipulate AI outputs, and maintaining appropriate internal controls. All decisions made using AI outputs must remain subject to human oversight and review.

Applied provides guidance and tools to support compliant use but does not monitor client activity or assume responsibility for how the AI solutions are used in practice.

9. POLICY UPDATES AND COMMUNICATION

9.1 Update Procedures

This policy is periodically reviewed and updated by all teams directly or indirectly involved in the development, release, and onboarding of AI solutions, as well as in providing User support. All changes are communicated via:

  • Emails sent to registered Users
  • Notifications via Applied’s platform

Users are required to acknowledge policy updates by replying to these emails or by accepting the updates notified on the platform.

9.2 Communication Channels

For inquiries or support, users may contact us via:

  • Official communication methods: Email
  • Support contact information: hello@beapplied.com
  • Emergency contacts: Applied’s support team at hello@beapplied.com, or the User’s assigned Customer Success Manager, whose email address is shared directly with the User.

10. POLICY ACCESSIBILITY

This policy is available to users through multiple channels, including:

  1. Platform Integration
    • Referenced and accessible through Applied’s Privacy Policy and Terms of Use
    • Accessible directly within Applied’s platform interface at the point when Users are about to enable or access any of Applied’s AI solutions
    • Prominent link on the Customer’s ‘features’ page, accessible to Users with Admin-level access to the platform.
    • Accessible through Applied’s Help Centre
    • Referenced during any existing or future API key generation processes
  2. Documentation Portal
    • Available in the developer documentation
    • Included in API documentation

  3. User Agreement Process
    • Presented during User onboarding
    • Required acknowledgement during account creation and renewal, by accepting Applied’s terms of use 
    • When Users are about to enable or access any of Applied’s AI solutions through their account settings
    • Referenced in service agreements

11. ACKNOWLEDGEMENT

By accessing and using Applied’s AI Solutions, users acknowledge that they have read, understood, and agree to comply with this Acceptable Use Policy. Continued use of the service constitutes ongoing acceptance of this policy and any subsequent updates.