ReadiNow Responsible AI

At ReadiNow, we are committed to innovation, ensuring that our customers can leverage the full potential of our AI features, both now and in the future. We believe that the transformative power of AI combined with the ReadiNow no-code platform will add significant value for our clients. 

However, we also recognize the importance of data security and the responsible use of AI technologies. To achieve this, all ReadiNow AI features that are developed and deployed adhere to relevant data protection standards, privacy standards, and ethical considerations.

This document details the relevant AI features and their implementation, providing the transparency customers need to place trust in ReadiNow.

ISO/IEC 42001 Certification

ReadiNow is proud to be certified under ISO/IEC 42001, the international standard for Artificial Intelligence Management Systems (AIMS). This certification is a formal recognition of our commitment to managing the risks and responsibilities associated with AI in a systematic, ethical, and accountable manner.

What is ISO/IEC 42001?
ISO/IEC 42001:2023 is the world’s first AI-specific management system standard. It provides a structured framework for organizations to establish, implement, maintain, and continually improve AI management practices. This includes governance, risk assessment, data privacy, algorithmic transparency, human oversight, and alignment with ethical principles.

Why It Matters
As AI becomes more embedded in critical business processes, it is essential to demonstrate trustworthiness and accountability. ISO/IEC 42001 certification:

  • Validates that ReadiNow applies best-practice governance over the lifecycle of our AI systems.

  • Reinforces our adherence to responsible AI principles, including fairness, transparency, privacy, and human agency.

  • Provides independent assurance to customers, regulators, and stakeholders about the quality and integrity of our AI management practices.

  • Helps us stay aligned with evolving regulatory expectations across jurisdictions.

This certification complements our existing Responsible AI commitments and enhances confidence in the AI-driven features of the ReadiNow platform.

Generative AI

Overview 

Generative AI is a technology that generates original content in response to user prompts. This technology uses Large Language Models that have been trained on large quantities of data from the internet. 

ReadiNow currently uses Generative AI in its AI Workflows, AI Agents, and AI Suggest features.

How we use Generative AI

ReadiNow uses private Microsoft Azure and AWS Bedrock tenancies to process Azure OpenAI and Anthropic Claude Generative AI requests, all hosted within Australia. All communication to and from the instance is fully encrypted.

User prompts (inputs) and completions (outputs) in this private instance:

  • Are NOT available to non-ReadiNow customers.

  • Are NOT shared across ReadiNow tenants.

  • Are NOT accessible to OpenAI or Anthropic.

  • Are NOT used to improve OpenAI or Anthropic models.

  • Are NOT used to enhance any Microsoft, AWS or third-party products or services.

  • Are NOT used to automatically improve Azure OpenAI or Anthropic models; fine-tuning occurs only when a tenant explicitly provides training data for its own tenancy.

Any fine-tuned models are exclusively for ReadiNow’s use. The services are fully hosted in ReadiNow environments within Azure and AWS and do not interact with any OpenAI- or Anthropic-operated services (e.g., ChatGPT, the OpenAI API, Claude.ai).

For more details visit:

Data Privacy and Security

Tenant Isolation

  • Prompts and completions from a specific ReadiNow tenant are NOT used to serve requests or as prompts for other tenants.

  • Tenant data is NOT shared across tenants for query prompts or completion determination.

  • Tenant data is NOT publicly exposed during the usage of the AI Suggest service.

  • Usage metrics are collected per tenant to monitor for any unintended or malicious access and usage. 
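To illustrate the per-tenant usage monitoring described above, here is a minimal sketch of a per-tenant request counter that flags unusually high usage. The class name, field names, and the hourly limit are illustrative assumptions, not ReadiNow's actual implementation.

```python
from collections import defaultdict
from datetime import datetime, timezone
from typing import Optional

class TenantUsageMetrics:
    """Hypothetical sketch: counts AI requests per tenant per hour so that
    unintended or malicious usage spikes can be flagged for review."""

    def __init__(self, hourly_request_limit: int = 1000):
        self.hourly_request_limit = hourly_request_limit
        # Keyed by (tenant_id, hour bucket) so tenants never share counters.
        self.counts = defaultdict(int)

    def record_request(self, tenant_id: str, now: Optional[datetime] = None) -> bool:
        """Record one request; return True if this tenant's usage in the
        current hour exceeds the configured limit (i.e. looks anomalous)."""
        now = now or datetime.now(timezone.utc)
        bucket = (tenant_id, now.strftime("%Y-%m-%dT%H"))
        self.counts[bucket] += 1
        return self.counts[bucket] > self.hourly_request_limit
```

Keying the counters by tenant mirrors the isolation guarantee above: one tenant's activity never influences how another tenant's requests are counted or flagged.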

Data Security

User data is protected against web exposure through rigorous security measures implemented across the Azure AI and AWS Bedrock service infrastructure. These include industry-standard encryption protocols, access controls, rate limiting, and continuous monitoring to detect and mitigate potential threats or vulnerabilities.
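The rate limiting mentioned above is commonly implemented as a token bucket. The following is a generic sketch of that technique; the capacity and refill parameters are assumptions for illustration, not ReadiNow's real limits.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: each request spends a token,
    and tokens refill at a steady rate up to a fixed capacity."""

    def __init__(self, capacity: float, refill_per_second: float):
        self.capacity = capacity
        self.tokens = capacity
        self.refill_per_second = refill_per_second
        self.last_refill = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        """Return True if the request may proceed, False if rate-limited."""
        now = time.monotonic()
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_second)
        self.last_refill = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

A burst of requests drains the bucket and subsequent calls are rejected until tokens refill, which bounds how fast any single client can hit the service.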

Responsible AI

Our organization leverages AI services through Microsoft Azure OpenAI and AWS Bedrock (Claude); both platforms emphasize responsible AI practices guided by core principles:

  • Fairness: Ensuring non-discrimination in AI systems.

  • Reliability and Safety: Guaranteeing safe and dependable AI usage.

  • Privacy and Security: Protecting user data with robust security measures.

  • Inclusiveness: Designing accessible and beneficial AI systems for all.

  • Accountability: Holding stakeholders accountable for AI deployment and usage.

  • Transparency: Providing clear information about AI operations.

These principles are actively implemented across both Azure OpenAI and AWS Bedrock services through their respective safety frameworks and model architectures. Microsoft's Azure OpenAI incorporates these through their responsible AI framework, while AWS Bedrock implements similar safeguards through Anthropic's Constitutional AI approach for Claude models.

By utilizing both Azure OpenAI and AWS Bedrock services, ReadiNow makes these responsible AI principles integral components of the AI features our platform provides.

Content Filtering

Our AI services implement comprehensive content filtering across both platforms:

Azure OpenAI Service includes a robust content filtering system that processes both prompts and completions through classification models to detect and prevent harmful content. The system targets categories such as hate, sexual content, violence, and self-harm, with filters set to a medium severity threshold.
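To make the "medium severity threshold" concrete, here is a sketch of enforcing such a threshold on a result shaped like Azure OpenAI's per-category content filter output. The field names and the sample data below follow Azure's documented response format but should be treated as assumptions, not a ReadiNow implementation.

```python
# Azure's content filter rates each category as safe, low, medium, or high.
SEVERITY_ORDER = {"safe": 0, "low": 1, "medium": 2, "high": 3}

def blocked_categories(filter_results: dict, threshold: str = "medium") -> list:
    """Return the categories whose severity meets or exceeds the threshold."""
    limit = SEVERITY_ORDER[threshold]
    return [category
            for category, result in filter_results.items()
            if SEVERITY_ORDER.get(result.get("severity", "safe"), 0) >= limit]

# Illustrative result for one completion, in the documented shape.
sample = {
    "hate":      {"filtered": False, "severity": "safe"},
    "sexual":    {"filtered": False, "severity": "low"},
    "violence":  {"filtered": True,  "severity": "medium"},
    "self_harm": {"filtered": False, "severity": "safe"},
}
```

With a "medium" threshold, only the `violence` category in this sample would be flagged; a "low" threshold would additionally flag `sexual`.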

AWS Bedrock provides content filtering through Amazon Bedrock Guardrails, which detect and filter harmful content including hate speech, insults, sexual content, violence, misconduct, and prompt injection attacks. These guardrails work alongside Claude's inherent Constitutional AI training that builds safety directly into the model's responses.

Both content filtering systems are trained and tested across multiple languages, including English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese, ensuring safe and appropriate user experiences regardless of language.

For more details, visit:

Abuse Monitoring

Azure OpenAI Service actively monitors and mitigates potential abuse through several components:

  • Content Classification: Detects harmful language or images in user prompts and completions.

  • Abuse Pattern Capture: Analyzes customer usage patterns to identify potential abuse.

  • Human Review and Decision: Authorized Microsoft employees review flagged content to confirm or correct classifications based on predefined guidelines.

  • Notification and Action: Customers are informed via email if abusive behavior is confirmed.

For more information, visit Azure OpenAI Service abuse monitoring.