Overview
At ReadiNow, we are committed to innovation, empowering our customers to fully harness the potential of our AI features both now and in the future. We believe that the transformative power of AI, when combined with the ReadiNow no-code platform, will deliver significant value for our clients.
However, we acknowledge the critical importance of data security and the responsible use of AI technologies. To ensure this, all ReadiNow AI features developed and deployed will comply with relevant data protection standards, privacy regulations, and ethical considerations.
This document outlines the relevant AI features and their implementation, ensuring transparency and providing customers with the information they need to build trust in ReadiNow.
Generative AI
What is Generative AI?
Generative AI is a technology that creates original content in response to user prompts. It leverages Large Language Models that have been trained on extensive datasets sourced from the internet. See here to read more about Generative AI.
How we use Generative AI
ReadiNow runs its Generative AI features on an instance of Azure OpenAI hosted in a private Microsoft Azure tenancy, entirely within Australia. All communications between the ReadiNow platform and the Microsoft instance are fully encrypted.
User prompts (inputs) and completions (outputs) in this private instance:
- Are NOT available to non-ReadiNow customers.
- Are NOT shared across ReadiNow tenants.
- Are NOT accessible to OpenAI.
- Are NOT used to improve OpenAI models.
- Are NOT used to enhance any Microsoft or third-party products or services.
- Are NOT used to automatically improve Azure OpenAI models unless a model is explicitly fine-tuned with customer-provided training data.
Any fine-tuned Azure OpenAI models are exclusively for ReadiNow's use. The Azure OpenAI Service is fully hosted within the ReadiNow environment on Azure and does not interact with any services operated by OpenAI (e.g. ChatGPT, the OpenAI API).
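To illustrate the architecture described above, the sketch below builds the kind of HTTPS request a platform service would send to a private Azure OpenAI deployment. The endpoint host, deployment name, and API version are placeholders, not ReadiNow's actual values; the URL path shape follows Azure OpenAI's documented REST API. Note that every request targets the private Azure endpoint over TLS, never an OpenAI-operated service.

```python
# Hypothetical sketch of a request to a private Azure OpenAI deployment.
# AZURE_ENDPOINT, DEPLOYMENT, and API_VERSION are illustrative placeholders.
import json

AZURE_ENDPOINT = "https://example-private.openai.azure.com"  # placeholder
DEPLOYMENT = "readinow-gpt"                                  # placeholder
API_VERSION = "2024-02-01"

def build_chat_request(prompt: str, api_key: str) -> tuple[str, dict, bytes]:
    """Build the HTTPS URL, headers, and JSON body for a chat completion.

    All traffic goes to the private Azure endpoint over TLS; nothing is
    sent to services operated by OpenAI itself (e.g. ChatGPT, OpenAI API).
    """
    url = (f"{AZURE_ENDPOINT}/openai/deployments/{DEPLOYMENT}"
           f"/chat/completions?api-version={API_VERSION}")
    headers = {"Content-Type": "application/json", "api-key": api_key}
    body = json.dumps({"messages": [{"role": "user", "content": prompt}]})
    return url, headers, body.encode()

url, headers, body = build_chat_request("Suggest a control name", "<key>")
print(url.startswith("https://"))  # True: transport is encrypted (TLS)
```

The key point of the sketch is the addressing: the deployment lives under the private tenancy's own hostname, so requests never leave that boundary for an OpenAI-run endpoint.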
For more details on data, privacy, and security for the Azure OpenAI Service, please click here.
Data Privacy and Security
Tenant Isolation
A detailed explanation of the ReadiNow Tenant Isolation Model can be found here.
The ReadiNow AI features build on the foundational Tenant Isolation model. Specifically:
- Prompts and completions from a specific ReadiNow tenant are not utilized to serve requests or generate prompts for other tenants.
- Tenant data is not shared between tenants when constructing query prompts or determining completions.
- Tenant data remains confidential and is not publicly exposed during the use of the AI Suggest service.
- Usage metrics are collected individually for each tenant to monitor for any unauthorized or malicious access and usage.
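The per-tenant metric collection described in the last point above can be pictured with a small sketch. This is purely illustrative (not ReadiNow's implementation): each tenant ID gets its own isolated counter bucket, so one tenant's usage is never mixed with, or visible to, another's.

```python
# Illustrative sketch of per-tenant usage metrics kept in isolated buckets.
# Class and field names are hypothetical, not part of the ReadiNow platform.
from collections import defaultdict

class TenantUsageMetrics:
    def __init__(self):
        # One independent counter set per tenant ID.
        self._metrics = defaultdict(lambda: {"prompts": 0, "completions": 0})

    def record(self, tenant_id: str, prompts: int = 0, completions: int = 0):
        """Accumulate usage against a single tenant's own bucket."""
        bucket = self._metrics[tenant_id]
        bucket["prompts"] += prompts
        bucket["completions"] += completions

    def usage_for(self, tenant_id: str) -> dict:
        # Only the named tenant's counters are returned; anomalous spikes
        # here can flag unauthorised or malicious usage for that tenant.
        return dict(self._metrics[tenant_id])

metrics = TenantUsageMetrics()
metrics.record("tenant-a", prompts=3, completions=3)
metrics.record("tenant-b", prompts=1)
print(metrics.usage_for("tenant-a"))  # {'prompts': 3, 'completions': 3}
```

Keeping the buckets keyed strictly by tenant ID mirrors the isolation guarantee: monitoring works per tenant without any cross-tenant aggregation of data.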
Data Security
ReadiNow employs multiple layers of industry-standard practices, protocols, and techniques to mitigate the risk of unauthorised access to, or modification of, data or systems. Refer here for more information on ReadiNow platform security.
The Azure AI service infrastructure protects user data from web exposure through stringent security measures. These measures include industry-standard encryption protocols, access controls, rate limiting, and continuous monitoring to detect and mitigate potential threats or vulnerabilities.
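Of the measures listed above, rate limiting is the easiest to make concrete. The token-bucket sketch below shows the general technique; the capacity and refill parameters are illustrative and do not reflect Azure's actual limits.

```python
# Minimal token-bucket sketch of the kind of rate limiting described above.
# Capacity and refill rate are illustrative, not Azure's real limits.
import time

class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)       # start full
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; refill based on elapsed time."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # request rejected: caller has exceeded its rate

bucket = TokenBucket(capacity=3, refill_per_sec=0.0)  # no refill for the demo
results = [bucket.allow() for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

A limiter like this caps how fast any one caller can hit the service, which blunts both abuse and denial-of-service attempts while legitimate traffic passes through unaffected.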
Responsible AI
Microsoft, through its Azure OpenAI services, places a strong emphasis on responsible AI practices. The Microsoft Responsible AI model is guided by six core principles:
- Fairness: AI systems should treat everyone fairly and without discrimination.
- Inclusiveness: AI systems should empower everyone and engage people.
- Reliability and safety: AI systems should operate reliably, safely, and consistently.
- Transparency: AI systems should provide clarity and openness regarding how the AI operates.
- Privacy and security: AI systems must protect privacy and secure personal and business information.
- Accountability: Stakeholders must be held accountable for AI deployment and usage.
These principles are not merely theoretical constructs; they are actively integrated into the design and operation of Azure OpenAI models. Because the AI Suggest feature offered by the ReadiNow platform is built on those models, the same principles carry through to it.
Content Filtering
The Azure OpenAI Service features a robust content filtering system that analyses both prompts and completions using classification models to detect and prevent harmful content. This system targets categories such as hate speech, sexual content, violence, and self-harm, with filters set to a medium severity threshold to ensure a safe and appropriate user experience. Additionally, the content filtering models are specifically trained and tested in multiple languages, including English, German, Japanese, Spanish, French, Italian, Portuguese, and Chinese.
For more details, refer to Azure OpenAI Service content filtering.
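The filtering behaviour described above can be sketched as a simple severity check. The category names ("hate", "sexual", "violence", "self_harm") and the severity scale (safe/low/medium/high) follow Azure OpenAI's documented content-filter results, but the function and sample data below are illustrative, not the service's actual code.

```python
# Hedged sketch: flagging content-filter categories at or above a severity
# threshold, using the category and severity names Azure OpenAI documents.
SEVERITY_ORDER = ["safe", "low", "medium", "high"]

def blocked_categories(filter_results: dict, threshold: str = "medium") -> list:
    """Return categories whose severity meets or exceeds the threshold.

    A "medium" default mirrors the filter level described above.
    """
    limit = SEVERITY_ORDER.index(threshold)
    return [category
            for category, result in filter_results.items()
            if SEVERITY_ORDER.index(result["severity"]) >= limit]

# Hypothetical per-category results for one prompt or completion.
sample = {
    "hate": {"severity": "safe"},
    "sexual": {"severity": "safe"},
    "violence": {"severity": "medium"},
    "self_harm": {"severity": "safe"},
}
print(blocked_categories(sample))  # ['violence']
```

Because the check applies to both prompts and completions, harmful content can be stopped on the way into the model as well as on the way out.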
Abuse Monitoring
The Azure OpenAI Service detects and mitigates instances of recurring content and/or behaviours that suggest use of the service in a manner that may violate the Microsoft Code of Conduct or other relevant product terms.
Full details of the Azure OpenAI Service abuse monitoring can be found on the Microsoft website:
https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/abuse-monitoring