Juniper Square AI Legal, Privacy, & Compliance FAQs

Background & Summary

At Juniper Square, we want to empower our customers to use artificial intelligence within our products to accelerate their workflows and take on work they previously lacked the bandwidth to do. Our approach to AI is grounded in the following principles and ethical guidelines, which shape how we design and deploy it responsibly across our platform:

  • Valid and reliable
    • Conducting ongoing testing and monitoring to confirm systems perform as intended

  • Safe
    • Incorporating safety considerations early to prevent harmful outcomes

  • Fair and unbiased
    • Striving to prevent AI from unfairly favoring or discriminating against any group

  • Accountable and transparent
    • Building systems with human oversight that provide feedback and explanations, and aiming to explain the “how” and “why” of AI systems to customers wherever possible

  • Privacy-minded
    • Continuing our commitment to privacy by obtaining consent and offering clear control over personal data usage

  • Secure and resilient
    • Testing and validating the security and availability of our AI systems

Security

Customer data is encrypted at rest with AES-256 and protected in transit with TLS 1.2 or higher.
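
For illustration only, the short Python sketch below shows one way a service could enforce TLS 1.2 as a minimum protocol version on an outbound connection. It is a minimal example of the "TLS 1.2+ in transit" requirement under assumed conditions, not a description of Juniper Square's actual infrastructure, and the URL is a placeholder.

```python
import ssl
import urllib.request

# Minimal sketch: require TLS 1.2 or newer for an outbound HTTPS request.
# The URL is a placeholder; real deployments typically also enforce this at
# the load balancer or service-mesh layer.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.1 and older

with urllib.request.urlopen("https://example.com", context=context) as response:
    print(response.status)
```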

Input sanitization and output validation methodologies are in place to reduce the risk of common attack vectors (e.g. prompt injection).
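
As a rough illustration of this pattern (not Juniper Square's actual implementation), the Python sketch below applies simple input sanitization before text reaches a model and basic validation before model output is used. The pattern list, function names, and checks are assumptions chosen only to show the shape of the approach.

```python
import re

# Hypothetical pre- and post-processing around an LLM call. The rules below
# are illustrative assumptions, not a complete or production-grade filter.

INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
]

def sanitize_input(user_text: str, max_len: int = 4000) -> str:
    """Drop non-printable characters, cap length, and reject likely injections."""
    cleaned = "".join(ch for ch in user_text if ch.isprintable() or ch.isspace())
    cleaned = cleaned[:max_len]
    if any(pattern.search(cleaned) for pattern in INJECTION_PATTERNS):
        raise ValueError("Input rejected: possible prompt-injection attempt")
    return cleaned

def validate_output(model_text: str) -> str:
    """Reject outputs that appear to leak instructions or embed active markup."""
    if "system prompt" in model_text.lower() or "<script" in model_text.lower():
        raise ValueError("Output failed validation")
    return model_text
```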

Data shared with AI models by customers is subject to the same security and privacy controls as other customer-provided data. Data protection controls are tested annually as part of Juniper Square's SOC 2 Type 2 report. Additionally, AI models acting on behalf of a particular user of the system can only access the data that the user is entitled to; they are subject to the same data access authorization model as the user-facing parts of the software.
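
The sketch below is a simplified, hypothetical example of that idea: an AI agent acting on behalf of a user is routed through the same entitlement check as the user. The types, field names, and fund-level granularity are assumptions for illustration, not Juniper Square's actual data model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    user_id: str
    entitled_fund_ids: frozenset  # funds this user may read (illustrative)

def fetch_fund_data_for_agent(user: User, fund_id: str) -> dict:
    """Data access on behalf of a user is limited to what that user can see."""
    if fund_id not in user.entitled_fund_ids:
        raise PermissionError(f"{user.user_id} is not entitled to fund {fund_id}")
    # ... load the fund record through the same data layer the UI uses ...
    return {"fund_id": fund_id}

# Example: the agent can read Fund II on this user's behalf, but not Fund III.
user = User(user_id="lp-0042", entitled_fund_ids=frozenset({"fund-ii"}))
fetch_fund_data_for_agent(user, "fund-ii")      # allowed
# fetch_fund_data_for_agent(user, "fund-iii")   # would raise PermissionError
```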

Models are provided by third-party vendors and subject to the contractual controls provided by those vendors (e.g. OpenAI). Prompts are hardened against potential attack vectors and stored in version-control systems with the same protections as application code for auditability and rollback.

Data shared with Juniper Square will continue to be governed by our publicly available Privacy Policy and our contractual commitments to the confidentiality of customer data.

AI Model Training and Usage

Juniper Square uses models hosted by commercial LLM-hosting providers such as Google Cloud and Amazon Web Services. All vendors are risk-assessed prior to contracting and at contract renewal as part of our overall information security program. Contractual measures are in place to ensure the existence of, and adherence to, security and privacy controls.

Juniper Square AI products use various LLMs, which we rigorously evaluate to deliver the best user experience. This evaluation involves both human review and performance-based assessments.

Anonymized and aggregated customer data is used to train AI models while preserving the confidentiality of customer intellectual property. If we develop additional AI tooling in the future that requires training on individual customer data, we would uphold existing customer confidentiality commitments, for example by training customer-specific models for the exclusive use of that customer.

  • Data provided to third-party subprocessors offering AI services is treated in the same way as data provided to other third-party subprocessors.
  • In particular, data is shared on an as-needed basis, and contracts include data deletion clauses that govern contractually defined retention periods.
  • Subprocessor agreements stipulate that subprocessors cannot use customer data to train their models.

AI won't make fund lifecycle decisions independently; it will suggest actions for human approval and execution.
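
To make that human-in-the-loop pattern concrete, here is a hypothetical Python sketch of a suggested action that cannot execute without an explicit approval. The class, field names, and example values are assumptions for illustration only, not the product's API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SuggestedAction:
    """An AI-suggested action that requires explicit human approval to run."""
    description: str
    approved_by: Optional[str] = None

    def approve(self, reviewer: str) -> None:
        self.approved_by = reviewer

    def execute(self) -> None:
        if self.approved_by is None:
            raise RuntimeError("Refusing to execute: no human approval recorded")
        print(f"Executing '{self.description}' (approved by {self.approved_by})")

# Example flow: the AI suggests, a person approves, and only then does it run.
action = SuggestedAction("Draft Q3 capital call notices for Fund II")
action.approve(reviewer="fund.admin@example.com")
action.execute()
```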

The ability to report data quality issues is built directly into the product, so users can flag issues as they arise.

  • Juniper Square’s product and security teams actively collaborate to track in-scope AI/ML regulation and build features with it in mind.
  • Juniper Square has formed an AI Governance committee that includes legal, compliance, and technical experts who set ground rules, establish policies, and ensure alignment with evolving standards and regulations.
  • All Juniper Square personnel are required to participate in AI training at least once per year.

As with all AI use cases, output should not be relied upon until it has been reviewed by a human for accuracy.

Third-Party AI Usage

Our contractual agreements with external AI vendors preclude them from using any of our data to train their underlying models. We also perform vendor security reviews prior to engaging vendors with embedded AI functionality.

Our current AI systems do not access sensitive client data, and future systems that may process sensitive client data will incorporate the principle of least privilege by limiting access to authorized users.

Introducing JunieAI

Work smarter. Move faster. Focus on what matters: relationships and returns. JunieAI is built into the Juniper Square platform to streamline workflows, turn data into insight, elevate the investor experience, and make fund administration more accurate, timely, and transparent.