
Einstein Trust Layer: How Salesforce Protects Data in the Age of Generative AI

Are you using ChatGPT in a business context? Have you ever thought about what happens to the information you feed it? If you’re feeding it customer data or sensitive business information, the answer might surprise you! To save you the trouble, we asked ChatGPT. Here is the response:

Sharing data with ChatGPT risks exposure of sensitive information, as conversations may be retained for training. While anonymised, data could be accessed if required by law. Avoid sharing private, confidential, or personal details to mitigate potential security or privacy issues.

Not very reassuring, is it?

Generative AI is now embedded in many business applications. Without adequate safeguards, using these new technologies exposes organisations to real data privacy, security, and compliance risk.

Salesforce meets this challenge head-on with the Einstein Trust Layer. This article explains what the Einstein Trust Layer is and the key features that make it indispensable.

What Is the Einstein Trust Layer?

The Einstein Trust Layer is Salesforce’s framework for securely integrating generative AI, like large language models (LLMs), into its ecosystem. It ensures that customer data remains private, responses align with ethical standards, and AI outputs are professional and accurate.

At its core, the Trust Layer creates a secure interaction environment for AI tools by implementing safeguards throughout the data journey, from generating prompts to delivering responses.

Key Features of the Einstein Trust Layer

Zero Data Retention

Unlike consumer-grade AI tools, Salesforce enforces a strict zero data retention policy. This means that once an LLM processes a prompt, neither the prompt nor the generated response is stored. This prevents external AI providers, such as OpenAI, from retaining sensitive customer data.
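The mechanics are easy to picture in code. Below is a minimal, hypothetical Python sketch of an ephemeral request: the prompt and response exist only in memory for the duration of the call, and nothing is written to storage. The client object and its complete method are illustrative stand-ins; in reality, the zero retention guarantee comes from Salesforce’s agreements with LLM providers, not from client-side code.

```python
def generate_ephemeral(llm_client, prompt: str) -> str:
    """Send a prompt to an LLM and return the reply without persisting
    either one. Illustrative only: llm_client and .complete() are
    hypothetical, and true zero retention is enforced by the provider
    agreement, not by this function."""
    response = llm_client.complete(prompt)
    # No database write and no content logging: once this function
    # returns, only the caller holds the prompt or the response.
    return response.text
```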

Toxic Language Detection

The Trust Layer scans AI-generated responses for toxic content, including hate speech, violent or sexual language, and profanity. This ensures that the responses are appropriate and professional.
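Conceptually, this is a classification step applied to every response before it reaches a user. The Python sketch below is a deliberately crude stand-in that scores text against a blocklist; the actual Trust Layer relies on trained detection models rather than a word list.

```python
# Placeholder blocklist; a real detector is a trained classifier.
TOXIC_TERMS = {"example_slur", "example_threat"}

def toxicity_score(text: str) -> float:
    """Crude stand-in for a toxicity model: the fraction of words
    that appear in the blocklist."""
    words = text.lower().split()
    return sum(w in TOXIC_TERMS for w in words) / len(words) if words else 0.0

def screen_response(text: str, threshold: float = 0.0) -> str:
    """Withhold any response whose toxicity score exceeds the threshold."""
    if toxicity_score(text) > threshold:
        return "[Response withheld: failed toxicity screening]"
    return text
```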

Data Masking and Demasking

Before a prompt is sent to the LLM, any personal data it contains is masked, i.e. replaced with placeholder tokens. On the return journey, the masked values are restored so that replies remain personalised without the LLM ever seeing the underlying data.
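Here is a simplified Python sketch of that round trip, assuming a single PII type (email addresses) detected by regex. The real Trust Layer recognises many entity types with far more sophisticated detection, and the token format here is invented for illustration.

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask(prompt: str) -> tuple[str, dict[str, str]]:
    """Replace each email address with a placeholder token, keeping a
    mapping so the response can be demasked afterwards."""
    mapping: dict[str, str] = {}

    def _tokenise(match: re.Match) -> str:
        token = f"<PII_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token

    return EMAIL_RE.sub(_tokenise, prompt), mapping

def demask(response: str, mapping: dict[str, str]) -> str:
    """Restore the original values in the LLM's reply."""
    for token, value in mapping.items():
        response = response.replace(token, value)
    return response

masked, mapping = mask("Draft a reply to jane@example.com about her renewal.")
# masked == "Draft a reply to <PII_0> about her renewal."
# The LLM only ever sees the token; demask() puts the address back.
```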

Prompt Defense / Guardrails

Prompt Builder adds guardrails that guide the LLM’s behaviour, preventing unintended or harmful outputs. These can include instructions not to answer beyond the knowledge it has been given, or to avoid certain topics altogether. This is imperative in regulated industries where only qualified members of an organisation may give certain advice, e.g. financial or legal advice.
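As a rough illustration, the sketch below wraps every user request in fixed guardrail instructions before it reaches the model. The wording and function are invented for this example; they are not Prompt Builder’s actual template syntax.

```python
GUARDRAILS = """You are a customer-service assistant.
- Answer only from the company knowledge provided below.
- If asked for financial, legal, or medical advice, politely decline
  and refer the customer to a qualified adviser.
- If you do not know the answer, say so; never guess."""

def build_prompt(user_request: str, context: str) -> str:
    """Prepend guardrail instructions so the LLM stays on approved
    topics (illustrative; not Prompt Builder's real mechanism)."""
    return f"{GUARDRAILS}\n\nContext:\n{context}\n\nRequest:\n{user_request}"
```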

Comprehensive Audit Trail

Every interaction, from prompt creation to response delivery, is captured in an audit trail. This enhances accountability, allowing businesses to monitor how their data is processed and used.
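A minimal sketch of what one such log entry could capture is below. The field names and schema are hypothetical, chosen simply to mirror the features described above; they are not Salesforce’s actual audit format.

```python
import json
import uuid
from datetime import datetime, timezone

def audit_entry(user_id: str, masked_prompt: str,
                response: str, toxicity: float) -> str:
    """Serialise one AI interaction as a timestamped, structured log
    line (hypothetical fields, not Salesforce's schema)."""
    return json.dumps({
        "interaction_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "masked_prompt": masked_prompt,  # PII already tokenised
        "response": response,
        "toxicity_score": toxicity,
    })
```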

In Summary

Generative AI as a basis for agentic support and sales is going to be transformational. Businesses that adopt it stand to gain tangible efficiencies: lower staff costs and faster response times for customers.

But it’s not all sunlit uplands. The risks involved in using these tools are real, and ignoring them will almost certainly expose organisations to liability.

With the Einstein Trust Layer, Salesforce customers can harness the power of generative AI and agentic automation whilst maintaining data privacy, security, and ethical responsibility.
