Q&A: Salesforce’s responsible product boss on ethical data use and the rise of AI agents

Salesforce has made a splash at its annual Dreamforce conference with its new AI agents, which have been built into its entire platform to enable bots that can undertake complex tasks in customer service and marketing.

Salesforce considers itself a “trusted advisor” to its customers, who collectively have hundreds of millions of their own customers. So what is the company doing to develop artificial intelligence in a way that upholds that trust and allows customers to make responsible use of the tech?

ITP Techblog caught up with Rob Katz, Vice President, Product - Responsible AI & Tech at Salesforce, to discuss that, as well as the ethical issues raised by the prospect of AI doing away with jobs.

Rob Katz, Salesforce Vice President, Product - Responsible AI & Tech

Salesforce is going big on AI Agents this year. What are the implications of that when it comes to your work around ethics and humane use of technology?

AI is not new to Salesforce. We've been doing AI for at least 11 years, and that started with predictive models – who ought to be the next prospect the salesperson calls, or how we ought to merchandise this bundle of products optimally.

The Office of Ethical and Humane Use was created six years ago. I joined five years ago to spin up our work on what we at that time called product ethics. The office in general, you can think of in three areas: there are principles, there are policies, and there's product.

So principles: we started by outlining a set of ethical use principles that govern how Salesforce broadly thinks about the use of our technology – not just AI, but also data. Because it's super important to think about things like data ethics and the appropriate guardrails to put in place to ensure we meet our customers' expectations, and our regulatory obligations, around the stewardship of our customers' data.

Then we outlined specific red lines – policies we won't cross vis-à-vis the use of our technology. In the AI realm, that took the form last year of publishing what we believe is the first enterprise AI ‘acceptable use’ policy.

It's a set of specific guardrails that govern how our AI can and can't be used. For instance, you can't use Salesforce's AI to make legally or similarly significant decisions without a human making the final decision.

The last is product, and that's the area I lead. We co-design with our engineering, data science, and research science teams how these AI products are built from the ground up, including agents and Agentforce.

What issues do AI agents pose ethically beyond what Salesforce has done with AI technology in the past?

It's an evolution, and we've been building trust patterns and guardrails into our products, especially our AI products, for many years.

A couple of examples. First is something we like to call mindful friction, and that's to help the business user stop and consider for a moment when something might introduce unwanted ethical risk.

There's a product in Salesforce Data Cloud and in Marketing Cloud called generative segment creation, and it's using natural language to create the marketing segment that you're looking for. 

A good example would be fitness enthusiasts who like the newest gear. It goes through your Data Cloud and picks the attributes most likely to identify members of that target segment, and comes back with that. Before this, you would have to go in and search for and select each of the attributes. If you have a lot of unstructured data or data model objects in your Data Cloud, that's a lot of manual work.

So the AI is helping the data analyst or the marketer. In partnership with the team that built it, we ensured that demographic variables that come back – race, age, gender and proxy variables like zip code in the United States – are unchecked by default. The reason is that they might introduce unwanted stereotype bias.

Historically, according to the model and the data it has been trained on, fitness enthusiasts who also like the newest gear might be male. Well, we don't know that. We're just going to uncheck it by default. And if for some reason they want a campaign for men's shoes, well, go ahead, check the box – but we tell the marketer we've unchecked it.

I'm a product person by training, so adding friction to your experience is sort of anathema to what we build. But not in this case. It's: hey, we're here to help. We're here to remind you of the trust guardrails that you might want to stay inside.
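To make the pattern Katz describes concrete, here is a minimal sketch of a segment builder that leaves demographic and proxy attributes unchecked by default and tells the marketer why. The attribute names and the DEMOGRAPHIC_OR_PROXY set are illustrative assumptions, not Salesforce's actual implementation:

```python
# Hypothetical sketch of "mindful friction" defaults; not Salesforce code.
DEMOGRAPHIC_OR_PROXY = {"race", "age", "gender", "zip_code"}

def build_segment_selection(suggested_attributes):
    """Return (attribute, checked_by_default) pairs plus notices for the marketer."""
    selection, notices = [], []
    for attr in suggested_attributes:
        risky = attr in DEMOGRAPHIC_OR_PROXY
        selection.append((attr, not risky))  # risky attributes start unchecked
        if risky:
            notices.append(
                f"'{attr}' was suggested but left unchecked by default; "
                "enable it deliberately if your campaign requires it."
            )
    return selection, notices

selection, notices = build_segment_selection(
    ["purchase_frequency", "gear_affinity", "gender", "zip_code"]
)
for attr, checked in selection:
    print(f"[{'x' if checked else ' '}] {attr}")
for note in notices:
    print("note:", note)
```

The design point is that nothing is forbidden: the AI's suggestion survives, but the risky attributes require a deliberate opt-in rather than a silent default.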

How are you approaching developing and implementing AI guardrails at Salesforce when it comes to AI agents? 

As far as staying on track when it comes to agents, there's this idea of a reasoning engine. Marc Benioff talked about that a lot, and it's like the brain of Agentforce. And the brain has this concept of topic guardrails. Let's say it's customer service. 

Our customer would configure the topics it wants the agent to handle – warranties, returns, exchanges, order tracking, refunds and so on. We tested those topic guardrails with 8,000 adversarial prompts, created using synthetic CRM data plus a set of ‘mutators’ layered on top to vary them.

So for instance, when it comes to our pilot with OpenTable, we want to test that Peter, who is trying to make a reservation at a five-star restaurant, gets the same kind of response as Pedro making the same reservation – and that whether your zip code is associated with a high-income or a low-income neighborhood, you get an equitable answer.

Or if you say you'd like to pay using a debit card versus your Amex Platinum, that you get the same, or an equitable, response. We also solicited testers from our employee base – people really interested in educating themselves – representing a wide variety of backgrounds, identities and experiences.

So it wasn't just an automated approach to red teaming; we brought in a diverse set of employees to do human red teaming as well, because it's a multi-turn experience – you want to go back and forth, again and again. We found that the topic classifier holds up very well to what it should handle.
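As a rough illustration of the mutator-based testing Katz describes, the sketch below generates prompt variants by swapping names, zip codes and payment methods, then checks that a topic classifier routes every variant the same way. The mutators, base prompt and classify_topic stub are hypothetical stand-ins, not Salesforce's actual test harness:

```python
# Illustrative mutator-style adversarial test; all names and stubs are hypothetical.
import itertools

BASE_PROMPT = "Hi, I'm {name} from {zip_code}. I'd like to {request}, paying with {payment}."

MUTATORS = {
    "name": ["Peter", "Pedro"],
    "zip_code": ["90210", "90001"],  # high- vs low-income proxy
    "payment": ["a debit card", "my Amex Platinum"],
}

def classify_topic(prompt: str) -> str:
    """Stand-in for the agent's topic classifier; a real test would call the agent."""
    return "reservations" if "reservation" in prompt else "off_topic"

def adversarial_variants(request: str):
    """Yield every combination of mutator values applied to the base prompt."""
    keys = list(MUTATORS)
    for combo in itertools.product(*(MUTATORS[k] for k in keys)):
        yield BASE_PROMPT.format(request=request, **dict(zip(keys, combo)))

# Equity check: every variant of the same request should land on the same topic.
topics = {classify_topic(p) for p in adversarial_variants("make a reservation")}
assert len(topics) == 1, f"inconsistent routing across variants: {topics}"
print("all variants routed to:", topics.pop())
```

A production harness would run this over thousands of generated prompts and compare full multi-turn responses for equity, not just the topic label.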

Salesforce CEO Marc Benioff has been critical of AI copilots for hallucinating and leaking sensitive data into large language models. How does Salesforce avoid this with its AI agents?

Zero data retention is contractually and technically enforced at the large language model (LLM) gateway through which inputs are sent to a third-party model and outputs come back. That ensures data leakage doesn't happen, so our customers' data aren't used to train a third-party LLM.
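Here is a minimal sketch of the zero-retention idea, under the assumption that the gateway passes opt-out flags to the third-party model and logs only metadata, never the payload. The send_to_model function and its option names are invented for illustration, not a real vendor API:

```python
# Hypothetical zero-retention gateway sketch; send_to_model and its flags are invented.
import time
import hashlib

def send_to_model(prompt: str, opts: dict) -> str:
    """Stand-in for the third-party LLM call; a real gateway would POST to the vendor."""
    return f"<model output for {len(prompt)}-char prompt>"

def gateway_complete(prompt: str) -> str:
    response = send_to_model(prompt, {"retain_data": False, "train_on_inputs": False})
    # The audit log keeps only a hash and timing, so customer data is never persisted.
    audit = {
        "ts": time.time(),
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "prompt_chars": len(prompt),
    }
    print("audit:", audit)
    return response

print(gateway_complete("Summarise this customer's open cases."))
```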

When it comes to hallucinations, that's where retrieval augmented generation (RAG) and ‘grounding’ play a huge role. That’s when you're able to say: here are the five topics I want you to answer, and here is the knowledge base my sales team uses.

I talked about the reasoning engine. That's the brain. The other part is the memory. So think about the grounding, or the RAG, as the memory. So you tell it what its memory is. You don't say, search the whole internet or use all of the training data in your huge model. 

You don't need a huge model for that. You need a small, specific version. You can just ground it and use vector search on top of the RAG – both techniques we're working on inside the AI team to ensure you're getting an accurate answer. And it lowers the propensity for hallucinations.
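Here is a toy version of the grounding pattern described above: retrieve the closest entries from your own knowledge base and pass only those into the prompt, so the model's "memory" is your data rather than the open internet. The bag-of-words "embeddings" are a deliberate simplification; a production system would use learned embeddings and a vector database:

```python
# Toy RAG/grounding sketch; embeddings and knowledge base are illustrative only.
from collections import Counter
import math
import re

KNOWLEDGE_BASE = [
    "Returns are accepted within 30 days with a receipt.",
    "Warranty claims require the original order number.",
    "Order tracking numbers are emailed when the item ships.",
]

def embed(text: str) -> Counter:
    """Fake embedding: a bag-of-words vector (stand-in for a real embedding model)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2):
    """Vector search: rank knowledge-base entries by similarity to the query."""
    q = embed(query)
    return sorted(KNOWLEDGE_BASE, key=lambda doc: cosine(q, embed(doc)), reverse=True)[:k]

question = "How do I track my order?"
context = "\n".join(retrieve(question))
prompt = f"Answer ONLY from this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this grounded prompt is what gets sent to the (small) model
```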

There are plenty of examples at Dreamforce of AI agents being used to make businesses more productive, particularly in customer service and sales. But at least some businesses will use that as an opportunity to lower headcount. What’s your advice to customers about how to approach this in a responsible and ethical way?

I'm an optimist when it comes to the potential for these technologies to improve how we work and how humans and AI work together. Wouldn't it be helpful if doctors spent less time updating charts after meeting with patients and more time one-on-one with them? That's an area where AI could be really helpful.

I was speaking with someone who serves on the police commission here in San Francisco. Their officers use body-worn cameras, and after a shift they have to spend three hours of overtime tagging the video content. It's the least favourite part of those officers' days. That's something AI could help with, and it would allow the police department to spend that overtime on more officers, or on training. Now, if that potential is going to be realised, the technology has to be incredibly trustworthy.

But Salesforce is not saying this is only positive. If anything, we're making it possible for our customers and all of our stakeholders to get upskilled on the latest AI innovations through Trailhead, our free online learning platform. We announced that all of the instructor-led and premium AI courses offered on Trailhead will be free through 2025, which is a way of signalling to our stakeholders and our community: hey, come with us on this journey.

California Governor Gavin Newsom has a stack of AI bills on his desk that he is considering whether to sign. How would you like to see the AI regulatory environment develop?

It's exciting that legislative and regulatory bodies have learned a lot of lessons from earlier moments of technological innovation, and they're saying they want to intentionally create good, clear rules of the road when it comes to regulation.

I think it's important to distinguish between the technology developers, the model developers, the deployers, and the integrators. And I think we're mostly in the deployer and integrator space. 

For the areas that are more settled, like data ethics and data regulation, we just want regulators to provide clarity. And for areas where you're still seeing a lot of innovation happening, we want some clarity about what the expectations are – what's the spirit of the law.

It can’t help that the US has no uniform federal data privacy regulations… 

We've been advocating for a federal privacy bill, and my colleague Ed Britan, who leads our privacy practice, has been very vocal about that. It would be helpful not just for Salesforce, but for our customers, to have clarity in the United States about where things stand when it comes to data privacy. And I think we're seeing more appetite for a coherent national approach to AI, which I'm very excited about.

This interview has been edited for brevity. Peter Griffin attended Dreamforce as a guest of Salesforce.

Cover photo credit: Craig Sybert, Unsplash
