Linda Saunders, Director of Solution Engineering, Salesforce
We’re at a seminal moment for AI.
Advancements in generative capabilities are captivating imaginations and driving business adoption. These models have the potential to change every company and deliver new levels of success for their customers. For the workforce, they can remove repetitive tasks and lower the barrier to entry, creating new opportunities for people without specialist skills to succeed in new spheres.
AI is driving long-term, organisation-wide efficiency and transformation. Marketers, for example, are using AI to identify audience segments and auto-generate landing page images and copy. In sales, teams can easily compose emails, schedule meetings, and prepare for the next interaction. In customer service, agents can generate knowledge articles from past case notes and auto-generate chat replies, increasing customer satisfaction through personalised and expedited service.
AI has long been integral to the Salesforce platform. Every week, Salesforce Einstein delivers over 1 trillion AI-powered predictions, helping businesses close deals faster, hold AI-powered, human-like conversations that answer frequently asked questions, and better understand customer behaviour.
Generative AI will transform the way we work, yet it is not without risks: it gets many things right, but it also gets many things wrong. Company leaders want to embrace generative AI but recognise the need to balance value and risk.
Inclusive and intentional
Organisations need a clear and actionable framework for how to use generative AI, one that aligns their generative AI goals with their business needs and values. It is critical that businesses adopt trusted AI principles and embed ethical guardrails and guidance across products, helping employees, partners, and customers innovate responsibly and catch potential problems before they happen.
Companies can provide trusted, open, real-time generative AI that is enterprise-ready by bringing together AI, data, analytics, and automation.
To help guide the responsible and ethical development and use of these technologies, companies should lean on five guidelines.
Accuracy and trustworthiness: Data is fuel for AI — without high-quality, trusted data, it becomes ‘garbage in, garbage out.’ It is crucial that an AI system’s recommendations are reliable, and that customers can train models on their own data and validate the responses. This can be done by citing sources, explaining why the AI gave the responses it did, and ensuring there is a human in the loop when appropriate rather than relying entirely on automation.
Safety: Mitigating harmful outputs by conducting bias, explainability, and robustness assessments should always be a priority in AI. Organisations must protect the privacy of any personally identifiable information (PII) present in training data to prevent potential harm, and security assessments can help them identify vulnerabilities that may be exploited by bad actors.
Honesty: When collecting data to train and evaluate models, respect data provenance and ensure there is consent to use that data. When training models, teams can use tags, for instance, to instruct their systems to identify personal information in the training data and keep it out of generated output. We must also be transparent that content was created by an AI when it is delivered autonomously; a chatbot’s response to a consumer, for example, could carry a watermark.
Empowerment: AI should play a supporting role for the human. By keeping a human-in-the-loop approach when developing and using generative AI technologies, businesses can validate and test automated workflows with human oversight before unleashing fully autonomous systems, which helps build trust and confidence in the technology among stakeholders and customers.
Sustainability: Responsible AI also means sustainable AI. Language models are described as “large” based on the number of values, or parameters, they use, but larger doesn’t always mean better. The bigger the model, the more energy and water are needed to run the data centres powering it. As we strive to create more accurate models, we should develop right-sized models where possible to reduce our carbon footprint and water use.
A multi-stakeholder approach
It is still early days for this transformative technology, but learning from and partnering with others will unlock the positive potential of AI. When it comes to regulation, no one company or organisation has all of the answers; the real challenge for lawmakers will be keeping up with the pace of change.
Adopting a multi-stakeholder approach across the public and private sectors and civil society is crucial to identifying potential risks and sharing solutions, ensuring these technologies are developed and used inclusively and with intention. Enterprises have a responsibility to use this technology ethically and to mitigate potential harm. Having guidelines and guardrails in place will help companies ensure that the tools they deploy are accurate, safe, and trusted.