Sage calls on the global tech community to take responsibility for the ethical development of Artificial Intelligence (AI) in business. Kriti Sharma, Sage VP of AI and Bots, shared ‘The Ethics of Code: Developing AI for Business with Five Core Principles’. These five core principles for AI represent a set of values that Sage believes the tech community should focus on for the 4th industrial revolution.
Sharma explains, “Building chatbots and AI that help our customers is the easy part. The wider questions that the rising tide of AI brings are broad and currently very topical. Because of this, we developed our AI within a set of guardrails. These are the core principles that we believe help us to ensure our products are safe and ethical. The ‘Ethics of Code’ are designed to protect the user and to ensure that tech giants, such as Sage, are building AI that is safe, secure, fits the use case and, most importantly, is inclusive and reflects the diversity of the users it serves. As a leader in AI for business we would like to call others to task — big businesses, small businesses and hackers alike — and ask them to bear these principles in mind when developing or deploying their own Artificial Intelligence.”
The ‘Ethics of Code’ is a set of values Sage developed as it built the world’s first accounting chatbot, Pegg. Sage proposes that other companies adopt such values to ensure their approach to building AI is both ethical and responsible. The objective is to protect the future business consumers of this emerging technology.
The 5 core principles for AI
The five core principles are:
- AI should reflect the diversity of the users it serves. Both industry and community must develop effective mechanisms to filter bias as well as negative sentiment in the data that AI learns from — ensuring AI does not perpetuate stereotypes.
- AI must be held accountable — and so must users. Users build a relationship with AI and start to trust it after just a few meaningful interactions. With trust comes responsibility. AI needs to be held accountable for its actions and decisions, just like humans. Technology should not be allowed to become too clever to be accountable. ‘We don’t accept this kind of behaviour from other ‘expert’ professions, so why should technology be the exception?’
- Reward AI for ‘showing its workings’. Any AI system learning from bad examples could end up becoming socially inappropriate — we have to remember that most AI today has no cognition of what it is saying. Only broad listening and learning from diverse data sets will solve this. One approach is to develop a reward mechanism when training AI. Reinforcement learning measures should be built not just on what AI or robots do to achieve an outcome, but also on how AI and robots align with human values to accomplish that particular result.
- AI should level the playing field. Voice technology and social robots provide newly accessible solutions, specifically to people disadvantaged by sight problems, dyslexia and limited mobility. The business technology community needs to accelerate the development of new technologies to level the playing field and broaden the available talent pool.
- AI will replace, but it must also create. There will be new opportunities created by the robotification of tasks, and we need to train humans for these prospects. If business and AI work together it will enable people to focus on what they are good at — building relationships and caring for customers.
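The third principle — rewarding AI not only for the outcome but for how it aligns with human values along the way — can be sketched as a simple reward-shaping function. This is an illustrative toy, not Sage's implementation: `shaped_reward`, `value_checks` and the politeness checks below are all hypothetical names, and real reinforcement-learning reward design is far more involved.

```python
# Illustrative reward shaping: blend the raw task reward with an
# "alignment" score derived from a set of human-value checks.
# All names here are hypothetical, for demonstration only.
from typing import Callable, Iterable


def shaped_reward(
    task_reward: float,
    action: str,
    value_checks: Iterable[Callable[[str], bool]],
    alignment_weight: float = 0.5,
) -> float:
    """Return a reward that mixes task success with value alignment.

    alignment is the fraction of value checks the action passes (0..1);
    alignment_weight controls how much it counts versus the task reward.
    """
    checks = list(value_checks)
    if not checks:
        return task_reward
    alignment = sum(check(action) for check in checks) / len(checks)
    return (1 - alignment_weight) * task_reward + alignment_weight * alignment


# Two toy "human value" checks: be polite, don't shout.
checks = [lambda a: "please" in a, lambda a: "!" not in a]

# Same task outcome (reward 1.0), different behaviour along the way:
print(shaped_reward(1.0, "submit the report please", checks))  # 1.0
print(shaped_reward(1.0, "DO IT NOW!", checks))                # 0.5
```

The point of the sketch is that two actions achieving the same outcome receive different rewards once alignment with stated values is part of the signal — the shaping term is where ‘showing its workings’ gets paid for.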
Pegg as an example
The launch of chatbot Pegg demonstrates Sage’s commitment to this technology. Sage designed it to free customers from the mundane admin that prevents them from focusing on higher-value tasks. Pegg acts as a smart assistant for small businesses. It enables users to track expenses and manage finances through popular messaging apps like Facebook Messenger and Slack. Sage claims 100% compliance with its own core AI principles.
One year on, Sage asserts that Pegg has tens of thousands of users across 135 countries worldwide. In its next phase of development, Sage will integrate Pegg with Sage One, its entry-level cloud accounting tool. Sage One with Pegg will become available in Canada in the summer of 2017.
What does it mean?
Sage is eating its own dog food. For example, it hopes to deepen the AI talent pool with a UK first: a rolling program of BotCamps. These will equip school leavers aged 16–25 with basic bot and AI coding skills. Possessing these skills should open opportunities at the leading edge of technology. By providing accessible technology skills, it will help people participate in, rather than fear, the 4th industrial revolution.
Equally, Sage is using machine learning and AI in its own products, with Pegg the most obvious first example. Its popularity demonstrates that enterprises and smaller businesses have an appetite for anything with the potential to reduce administration, in this case accounting.
The publication of Sage’s 5 Core Principles is significant. It will not be the last step forward. Hopefully, each subsequent iteration will improve and broaden the base of core values which underpin AI exploitation. As with autonomous driving, familiarity should breed acceptance.