Cambodia’s Artificial Intelligence (AI) expert Dr. Supheakmungkol Sarin said that the rapid evolution of AI technology has made it difficult for regulation to keep up. He argued that existing regulations may need to be clarified or updated, and that their development should be a collective effort, involving not only government and industry, but users themselves.
Dr. Sarin, originally from Cambodia, is the Head of Data and AI Ecosystems at the World Economic Forum (WEF). Prior to his current position, he spent more than a decade at Google working on data management for machine learning and AI.
During a special in-depth discussion on AI and how it is shaping our world and businesses at Hewlett Packard Enterprise (HPE) Discover 2023 in Las Vegas, Dr. Sarin joined Dr. Eng Lim Goh, of HPE, and Chase Lochmiller, of Crusoe Energy, to give their insights into the current state of generative AI, the potential benefits for business and risks of the technology, and the challenges of developing and using it responsibly.
The discussion, “Adapting to an AI World”, is available on the Hewlett Packard Enterprise YouTube channel and was hosted by technology journalist Shibani Joshi.
Generative AI has the potential to revolutionize business
Dr. Eng Lim Goh, Senior Vice President of Data and AI at HPE, believes that generative AI chatbots can be used for both creative and business purposes.
Prompt engineering improves users’ interactions with AI systems, especially for business purposes, he said. It is the practice of crafting specific commands or questions so that the AI system generates the most accurate and useful answer.
Prompt engineering can be used to improve the performance of AI systems in a number of ways, including improving accuracy, enhancing creativity and making AI systems more user-friendly.
Dr. Goh mentioned that prompt engineering can be used to ask a chatbot to explain its answers in steps, which can help users understand the reasoning behind its responses.
He said, “You could coax it [prompt engineering in AI] to describe your answer in steps, and at each step, it gives its reasons for your answer. This is one way of prompt engineering to get more out of the system.”
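The technique Dr. Goh describes can be sketched as a simple prompt template: a plain question is wrapped in instructions asking the system to answer in numbered steps and justify each one. This is a minimal illustration; the template wording and the function name are illustrative assumptions, not a fixed standard, and the resulting prompt would be sent to whatever chat-style AI system is in use.

```python
# Minimal sketch of step-by-step prompt engineering: wrap a question in
# instructions that ask the chatbot to answer in steps and give its
# reasons at each step. The template text is an illustrative assumption.

def make_step_by_step_prompt(question: str) -> str:
    """Wrap a question in a show-your-reasoning prompt template."""
    return (
        "Answer the following question in numbered steps. "
        "At each step, briefly explain the reason for that step.\n\n"
        f"Question: {question}"
    )

# The engineered prompt would then be submitted to a chatbot as-is.
prompt = make_step_by_step_prompt(
    "Should we expand into a new market next quarter?"
)
print(prompt)
```

The same wrapper works for any question, which is the point of prompt engineering: the extra instructions, not the underlying model, are what coax out the step-by-step reasoning.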
Chatbots have access to a vast amount of information, which can expand what any user is able to do. He noted that chatbots can therefore generate ideas that individuals may not have thought of on their own.
Dr. Goh used the analogy of a human reading 1,000 books in a lifetime versus a chatbot reading 10 million books to illustrate the potential value of chatbots.
“Humans probably in a lifetime would read the equivalent of maybe 1,000 books. But AI has read an equivalent of 10 million books. So, as you converse with these things, there are certain things that it has seen that could be very interesting for you because you've not read as many books as it has,” he said.
He continued, “On the creative side, there are ways to bounce ideas off these chatbots and get responses back. And there are times when you get ideas that you never thought of. So that's one value.”
Dr. Sarin also believes that generative AI has the potential to generate significant economic benefits. “There's a lot of benefits. For example, a recent Goldman Sachs research found that this technology could lead to a 7 percent increase in GDP,” he said.
Risks of Generative AI
When using chatbots for business, it is important to be careful with the information provided to them. Chase Lochmiller, founder and Chief Executive Officer of Crusoe Energy, said that users should be thoughtful about using AI for critical decision-making.
Unlike generating fun images, such as a unique picture of an elephant playing poker in the desert, feeding information into AI for critical decision-making is a serious matter.
He believes that AI is difficult to regulate because it is global in nature, empowered by the internet and data. Even so, he believes certain regulations must apply to the technology.
Dr. Sarin also pointed to the technology’s risks. He said, “But there's a lot of bias, disinformation, malalignment to human values and whatnot. So it's really risky.”
He compared the current state of AI regulation to roads in the 1900s, before there were traffic lights, lanes, or speed limits. He argued that the rapid evolution of AI technology has made it difficult to keep up with the need for regulation.
He acknowledged that existing regulations could address some of the challenges posed by AI, but argued that they may need to be clarified or updated. He also suggested that entirely new regulations may be needed.
A collective effort is needed to prevent risks
Dr. Sarin emphasized that the development of AI regulations should be a collective effort, involving not only government and industry, but also users themselves. He said, “There might be a need for some new regulations. But again, this is not something only one person or one group can decide. It should be a collective effort. It should be a discussion.”
He argued that users should be given a voice in the conversation about AI regulation, so that their concerns can be addressed. “How do we not only support them [users] to have a voice, but enable them to be in the conversation, raise their concern and see what is important for them?” he added.
That is why the WEF founded the AI Governance Alliance, which develops standards and guidelines for the responsible development and use of generative AI.
Dr. Sarin said, “In our summit back in April, we took it a step forward and launched the AI Governance Alliance because we know that this is really important and crucial at this point.”
The WEF is not the only organization working on responsible AI, and it plans to collaborate with others on the issue. He said, “Of course, there are other important organizations that are working to solve this issue as well and then we will be working together with them on this issue.”
On July 26, OpenAI announced that Anthropic, Google, Microsoft, and OpenAI had launched the Frontier Model Forum, an industry body focused on ensuring the safe and responsible development of frontier AI models.
The forum focuses on three key areas: identifying best practices, advancing AI safety research, and facilitating information sharing among companies and governments.
In conclusion, generative AI has the potential to revolutionize business by generating new ideas, enhancing creativity, and making AI systems more user-friendly. However, there are also risks associated with generative AI, such as bias, disinformation, and malalignment with human values. It is important to be careful with the information provided to chatbots and to use them responsibly.