73% of consumers trust what generative AI wants us to see

Keen to save time and effort, 73% of consumers trust content produced by generative artificial intelligence (AI) tools, with respondents in Norway and Singapore among the most and least trusting, respectively. 

At 51%, just over half were aware of the latest generative AI trends and had explored the available tools, according to a survey that polled 10,000 respondents across 13 markets, including Australia, Japan, Sweden, the US, and the UK. The findings were released by Capgemini Research Institute, which conducted the study with search analysis covering April 2022 to March 2023 and social analysis covering October 2022 to April 2023. 

About 52% use generative AI tools to generate content such as email and essays, while 28% tap the technology for brainstorming, and 23% use the tools to gather general information on a range of topics, including science, history, and technology. 

Chatbots are the most popular tools, used multiple times a week by 15% of respondents who tap generative AI, followed by gaming applications powered by generative AI, which are used by 11% of respondents. 

Capgemini further noted that AI art featuring anime characters was popular among respondents in Asia-Pacific and Canada, although users in these markets expressed caution about generative AI due to incorrect or inaccurate results. 

Across the board, trust in content generated by such tools was the highest in Norway at 79%, followed by 75% in Spain. The lowest level of trust was in Singapore and Germany, at 70% in each of the markets.  

Capgemini suggested the high trust levels may stem from the efficiency gains these tools deliver, as well as the availability of personalized content in a “ready-to-use format”, particularly for tasks such as software coding and multimedia creation.

“Tools such as ChatGPT respond to users’ prompts in a clear, easily understood manner, and it is possible consumers equate this clarity with accuracy,” the report noted. “Further, popular generative AI applications such as ChatGPT are backed by renowned tech firms such as Microsoft and Alphabet, which endorsements also increase consumer trust levels.”

More than half, 53% globally, said they trust generative AI to help with their financial planning. Tools in this space include Alpha, which is powered by OpenAI’s GPT-4, and FinChat, also powered by ChatGPT. 

In addition, 67% said they could benefit from receiving medical diagnoses and advice from generative AI, with another 63% excited about the possibility of generative AI supporting more accurate and efficient drug discovery. Some 66% would seek out generative AI-powered advice for personal relationships or life and career plans. 

At 49%, just under half said they were not concerned about the potential for generative AI to be used to create fake news stories. Just 34% were concerned about phishing attacks, while 33% expressed concerns about copyright issues, and an even lower 27% were worried about the use of generative AI tools to copy competitors’ product designs. 

“The awareness of generative AI among consumers globally is remarkable and the rate of adoption has been massive. Yet, the understanding of how this technology works and the associated risks are still very low,” said Niraj Parihar, CEO of Capgemini’s insights and data global business line. “While regulation is critical, business and technology partners also have an important role to play in providing education and enforcing the safeguards that address concerns around the ethics and misuse of generative AI.” 

Noting that Capgemini helps its clients create use cases within an ethical framework, Parihar said: “Generative AI is not ‘intelligent’ in itself; the intelligence stems from the human experts who these tools will assist and support. The key to success, therefore, as with any AI, is the safeguards that humans build around them to guarantee the quality of its output.” 

The study found that 43% were keen for businesses to apply generative AI across customer interactions, while 70% already use such tools for product or service recommendations in place of traditional methods such as search. 
