Attitudes towards AI: consumers still lack trust and general awareness

Artificial intelligence (AI) is becoming more prevalent globally, with machine learning powering products and services spanning retail, healthcare, manufacturing, and transportation.

Alongside this ever-growing expansion, consumers remain sceptical about how AI can be used and the potential negative consequences it can have.

In a Cisco study, 56% of respondents reported that they were concerned about how businesses are using AI today and believed AI decision-making is often hard for the average person to understand. Likewise, 72% of respondents believed organisations have a responsibility to use AI only in responsible and ethical ways (1).

However, a key driver of this scepticism is consumers' general lack of awareness of the technology.

In the UK, the Centre for Data Ethics and Innovation (CDEI) published a report in March 2022 on how attitudes towards data and AI have changed over time, as well as the drivers of trust. It found that respondents had limited knowledge of AI: only 13% felt they could properly explain what it was, and those with the lowest digital familiarity and knowledge of AI more frequently associated it with feelings of worry and fear (2). Another takeaway from the CDEI survey was that even where the benefits of AI were acknowledged, there were still concerns about how those benefits would be distributed; the public does not expect the benefits of AI to be felt equally across society.

However, numerous countries worldwide are designing AI strategies and legislative frameworks to ensure the technology is used responsibly.

The EU's AI Act, proposed in April 2021, aims to regulate high-risk AI systems. It proposes banning AI systems that manipulate people or exploit vulnerable individuals, systems used for social scoring, and real-time biometric identification systems used in public spaces for law enforcement purposes, such as facial recognition technology (FRT) (3).

Public opinion on AI tends to be nuanced: many people are more comfortable with the idea of AI in certain contexts, particularly where it proves beneficial. For example, 40% of respondents to a Cisco study agreed that AI can be useful in improving their lives, particularly in sectors such as healthcare and retail (4).

Interestingly, public attitudes towards AI also differ by geography. The Pew Research Center found that AI was viewed far more favourably in Asia (60% of respondents in Singapore, South Korea, Taiwan, and Japan said it was good for society) than in the West (5).

The key takeaways from these attitudes are that trust is essential, and that helping consumers better understand the technology can only be a benefit.

Keeping an open dialogue between programmers, product owners, regulators, and consumers - as well as ensuring companies embed ethics in their practices, for example by appointing independent ethics advisory boards and data ethics scientists - is fundamental.


This is part of a five-part series, “Consumers are moving to services that protect their data and privacy”, which explores consumer attitudes towards data privacy, social media, and video surveillance in an age where technology relies more and more on personal and biometric data.

