Most consumers in EMEA believe that AI will have a considerable impact in the next five years, according to research conducted by Alteryx, an enterprise analytics AI firm. However, a substantial portion of them doubt AI’s value, and a significant number express concerns about its applications.
Ever since ChatGPT was unveiled by OpenAI in November 2022, there has been much excitement surrounding the game-changing possibilities of generative AI. It is widely regarded as one of the most groundbreaking technologies of our era.
The report, ‘Market Research: Attitudes and Adoption of Generative AI,’ surveyed 690 IT business leaders and 1,100 members of the general public in EMEA. Given the high percentage of organizations acknowledging generative AI’s positive impact on their business, there is a clear need to bridge the gap and demonstrate AI’s value to consumers in both their personal and professional lives. At the same time, the report identifies significant concerns around trust, ethics, and skills that could hinder the successful implementation and broader acceptance of generative AI.
The effects of false information, errors, and AI hallucinations
Hallucinations, in which artificial intelligence produces erroneous or nonsensical results, are a serious worry. For consumers and business executives alike, trusting the output of generative AI is a significant concern. More than a third of the public express anxiety about AI’s capacity to create fake news (36%) and its abuse by hackers (42%), while half of business executives (50%) report that their companies have struggled with false information generated by generative AI.
Furthermore, some have questioned the accuracy of the data produced by generative AI. According to public feedback, 38% found AI-generated information to be outdated, and half deemed it inaccurate. Concerns on the business side include generative AI producing unexpected or unwanted outputs (36%) and violating copyright or intellectual property rights (40%).
AI hallucinations are a major trust risk for both the public (74%) and corporations (62%). Enterprises must address these issues by applying generative AI to suitable use cases, supported by the right technology and safety protocols. Nearly half of consumers (45%) support regulations governing the use of AI.
Generative AI raises ethical and risk issues
In addition to these difficulties, consumers and business executives hold strong opinions on generative AI’s hazards and ethical issues. A majority of the public (53%) oppose using generative AI to make ethical decisions, while 41% of business respondents are worried about its use in critical decision-making areas. The specific domains where its use is discouraged differ: consumers strongly object to its use in politics (46%), while businesses are cautious about its application in healthcare (40%).
The study results validate these concerns by highlighting gaps in organizational practices. Only 33% of executives said their companies ensure that the data used to train generative AI is unbiased and diverse. Just 52% of organizations have established data security and privacy policies, and only 36% have set ethical standards for generative AI applications.
This disregard for data integrity and ethics puts businesses at risk. Ethics is the chief generative AI concern among corporate executives (63%), with data-related issues close behind (62%). These findings underscore how crucial stronger governance is to fostering trust and reducing the risks associated with employees’ use of generative AI at work.
Building generative AI skills and data literacy
Realizing generative AI’s full potential depends on building skill sets and continuing to develop them as the technology evolves. Consumers already use generative AI tools in various contexts, such as information retrieval, email communication, and skill acquisition.
Business executives report using generative AI for data analysis, cybersecurity, and customer support. However, despite the reported success of pilot projects, several challenges persist, including security risks, data privacy concerns, and output quality and reliability.
Trevor Schulze, Alteryx’s Chief Information Officer, underscored the importance of a comprehensive understanding of AI’s value and the resolution of common concerns among enterprises and the general public as they navigate the initial phases of generative AI adoption.
He observed that addressing trust issues, ethical concerns, skills gaps, fears of privacy invasion, and algorithmic bias is imperative. Schulze emphasized that enterprises must accelerate their data journey, implement rigorous governance, and enable non-technical individuals to access and analyze data securely and reliably in order to genuinely capitalize on this “game-changing” technology while addressing privacy and bias concerns.