Business Wire
Published on : Nov 16, 2023
More than half (55%) of consumers surveyed believe they understand how Generative AI (GenAI) models are trained. However, nearly two-thirds (60%) were not aware that some media companies (recently, The New York Times) have restricted access to their information and data, including articles and general site content, for the training of GenAI models.
That’s according to a recent TELUS International survey of 1,000 U.S. adults who are familiar with GenAI.
Regarding the impact of GenAI models not being informed by media companies' content, over half of all consumers indicated concern that the content will be inaccurate (54%) or biased (59%). When asked what alternative sources they would most likely trust to educate and inform GenAI models, higher education institutions (48%) and scientific journals (44%) were the top choices. Conversely, the least trusted sources were social media conversations (45%), review websites (27%) and brand websites (25%).
Transparency And Responsibility Lie With Brands
“There is growing concern by media companies and content creators about what becomes of their intellectual property when it is used as source material to train GenAI models, so naturally they are beginning to set guardrails. Many media companies have already updated their terms and conditions to include rules that forbid their content from being used to train AI systems and are blocking AI web crawlers from accessing their text, images, audio and video clips, and photos,” said Siobhan Hanna, VP and Managing Director, AI Data Solutions, TELUS International. “Given that we are in the early stages of developing industry regulations for all aspects of GenAI, including the sourcing of data, it's crucial that companies take responsibility to do the right thing from the very beginning. To protect themselves from potential fines, penalties, legal action and negative brand impacts, those working on AI deliverables must carefully consider where they are scraping or otherwise extracting the data that powers their models. Moreover, this is where a ‘humanity-in-the-loop’ approach to AI is so critical. Even though regulations and permissions around copyrighted material are still emerging, companies need to consider the broader societal impacts of their actions to ensure that they are operating in a fair and ethical manner.”
No matter the source, 75% of consumers want companies to be transparent and explicit about how they’re sourcing the data they’re using to power their GenAI models. Additionally, more than half (52%) of all consumers believe the companies developing and building GenAI applications have a responsibility to “police” the information that’s being used and determine if it is being used with the content creator’s permission.
Consumers Unsure About Accuracy, Concerned About Bias; Sentiment Varies by Generation
“Since its launch a year ago, we have witnessed the unprecedented rise and development of GenAI. While many of the use cases are incredible, including its ability to fast-track medical diagnoses and pharmaceutical developments, as well as to aid accessibility for those with disabilities, so too are the potential threats posed by its irresponsible use and the possibility of perpetuating and exacerbating false or biased data,” said Hanna. “To effectively mitigate this and to ensure a safe online experience for all users, companies must source trusted and diverse training datasets, created by leveraging a combination of technology and humans to oversee the output when deploying GenAI.”