Algorithmic consent: Why informed consent matters in an age of artificial intelligence
The last few years have witnessed a progressive technological, legal, and cultural shift in the way consumers perceive, share, secure, and consent to the use of their data on the internet. This evolution in consumer behaviour has changed the approach to business for everyone, from advertisers and financial companies to healthcare providers and social media platforms. It is this influx of data that has fuelled the growth of big data, and thereby its applications in artificial intelligence (AI) across industries.
Big data is commonly characterized by three ‘V’s – volume (the quantity of data required for an effective and accurate AI model), velocity (the rate at which the data needs to stream for dynamic results), and variety (the multiple source formats necessary to make it an agile and flexible predictor).
AI models perform better and produce more meaningful results when they can learn from larger and more diverse sets of representative data. This brings us to the consent required for the collection, storage, processing, and sharing of that data, the very element of this framework that puts privacy and compliance in the spotlight. We know today that as artificial intelligence evolves, its capacity to manipulate personal information can infringe on privacy interests.
Third-party sites that capture consumer data and behaviour for this purpose have become commonplace. However, the way these websites and apps set up consent frameworks can have serious ramifications, as evidenced by the infamous Cambridge Analytica-Facebook case.
Consent is the cornerstone of privacy in AI. While it sounds fairly straightforward as a concept, consent is far more nuanced than the generic ‘accept all cookies’ pop-up that we thoughtlessly click each time we’re on the internet. Consent is only meaningful and valid when it is informed: when the consumer knows exactly what data they are consenting to share, with whom, what their data will be used for, how and where it will be stored, and for how long.
What’s more, consent cannot be limited to the time of data collection; each time a business reuses or repurposes a consumer’s data beyond the original reason, the nature, purpose, and consequences of the processing need to be reiterated explicitly, and consent re-solicited. In addition to providing consumers with unequivocal control over their data, a major challenge for companies is to do so in a user-friendly, conspicuous, and legitimate manner.
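This purpose-binding idea can be sketched in code. The snippet below is a minimal illustration only, not a reference to any real consent-management library; the class and function names (`ConsentRecord`, `may_process`) and the purpose labels are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Hypothetical consent record: one entry per purpose the user
    was explicitly informed of and agreed to."""
    user_id: str
    purposes: set = field(default_factory=set)

    def grant(self, purpose: str) -> None:
        # Record explicit consent for a single, named purpose.
        self.purposes.add(purpose)

def may_process(record: ConsentRecord, purpose: str) -> bool:
    """Processing is permitted only for purposes the user consented to;
    any new purpose requires the business to re-solicit consent."""
    return purpose in record.purposes

record = ConsentRecord(user_id="u-123")
record.grant("order_fulfilment")

may_process(record, "order_fulfilment")  # consented at collection time
may_process(record, "ad_targeting")      # repurposing: fresh consent needed
```

The key design point is that consent is stored per purpose rather than as a single boolean, so any attempt to reuse the data for an unstated purpose fails the check and forces a new consent request.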
Several studies have indicated that consumers simply do not pay attention to consent requests placed alongside a barrage of other pop-ups, with lengthy and vague descriptions of the reason behind data collection. This format of consent collection, also known as ‘notice-and-consent’, does not enable individuals to make well-informed decisions about sharing their data and is strongly advised against.
Instead, it relies on their lack of scrutiny of the fine print to acquire their personal information. This ties in closely with users being able to give their consent freely: if declining to share personal data is detrimental to the user’s purpose or denies them access to services, it does not qualify as true consent.
The implications of this heightened awareness and regulation of user consent and privacy don’t end there. The world’s best-known data protection framework, the GDPR (General Data Protection Regulation), places heavy emphasis on users having the ability not only to provide informed consent but also to withdraw it. This principle affords consumers the right to have any personal information that companies may have stored erased, and to obtain proof of erasure as well. This is significant for controllers of data banks feeding AI models, since it means retracting that data from every application that is learning from it, and potentially erasing any results or outcomes derived from the individual’s personal data. A big ask!
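To illustrate why retraction is such a big ask, the sketch below shows, under purely hypothetical names, what a controller would have to track: not just the user's stored records, but every downstream system that learned from them and must be purged or retrained before proof of erasure can be issued. This is an illustration of the bookkeeping problem, not a real erasure or machine-unlearning API.

```python
def erase_user(user_id: str, data_store: dict, downstream: dict) -> dict:
    """Delete a user's records and report which dependent systems
    (e.g. trained models, analytics caches) still need purging."""
    removed = data_store.pop(user_id, None)
    # Every downstream consumer that ingested this user's data is affected.
    affected = [name for name, users in downstream.items() if user_id in users]
    for name in affected:
        downstream[name].discard(user_id)  # flag: model must be retrained/unlearned
    return {"erased": removed is not None, "systems_to_purge": affected}

store = {"u-123": {"email": "a@example.com"}}
consumers = {"recommender_model": {"u-123", "u-456"}, "churn_model": {"u-456"}}

report = erase_user("u-123", store, consumers)
# report["systems_to_purge"] lists every application that learned from the data
```

Even in this toy version, deleting the source record is the easy part; the hard part, which the returned report only flags, is removing the individual's influence from models already trained on the data.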
While strides are being made towards empowering users with control over their personal data, companies across sectors are still debating the implications for the progress of effective and accurate artificial intelligence. The middle ground is for AI designers to build a bridge of trust between their technology, the individuals who provide their data, and the consumers of AI-processed output, so that people feel secure sharing their data in a way that benefits the evolution of artificial intelligence, but not at a cost to personal data protection.
This article has been written by Barry Cook, Group Data Protection Officer, VFS Global