In times of increasing inflation and precarization, more and more people can afford less food and have fewer financial resources to secure their livelihoods. In the face of this dire predicament, it may seem beside the point to ask about algorithmically driven consumption and digitally optimized payment methods. But by examining the politics of smile-to-pay technologies, Mathana shows that these questions are essential to any critique of, and resistance to, capitalism.
*
The study of shopping, and of the way we pay for the things we consume, is big business. Corporate marketing teams intensely research personal preferences and cultural attitudes towards consumption, data aggregators design algorithms that allow advertisers to micro-target customers, and entire university centers are dedicated to consumer insights, researching how we shop. While individual attitudes towards how we pay for a given item, when multiple payment options are available, are complex, much is being done to nudge individuals towards non-cash payments at points of sale. In more and more places, non-cash payment is becoming the default.
As the forces that drive commerce shift preference away from cash and towards contactless, automated payment processing, companies are rolling out a new generation of point-of-sale technologies, including systems that scan parts of the human body to use as unique identifiers for payment authentication. In this quiet technological evolution of consumption, biometric sensors are becoming increasingly prevalent at points of sale, using algorithms to identify specific attributes of the human body in order to carry out transactions. Prominent biometric payment systems currently in use in stores include Mastercard's iris scanner and Amazon's palm-print reader, and other large global payment processors such as Visa and Stripe are leaning into unique biometric identifiers for payment authentication.
As facial recognition technologies have become more popular and accurate, a number of companies, including Mastercard and Alipay, have gravitated towards a very specific trend in gesture-based automation at check-out: using one's smile to trigger payment authentication. This emotion-detection sensor technology builds on existing tools by combining 3D cameras, facial recognition software and gesture-recognition algorithms to cross-reference enrolled shoppers against the company's biometric database, and then triggers an authentication mechanism for payment processors when the customer smiles. The marketing teams of the companies rolling out these point-of-sale technologies brand them as 'convenient' alternatives to cash, card and other contactless payments such as mobile. But while this new payment-processing ecosystem is billed as consumer-friendly tech, it is rife with privacy implications, security concerns and the ecological costs of the massive data centers required to power such technologies.
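The two-step flow these vendors describe — match a captured face against an enrolled biometric database, then treat a detected smile as the trigger for payment authorization — can be sketched in a few lines. This is a minimal illustrative mock: every name, data structure and threshold below is an assumption made for the sketch, not any vendor's actual system or API.

```python
from dataclasses import dataclass

# Hypothetical sketch of a smile-to-pay flow. Embeddings stand in for
# the feature vectors a real facial-recognition model would produce;
# thresholds are arbitrary illustrative values.

@dataclass
class Frame:
    face_embedding: tuple  # mock facial-recognition feature vector
    smile_score: float     # mock gesture-recognition output, 0.0 to 1.0

# Mock database of enrolled shoppers' biometric templates.
ENROLLED = {
    "customer-123": (0.1, 0.9, 0.3),
}

MATCH_THRESHOLD = 0.05  # max embedding distance to count as the same face
SMILE_THRESHOLD = 0.8   # min smile score to count as the trigger gesture

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def identify(frame):
    """Cross-reference the captured face against the enrolled database."""
    for customer_id, template in ENROLLED.items():
        if euclidean(frame.face_embedding, template) < MATCH_THRESHOLD:
            return customer_id
    return None

def authorize_payment(frame):
    """Authorize only when an enrolled face is matched AND smiling."""
    customer = identify(frame)
    if customer is None:
        return "unknown face: no charge"
    if frame.smile_score < SMILE_THRESHOLD:
        return "no smile detected: no charge"
    return f"payment authorized for {customer}"
```

Even in this toy form, the design choice is visible: the smile is not evidence of anything, just a gate condition — the gesture, not the emotion, completes the transaction.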
Next-gen consumption: Risks versus rewards
While it may sound pulled from science fiction, the baseline technologies for using our emotive states as a form of payment have existed for a while. The first reported use of pay-with-your-smile technology took place in 2017 at the KPro eatery, a KFC spin-off in Hangzhou, China, through a partnership with Alipay, the ubiquitous Chinese payment platform. Since then, companies have increasingly adopted this very specific payment-processing mechanism, in which customers must feign happiness to complete their purchase. Eight years later, pay-with-your-smile tech has spread around the world. In Russia, the country's largest food retailer, X5 Group, has teamed up with Sber, a banking and financial services company that is majority-owned by the state, to launch the world's largest smile-to-pay service in thousands of stores of Perekrestok, Russia's largest supermarket chain. The phenomenon is now going global: Mastercard, the world's second-largest payment-processing company, is trialing a 'smile to pay' system in Brazil, with plans to roll out the system in the Middle East and Asia. As large companies and major retailers lean into this technology, consumer protections will be essential to avoid losses of privacy and to ensure security. Without checks and balances, the broader social impact of algorithmic automated payment processing could have far-reaching implications.
Beyond tracking our purchases, the introduction of facial recognition-enabled terminals raises privacy concerns even for shoppers who aren't enrolled in these services, as more and more of the private infrastructure of our cities becomes sentinel stations capable of identifying the biometric signatures of passers-by. The very presence of biometric and emotion-detecting sensors in everyday shopping places like supermarkets changes the privacy dynamics between people and the city, and creates a new input surface that could be exploited by hackers, police, spies or third-party automated tracking systems. As a new technology, human-detection algorithms will need oversight to function safely. The fusion of technology and finance means that entities with enormous resources are able to trial different, experimental forms of payment technology in different markets, putting the onus on regional regulators to map out consumer protections within the context of a global technology market.
Policy matters: A European view of “emotion” “recognition”
On August 1st, 2024, the EU AI Act came into force, opening a two-year window before the law becomes fully applicable in 2026. It was heralded by the European Commission as the "world's first comprehensive regulation on artificial intelligence", created to "ensure that AI developed and used in the EU is trustworthy, with safeguards to protect people's fundamental rights." While the Act seeks to do many things, one of its major components is a new system of risk categories for specific applications of AI technologies. Article 6 of the Act enshrines classification rules for "high-risk" AI systems, and one of these high-risk categories is "emotion recognition." Further, Annex III of the EU AI Act seems pretty straightforward in describing the type of biometric algorithms that detect human affect and are considered "high-risk": "AI systems intended to be used for emotion recognition." According to the Act's definition in Article 3, an "emotion recognition system" means "an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data."
This definition, however, creates an almost metaphysical quirk in the way the EU AI Act describes what constitutes emotion recognition: the Act essentially pins the capture of an outward expression to an underlying emotional state. This has significant implications for the regulatory impetus to govern 'smile to pay' systems. If companies can make the case to European regulators that a 'smile to pay' system does not constitute emotion recognition, because the smile is not inherently tied to an underlying emotional state, then such systems could skirt the 'high-risk' category completely. This is an instance in which a well-meaning aim of public policy is preemptively curtailed by its underlying definitional rationale. If compelling a human to perform a gesture in order to complete a financial transaction is not seen by the EU AI Act as strictly a case of emotion recognition, then it could become permissible. While a number of prominent digital rights organizations called for an outright ban on emotion recognition, and for its inclusion as a 'high-risk' category, it seems that smile-to-pay systems could potentially be rolled out in Europe in compliance with the AI Act. Policymakers in the EU have attempted to implement regulatory safeguards for certain uses of AI, but how the Act will be enforced will be crucial. A close examination of the Act's policy language creates the impression that there are potential workarounds for companies seeking to roll out pay-with-your-smile technologies.
Even as Europe seeks to position itself as a new benchmark for how to responsibly regulate algorithms, the AI Act contains carve-outs for 'national security' purposes, and advocacy organizations have already called out the legislation for failing to protect vulnerable populations. In a political climate marked by increasing anti-immigration sentiment and a rise of the far right in prominent member states, fundamental protections will be tested by new technological capacities. Enforcement of algorithmic compliance will sit with the EU's new AI Office, established within the European Commission, meaning the political headwinds of individual nation states could influence enforcement decisions in the long term.
Reclaiming consumption in the age of the algorithm
As both vendors and payment processors seek to collect more transaction information for 'consumer insights,' the companies that manage point-of-sale interfaces with biometric payment processing may be inclined to encourage users both to pay with their bodies and to shape consumption dynamics, for instance by seeking to drive demand through tactics such as discounts for enrolled customers. Companies like Mastercard have entire units dedicated to marketing services, and lobbied the EU in the build-up to the AI Act. The dynamics of this next generation of consumption technology are clear: more technology, less cash; more data collection, less consumer privacy.
While pay-with-your-smile may save a few seconds by seemingly streamlining point-of-sale purchasing, the broader implication of handing over high-resolution details of our body's immutable attributes for the sake of convenience is something of a Faustian exchange. In the long term, we don't know how and why our bio-prints will be used. Before affect-detection systems like pay-with-your-smile become the norm, society should examine the ethics and social trade-offs that come with humans performing emotion for machines in order to pay for their shopping.
As new intermediaries are inserted into payment-processing infrastructures, creative responses and research initiatives will be needed. Academics, researchers, policymakers and civil society all have a role to play in better informing individuals of the personal risks that arise from the confluence of big tech and big finance as they roll out cyber-physical systems that use attributes of our bodies to carry out monetary exchange. Academic centers like the UK's Bangor University Emotional AI Lab are researching the social implications of emotion-detecting algorithms, and civil society organizations including Access Now, EDRi and ARTICLE 19 have banded together to call for a clear prohibition of emotion recognition technologies. While research and advocacy are critically important, universities and activist organizations can't stem the tide alone. Legislative bodies can be encouraged to draft robust policy based on rights rather than risks, and country-level data protection authorities could extend national bans on specific proprietary biometric capture systems.
At a time of rising food prices and consumer inflation, compelling a smile to complete a purchase borders on emotional manipulation, rather than the quirky gimmick it seems designed to be. Ultimately, how we choose to consume is a personal decision, but by raising awareness of the risks versus the rewards of payment technologies, we can encourage friends and family to think twice before enrolling in the latest payment trend, and push back against the creeping global data-intensive automated check-out ecosystem that knows minute details about when and where we have shopped. If capitalism is going to persist as a global system of exchange, then perhaps, in the age of automation, using cash for shopping can be a small but important radical act of resistance.
Editor’s note: The author wrote this article as part of their reporting fellowship at the nonprofit organization AlgorithmWatch.