At some point last year, Google’s repeated requests for proof that I am human became increasingly aggressive. The simple, slightly too-cute button that stated “I am not a robot” was increasingly being followed by requests to demonstrate it — by selecting all the traffic lights, crosswalks, and storefronts in an image grid. Soon, the traffic lights had been obscured by distant foliage, the crosswalks had become warped and half-circled, and the storefront signage had become blurry and in Korean. There is something uniquely depressing about being tasked with identifying a fire hydrant and failing miserably.
These tests are known as CAPTCHAs, an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart, and this is not the first time they have approached this level of inscrutability. In the early 2000s, simple images of text were sufficient to fool most spambots. A decade later, however, after Google purchased the programme from Carnegie Mellon researchers and began using it to digitise Google Books, the texts had to be increasingly warped and obscured to stay ahead of improving optical character recognition programmes — programmes that, in a roundabout way, all those humans solving CAPTCHAs were helping to train.
Because CAPTCHA is such an elegant tool for training AI, any given test can only be temporary, as its creators recognised from the start. With all those researchers, scammers, and everyday humans solving billions of puzzles at the very limits of what AI can do, the machines were bound to pass us by at some point. In 2014, Google pitted one of its machine learning algorithms against humans at solving the most distorted text CAPTCHAs: the computer answered correctly 99.8 percent of the time, while humans answered correctly only 33 percent of the time.
Google then switched to No CAPTCHA reCAPTCHA, which analyses user data and behaviour in order to let some humans pass through with a click of the “I’m not a robot” button while presenting others with the image labelling we see today. However, the machines are catching up once more. All those awnings that could be storefronts but aren’t? They represent the endgame of humanity’s arms race with the machines.
Jason Polakis, a computer science professor at the University of Illinois at Chicago, takes some personal credit for the recent increase in CAPTCHA difficulty. In 2016, he published a paper demonstrating how to solve Google’s image CAPTCHAs with 70 percent accuracy using off-the-shelf image recognition tools, including Google’s own reverse image search. Other researchers have cracked Google’s audio CAPTCHA challenges using Google’s own audio recognition programmes.
According to Polakis, machine learning is now about as good as humans at basic text, image, and voice recognition tasks. Indeed, algorithms are probably superior at it: “We’ve reached a point where making it more difficult for software makes it too difficult for many people. We require an alternative, but there is no concrete plan in place at the moment.”
The CAPTCHA literature is littered with false starts and bizarre attempts to find something other than text or image recognition that humans are universally good at and machines struggle with. Researchers have asked users to classify images of people by facial expression, gender, and ethnicity. (Imagine how smoothly that went.)
There have been proposals for CAPTCHAs based on trivia and CAPTCHAs based on nursery rhymes popular in the area in which the user allegedly grew up. These cultural CAPTCHAs are not only aimed at bots, but also at the humans who work in overseas CAPTCHA farms solving puzzles for pennies on the dollar.
Some have attempted to stymie image recognition by asking users to identify, say, pigs, but then turning the pigs into cartoons and giving them sunglasses. Others have investigated asking users to identify objects hidden within Magic Eye-like blotches. In an intriguing twist, researchers proposed in 2010 that CAPTCHAs be used to index ancient petroglyphs, owing to computers’ inability to decipher gestural sketches of reindeer scrawled on cave walls.
Recently, efforts have been made to develop game-like CAPTCHAs, tests that require users to rotate objects to specific angles or arrange puzzle pieces according to instructions provided in symbols or implied by the game board’s context.
The hope is that humans will comprehend the logic of the puzzle, but computers, which lack clear instructions, will be stumped. Other researchers have attempted to capitalise on the fact that humans have bodies by employing device cameras or augmented reality to provide interactive proof of humanity.
The issue with many of these tests is not that bots are too intelligent — it’s that humans are terrible at them. And it is not so much that humans are stupid as they are wildly diverse in terms of language, culture, and experience. After removing all of that to create a test that any human can pass without prior training or much thought, you’re left with brute-force tasks like image processing, which is precisely what a tailor-made AI will excel at.
“The tests are constrained by human capacity,” Polakis explains. “It’s not just about our physical capabilities; you need something that transcends cultural and linguistic boundaries. You need a type of challenge that works for someone from Greece, from Chicago, from South Africa, from Iran, and from Australia at the same time. And it has to be free of cultural intricacies and distinctions.

“You want something that is simple for the average human, not restricted to a particular subgroup of people, and difficult for computers at the same time. That severely limits your options. And it has to be something that a human can do quickly and without becoming irritated.”
Fixing those blurry image quizzes quickly leads to philosophical territory: what is the universal human quality that can be demonstrated to a machine but cannot be replicated by one? What does it even mean to be human?
Perhaps our humanity is measured not by how well we perform a task, but by how we navigate the world — or, in this case, the internet. CAPTCHAs in games, video CAPTCHAs, or any other type of CAPTCHA test will eventually be broken, according to Shuman Ghosemajumder, who previously worked at Google combating click fraud before joining Shape Security as chief technology officer.
Rather than tests, he prefers “continuous authentication,” which entails observing a user’s behaviour and looking for signs of automation. “A real human being lacks sufficient control over their own motor functions, and thus cannot move the mouse in the same way across multiple interactions, even if they try extremely hard,” Ghosemajumder explains.
While a bot can interact with a page without moving the mouse or with a very precise movement of the mouse, human actions have a high degree of “entropy,” according to Ghosemajumder.
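That entropy idea can be made concrete. The sketch below is a minimal illustration of the general principle, not Shape Security’s actual method: it measures the Shannon entropy of the step-to-step movements in a cursor trajectory, on the assumption that a scripted bot repeats identical increments while a human path jitters. The path data and the entropy threshold are both invented for the example.

```python
import math
from collections import Counter

def movement_entropy(points):
    """Shannon entropy (in bits) of the quantized (dx, dy) steps in a cursor path."""
    steps = [(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(points, points[1:])]
    total = len(steps)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(steps).values())

# A naive scripted bot moves in perfectly uniform increments; a human path
# varies. The "human" trace here is just a deterministic stand-in with jitter.
bot_path = [(i * 5, 100) for i in range(50)]
human_path = [(i * 5 + i % 3, 100 + (i * 11) % 5) for i in range(50)]

print(movement_entropy(bot_path))    # zero entropy: every step is identical
print(movement_entropy(human_path))  # noticeably higher entropy
```

In practice a detector would combine many such signals (timing, scroll patterns, touch events) rather than a single entropy score, and a sophisticated bot can of course replay recorded human traces — which is why this remains an arms race rather than a solution.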
Google’s own CAPTCHA team is pursuing a similar approach. The latest version, reCAPTCHA v3, released late last year, employs “adaptive risk analysis” to score traffic according to how suspicious it seems; website owners can then choose to present suspicious users with a challenge, such as a password request or two-factor authentication.
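On the site owner’s side, that score arrives as a number between 0.0 and 1.0 in the JSON returned by reCAPTCHA’s documented `siteverify` endpoint (fields like `success`, `score`, and `action`). The decision logic below is a hedged sketch of how an owner might act on it; the 0.5 threshold, the action name, and the example verdicts are all illustrative assumptions, not Google recommendations.

```python
def should_challenge(verdict: dict, expected_action: str, threshold: float = 0.5) -> bool:
    """Decide whether to show the user an extra check (password, 2FA, etc.)
    based on a parsed reCAPTCHA v3 siteverify response."""
    if not verdict.get("success"):
        return True                                  # token invalid or expired
    if verdict.get("action") != expected_action:
        return True                                  # token minted for a different form
    return verdict.get("score", 0.0) < threshold     # low score = likely automated

# Illustrative verdicts, shaped like siteverify JSON but invented for the example:
likely_human = {"success": True, "score": 0.9, "action": "login"}
likely_bot   = {"success": True, "score": 0.1, "action": "login"}
```

The key design point is that the CAPTCHA service no longer decides for you: it hands back a risk score, and each site chooses its own trade-off between friction and security.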
Google would not disclose the factors that go into the score, other than to say that it observes what a site’s “good traffic” looks like and uses that to detect “bad traffic,” according to Cy Khormaee, a product manager on the CAPTCHA team. Security researchers speculate that the score is most likely derived from a combination of cookies, browser attributes, traffic patterns, and other signals.
One disadvantage of the new bot detection model is that it can make navigating the web while minimising surveillance an annoyance, as VPNs and anti-tracking extensions can cause you to be flagged as suspicious and challenged.
According to Aaron Malenfant, engineering lead for Google’s CAPTCHA team, the move away from Turing tests is intended to sidestep a competition humans keep losing. “As more investment is made in machine learning, these types of challenges will have to become increasingly difficult for humans, which is precisely why we launched CAPTCHA V3, to stay ahead of the curve.” Malenfant believes that CAPTCHA challenges will be rendered obsolete in five to ten years. Instead, much of the web will have a continuous, secret Turing test running in the background.
Brian Christian, in his book The Most Human Human, enters a Turing Test competition as the human foil and discovers that proving your humanity in conversation is actually quite difficult. On the other hand, bot creators have found it relatively easy to pass, not by being the most eloquent or intelligent conversationalist, but by evading questions with non sequitur jokes, making typos, or, in the case of the bot that won a Turing competition in 2014, claiming to be a 13-year-old Ukrainian boy with a limited command of the English language. After all, it is human to make errors. It’s possible that a similar future awaits CAPTCHA, the world’s most widely used Turing test — a new arms race to create bots that make mistakes, miss buttons, become distracted, and switch tabs. “I believe that people are beginning to recognise that there is a use for simulating the average human user… or dumb humans,” Ghosemajumder says.
CAPTCHA tests may continue to exist in this world as well. In 2017, Amazon was granted a patent for a scheme involving optical illusions and logic puzzles that humans have a difficult time deciphering. Call it the Turing test via failure: the only way to pass is to get the answer wrong.