On the Internet, Nobody Knows You’re a Human

Bloomberg Technology

Hany Farid couldn’t shake the feeling that he wasn’t actually talking to Barack Obama.

It was 2023, and an Obama aide had reached out to Farid, a professor of computer science at the University of California at Berkeley specializing in image analysis and digital forensics, to ask if he’d talk to the former president about deepfake technology. As the video call went on, the experience of talking one-on-one with Obama—his voice and cadence as distinctive as ever—started to feel a little uncanny. “The whole time, I’m like, ‘This is a deepfake. This is not Obama,’ ” Farid says. “I wanted to tell him: ‘Put your hand in front of your face.’ ”

At the time, asking someone to wave a hand in front of their face was one way to identify a deepfake. The image on the screen would either become distorted and give the fraudster away, or it wouldn’t. But Farid couldn’t bring himself to look into the former president’s eyes and ask him to prove those eyes were real. “So for 10 minutes, I’m like, ‘Am I just being punked?’ ” Farid says.

He wasn’t being punked, as it turned out. But Farid’s suspicions reflected just how much artificial intelligence has stoked paranoia among humans interacting online. The technology is also developing rapidly to bypass human beings’ most obvious defenses. The hand-waving trick is already becoming obsolete, as Farid demonstrated on a recent video call with Bloomberg Businessweek by swapping out his face for that of OpenAI Chief Executive Officer Sam Altman. There was some lag between his voice and the video and a dash of deadness behind the eyes, but Farid could scratch his cheek and shine a flashlight at his head without disturbing the image at all. “As a general rule,” Farid says, “this idea that you’re on a video call with somebody, so you can trust that, is over.”

People have been preparing for the day when machines could convincingly behave as humans since at least 1950, when Alan Turing proposed an evaluation he called the “imitation game.” In what’s now known as the Turing test, a human judge would have a written conversation with a machine and another human, then try to guess which was which. If the machine could fool the judge, it passed. Decades later, websites began regularly asking users to prove their humanness by deciphering those contorted letters and numbers known as captchas, which humans read easily enough but computers have struggled with. (The acronym stands for Completely Automated Public Turing test to tell Computers and Humans Apart.) As automated tools got more sophisticated, these digital traps did too. They also got weirder, requiring people to identify photos of dogs smiling—and, in doing so, contemplate whether dogs can smile at all—just to buy some concert tickets.

The advent of large language models has busted through those defenses. With careful prompting, research has found, AI agents can solve complex captchas. Another recent study, of 126 participants, put several LLMs to the Turing test and found that participants guessed OpenAI’s GPT-4.5 was the human 73% of the time.

In a world mediated by the internet, trust is breaking down, making any interaction—be it with a potential employer, a would-be romantic partner, your mom calling from her vacation abroad or a former US president—vulnerable to high-level deception. Already, voice clone fraudsters have impersonated US Secretary of State Marco Rubio to communicate with foreign ministers. A Hong Kong-based employee at a multinational firm sent $25 million to a scammer who used a deepfake to pose as the company’s chief financial officer, CNN reported in 2024.

Now, in the absence of better solutions, both individuals and institutions are devising their own informal Turing tests on the fly. They are jumping through new hoops that often defy existing social norms to verify their sentience and asking others to do the same. As machines get better at acting like humans, humans are changing how they act. They’re altering the way they write, the way they hire and the way they interact with strangers, all to avoid being taken for—or taken by—AI.

Elizabeth Zaborowska was once known around her office as “the em dash queen,” so much so that her colleagues at her first marketing job in New York City hung a sign outside her cubicle saying as much. Zaborowska saw the punctuational flourish as an elegant way to merge two ideas without muddying up her sentences. When she started her own public-relations firm, Bhava Communications, her team adopted its boss’s writing style, em dashes and all. But ChatGPT also has a thing for em dashes, the use of which has ended up on the short list of rhetorical tics now seen as tells of AI-generated prose. At a time when companies are firing their public-relations firms in favor of free AI tools, Zaborowska couldn’t risk anyone thinking she was passing off a chatbot’s work as her own. This past spring, she gathered her team for its weekly virtual meeting and ordered the em dash banished for good. (OpenAI, it should be noted, is working on ridding its chatbot of its em dash obsession.)

In the grand scheme of AI-driven disruption, it was the smallest of sacrifices. But to Zaborowska it portended something bigger: To sound human, she was suppressing her instincts. “It’s changing the pace of language,” she says, “and I’m obsessed with language.”

For Sarah Suzuki Harvard, a copywriter at the content agency storyarb, the internetwide effort to identify AI writing has become “a witch hunt” that’s driven her to censor her own work. She recently sounded off on LinkedIn about all the common writing constructs now seen as AI red flags. Irritatingly, those red flags are actually all human quirks the computers picked up while being trained. “We’re the ones who built the machines,” Suzuki Harvard says. “They’re plagiarizing from us.”

Regardless, it has fallen on humans to prove themselves. The issue is perhaps most pronounced on college campuses, where professors use Reddit to swap strategies for catching students who cheat with ChatGPT, and students take to TikTok to rage about being punished when their hard work is mistaken for AI. Wikipedia editors, meanwhile, have banded together to weed out AI-generated entries, scanning the site for obvious giveaways, such as fake citations, and less obvious ones, like entries that overuse the word “delves.”

The goal isn’t to eliminate all AI-generated content on Wikipedia, says Ilyas Lebleu, a founding member of the editor group WikiProject AI Cleanup, but to strip out the slop. “If the information is right and there are real citations, it’s good,” Lebleu says. “The biggest gripe is with unreviewed AI content.”

In the world of hiring, generative AI has made it easy for anyone to create a thorough job application with one click, leading to a deluge of cover letters and résumés for overworked human resources teams to sift through. Kyle Arteaga, CEO of the Seattle-based communications firm Bulleit Group, says his company has recently received thousands of applications per position and often winds up interviewing people who used AI to seek employment on such an industrial scale they don’t remember applying at all. His team has responded by planting AI trip wires in job listings, asking applicants to include a pop-culture reference in their cover letter and to name their Hogwarts House. In the case of one recent listing, Arteaga says, less than 3% of the more than 900 applicants followed the instructions, dramatically reducing the pool of contenders. (The job, for what it’s worth, went to a Ravenclaw.)

To mitigate the risk of AI scams, people are also turning to a range of what Farid calls “analog solutions.” OpenAI’s Altman recently suggested one defense against voice clones posing as people’s loved ones may be for families to come up with secret “code words” they can ask for in times of crisis. Late last year, UK-based Starling Bank pushed out a marketing campaign encouraging its customers to do the same. According to Starling, 82% of people who saw the message said they were likely to take the advice.

Others are adopting go-to trick questions to confirm humanity on the web. During a recent chat with an online customer service agent, Pranava Adduri says he asked the agent to define Navier-Stokes, a set of mathematical equations that describe the motion of viscous fluids. Any try-hard chatbot, trained on all the world’s knowledge, would eagerly supply an answer, says Adduri, who’s co-founder and chief technology officer of the Menlo Park, California-based security company Bedrock Data Inc. But the agent answered only, “I have no idea.”

“I’m like, ‘OK, you’re human,’ ” Adduri says.

A growing number of companies, including Alphabet Inc.’s Google, are bringing back in-person interviews, which chatbots, as a general rule, cannot attend. A Google spokesperson says the company made the shift to acclimate new hires to its culture, as well as “to make sure candidates have the fundamental coding skills necessary for the roles they’re interviewing for.”

But Google and other major corporations have other good reasons to want to meet new hires in person. According to the Federal Bureau of Investigation, undercover North Korean IT workers have successfully landed jobs at more than 100 US companies by posing as remote workers. These scams, which often rely on AI tools, have funneled hundreds of millions of dollars a year to North Korea and have even ensnared American accomplices.

“It’s no secret that there’s fake candidates out there,” says Kelly Jones, Cisco Systems Inc.’s chief people officer, who says the company has added biometric verification to its application process.

The increasing demand for proof of humanness has, naturally, given rise to an emerging economy of technological solutions. On one side are deepfake-busting tools that plug into existing platforms, such as Zoom, and purport to detect synthetic audio and video in real time. JPMorgan Chase & Co. uses one such tool, called Reality Defender, on its communications network. Farid has co-founded his own company, GetReal Security, which also offers real-time detection of deepfakes, as well as other digital forensics services.

On the flip side are tools that promise to verify that people are actually people through cryptographic or biometric methods. The most prominent is the Orb, an eyeball-scanning device developed by Tools for Humanity, co-founded by Altman. The Orb uses scans of people’s irises to produce an identification code its creators liken to a digital passport. The code, called the World ID, can then be validated with the World App anytime a user needs to confirm their identity. While the developers say the system doesn’t store any personal data about users, the idea has triggered a good deal of dystopian dread. Some countries, including Brazil, have gone so far as to ban it.

The Orb and other proof-of-personhood ideas would rely on a single entity, be it a private company, a government or a nongovernmental organization, to issue the credentials in the first place, putting tremendous power in the issuer’s hands. They’re also long-term fixes, requiring widespread societal and institutional buy-in, and, when it comes to the crisis in trust online, Farid says, “we’ve got a problem right now.”

Each of these supposed solutions, from the end of the em dash to the Orb, involves trade-offs, whether requiring human beings to give up little bits of themselves or turning the act of being human itself into a kind of performance. We’ve reached a point in AI’s evolution where we’ve trained probabilistic machines to be a little too good at predicting, in any given circumstance, what a human would do. Now, to distinguish ourselves from them, we’re stuck trying to figure out the same thing.

Animations: Benjamin Freedman for Bloomberg Businessweek
