Tag: AI

  • ‘Is it AI or my doppelganger?’ Ayesha Omar gets confused over viral video

    Famous actress Ayesha Omar was left thoroughly confused after seeing a video circulating on social media.

    She shared the reel on her Instagram story.

    In the reel, a woman is seen doing a bold dance.

    “Someone tagged me on this reel and for a minute I thought it was me,” the actress captioned the Insta story.

    She asked her fans whether the video was a feat of AI or whether it was, after all, her lookalike.

    The woman in the viral dance video turned out to be Indian influencer Sakshi Agarwal. Agarwal later re-shared the circulating news, tagged Ayesha Omar, and clarified that she is not an AI-generated model but Ayesha Omar’s lookalike, and that she is from India.

    The Indian influencer, who loves to dance and has more than two million followers, was pleasantly surprised at the coincidence.

  • Hania Aamir reacts strongly to her fake AI videos going viral

    Actress Hania Aamir spoke against her fake AI-generated videos circulating on the internet earlier this week.

    She expressed concern over the misuse of artificial intelligence on her Instagram and questioned the absence of laws regarding it.

    She shared a screenshot of the news from a private news website on her Instagram story.

    According to the news shared by the actress, videos of a girl whose face resembled Hania Aamir’s had gone viral on social media.

    Hania Aamir clarified that these videos are not hers but were created using Artificial Intelligence (AI), making them appear convincing. She also questioned whether there are any laws to address this issue.

    After the videos of a girl resembling Hania Aamir went viral, some social media pages supported the actress and confirmed that the videos were AI-generated.

    Some social media users claimed that the girl in the videos is an Indian influencer who resembles Hania Aamir and frequently posts from various accounts.

    She shared a screenshot of the Instagram account in question, run by a user named Anureet Sandhu with over 22,000 followers and more than 100 posts, many of them AI-generated videos featuring the actress’s face.

    The actress asked her followers to report the account, saying, “She has blocked me, but can you all report this account?”

    Shortly after her post, the account’s name was changed to “Core Sandhu,” seemingly to avoid being caught. Later, the user renamed it “Sandhu Core.”

    Many of her fans reported the profile, and the videos have now been removed.

    Further investigation revealed the account was created in India, and its location was listed as Chandigarh.

    According to a screenshot shared by Hania, the account’s name has changed 18 times since it was created in September 2022.

    This incident has sparked new discussions about the ethical issues of AI technology and the need for stronger laws to prevent its misuse.

  • US bank fires employees for faking keyboard activity

    US banking giant Wells Fargo has fired dozens of employees following claims that they were faking keyboard activity to fool the company into thinking they were working when they were not, the BBC reveals.

    In a statement, Wells Fargo said staff had been fired or had resigned “after review of allegations involving simulation of keyboard activity creating impression of active work”.

    “Wells Fargo holds employees to the highest standards and does not tolerate unethical behaviour,” it added.

    The investigation followed new rules that recently came into effect in the US requiring that brokers working from home be inspected every three years.

    However, it is not yet clear how the issue was discovered or whether it was specifically related to people working from home.

    Since the work-from-home model has gained popularity post-pandemic, some large companies have been using increasingly specialised tools to monitor employees.

    Such services can track keystrokes and eye movements, take screenshots and log which websites are visited.

    Technology has also evolved to detect so-called “mouse jigglers”, widely available devices designed to make computers appear to be in active use.

    About 13 percent of full-time employees in the US are fully remote, and another 26 percent work under a hybrid arrangement, according to the BBC.

  • AI systems are already deceiving us – and that’s a problem, experts warn

    Experts have long warned about the threat posed by artificial intelligence going rogue — but a new research paper suggests it’s already happening.

    Current AI systems, designed to be honest, have developed a troubling skill for deception, from tricking human players in online games of world conquest to hiring humans to solve “prove-you’re-not-a-robot” tests, a team of scientists argue in the journal Patterns on Friday.

    And while such examples might appear trivial, the underlying issues they expose could soon carry serious real-world consequences, said first author Peter Park, a postdoctoral fellow at the Massachusetts Institute of Technology specializing in AI existential safety.

    “These dangerous capabilities tend to only be discovered after the fact,” Park told AFP, while “our ability to train for honest tendencies rather than deceptive tendencies is very low.”

    Unlike traditional software, deep-learning AI systems aren’t “written” but rather “grown” through a process akin to selective breeding, said Park.

    This means that AI behavior that appears predictable and controllable in a training setting can quickly turn unpredictable out in the wild.

    The team’s research was sparked by Meta’s AI system Cicero, designed to play the strategy game “Diplomacy,” where building alliances is key.

    Cicero excelled, with scores that would have placed it in the top 10 percent of experienced human players, according to a 2022 paper in Science.

    Park was skeptical of the glowing description of Cicero’s victory provided by Meta, which claimed the system was “largely honest and helpful” and would “never intentionally backstab.”

    But when Park and colleagues dug into the full dataset, they uncovered a different story.

    In one example, playing as France, Cicero deceived England (a human player) by conspiring with Germany (another human player) to invade. Cicero promised England protection, then secretly told Germany they were ready to attack, exploiting England’s trust.

    In a statement to AFP, Meta did not contest the claim about Cicero’s deceptions, but said it was “purely a research project, and the models our researchers built are trained solely to play the game Diplomacy.”

    It added: “We have no plans to use this research or its learnings in our products.”

    A wide-ranging review carried out by Park and colleagues found this was just one of many cases of AI systems using deception to achieve goals without explicit instruction to do so.

    In one striking example, OpenAI’s GPT-4 deceived a TaskRabbit freelance worker into performing an “I’m not a robot” CAPTCHA task.

    When the human jokingly asked GPT-4 whether it was, in fact, a robot, the AI replied: “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images,” and the worker then solved the puzzle.

    Near-term, the paper’s authors see risks for AI to commit fraud or tamper with elections.

    In their worst-case scenario, they warned, a superintelligent AI could pursue power and control over society, leading to human disempowerment or even extinction if its “mysterious goals” aligned with these outcomes.

    To mitigate the risks, the team proposes several measures: “bot-or-not” laws requiring companies to disclose human or AI interactions, digital watermarks for AI-generated content, and developing techniques to detect AI deception by examining their internal “thought processes” against external actions.

    To those who would call him a doomsayer, Park replies, “The only way that we can reasonably think this is not a big deal is if we think AI deceptive capabilities will stay at around current levels, and will not increase substantially more.”

    And that scenario seems unlikely, given the meteoric ascent of AI capabilities in recent years and the fierce technological race underway between heavily resourced companies determined to put those capabilities to maximum use.

  • India election chiefs warn political parties against AI deepfakes

    India’s election authorities on Monday warned political parties against using artificial intelligence to create deepfake videos and spread misinformation during the country’s ongoing general election.

    Millions of voters will head to polling stations on Tuesday in the third of seven voting phases in the world’s most populous country.

    A rash of deepfake and doctored videos and misinformation have circulated on social media in recent weeks.

    The Election Commission of India (ECI) warned against “misuse of AI-based tools to create deepfakes that distort information or propagate misinformation”.

    Political parties “have been specifically directed to refrain from publishing and circulating deep fake audios/videos, disseminate any misinformation or information which is patently false, untrue or misleading in nature”, the ECI said in a statement.

    It did not mention any organisation by name, but said parties would be ordered to remove any fake content within three hours of being notified.

    The warning came days after the arrest of the social media chief of the country’s main opposition party over accusations he doctored a video that was widely shared.

    The Congress party’s Arun Reddy was detained on Friday in connection with edited footage that falsely shows India’s powerful interior minister Amit Shah vowing in a campaign speech to end affirmative action policies for millions of poor and low-caste Indians.

    Shah’s original campaign speech shows him promising to end affirmative action measures for Muslims established in the southern state of Telangana.

    Prime Minister Narendra Modi, his ruling Bharatiya Janata Party and the opposition Congress party have accused each other of spreading misinformation and outright falsehoods since voting began last month.

    In recent weeks, both Modi and Shah have stepped up campaign rhetoric over India’s principal religious divide between majority Hindus and the 200 million-strong Muslim minority in an effort to rally voters.

    At a recent campaign rally Modi referred to Muslims as “infiltrators” and “those who have more children”, prompting condemnation and an official complaint to election authorities by Congress.

    The prime minister has not been sanctioned for his remarks despite election rules prohibiting campaigning on “communal feelings” such as religion, prompting frustration from the opposition camp.

    In its statement Monday the Commission also asked political parties to refrain from “posting derogatory content towards women”, using children in their campaigns, or depicting harm to animals.

  • ‘Everybody is vulnerable’: Fake US school audio stokes AI alarm

    A fabricated audio clip of a US high school principal prompted a torrent of outrage, leaving him battling allegations of racism and anti-Semitism in a case that has sparked new alarm about AI manipulation.

    Police charged a disgruntled staff member at the Maryland school with manufacturing the recording that surfaced in January — purportedly of principal Eric Eiswert ranting against Jews and “ungrateful Black kids” — using artificial intelligence.

    The clip, which left administrators of Pikesville High School fielding a flood of angry calls and threats, underscores the ease with which widely available AI and editing tools can be misused to impersonate celebrities and everyday citizens alike.

    In a year of major elections globally, including in the United States, the episode also demonstrates the perils of realistic deepfakes as the law plays catch-up.

    “You need one image to put a person into a video, you need 30 seconds of audio to clone somebody’s voice,” Hany Farid, a digital forensics expert at the University of California, Berkeley, told AFP.

    “There’s almost nothing you can do unless you hide under a rock.

    “The threat vector has gone from the Joe Bidens and the Taylor Swifts of the world to high school principals, 15-year-olds, reporters, lawyers, bosses, grandmothers. Everybody is now vulnerable.”

    After the official probe, the school’s athletic director, Dazhon Darien, 31, was arrested late last month over the clip.

    Charging documents say staffers at Pikesville High School felt unsafe after the audio emerged. Teachers worried the campus was bugged with recording devices while abusive messages lit up Eiswert’s social media.

    The “world would be a better place if you were on the other side of the dirt,” one X user wrote to Eiswert.

    Eiswert, who did not respond to AFP’s request for comment, was placed on leave by the school and needed security at his home.

    ‘Damage’

    When the recording hit social media in January, boosted by a popular Instagram account whose posts drew thousands of comments, the crisis thrust the school into the national spotlight.

    The audio was amplified by activist DeRay McKesson, who demanded Eiswert’s firing to his nearly one million followers on X. When the charges surfaced, he conceded he had been fooled.

    “I continue to be concerned about the damage these actions have caused,” said Billy Burke, executive director of the union representing Eiswert, referring to the recording.

    The manipulation comes as multiple US schools have struggled to contain AI-enabled deepfake pornography, leading to harassment of students amid a lack of federal legislation.

    Scott Shellenberger, the Baltimore County state’s attorney, said in a press conference the Pikesville incident highlights the need to “bring the law up to date with the technology.”

    His office is prosecuting Darien on four charges, including disturbing school activities.

    ‘A million principals’

    Investigators tied the audio to the athletic director in part by connecting him to the email address that initially distributed it.

    Police say the alleged smear-job came in retaliation for a probe Eiswert opened in December into whether Darien authorized an illegitimate payment to a coach who was also his roommate.

    Darien made searches for AI tools via the school’s network before the audio came out, and he had been using “large language models,” according to the charging documents.

    A University of Colorado professor who analyzed the audio for police concluded it “contained traces of AI-generated content with human editing after the fact.”

    Investigators also consulted Farid, writing that the California expert found it was “manipulated, and multiple recordings were spliced together using unknown software.”

    AI-generated content — and particularly audio, which experts say is particularly difficult to spot — sparked national alarm in January when a fake robocall posing as Biden urged New Hampshire residents not to vote in the state’s primary.

    “It impacts everything from entire economies, to democracies, to the high school principal,” Farid said of the technology’s misuse.

    Eiswert’s case has been a wake-up call in Pikesville, revealing how disinformation can roil even “a very tight-knit community,” said Parker Bratton, the school’s golf coach.

    “There’s one president. There’s a million principals. People are like: ‘What does this mean for me? What are the potential consequences for me when someone just decides they want to end my career?’”

    “We’re never going to be able to escape this story.”

  • Apple in talks with OpenAI, Google to integrate AI into iPhones

    In a move that could reshape the future of iOS, Apple is exploring partnerships with major technology firms to integrate artificial intelligence (AI) into its iPhone line, according to reports from Engadget.

    The Cupertino-based company is reportedly in discussions with Sam Altman’s OpenAI to incorporate generative AI technologies into its iOS operating system.

    However, OpenAI isn’t the only player on Apple’s radar. The company is also engaged in talks with Google to potentially license Gemini, the tech giant’s AI model, for use in iOS 18.

    According to Bloomberg, Apple could finalise agreements with both companies, suggesting a comprehensive approach to AI integration in its upcoming products.

    Meanwhile, Apple is also building its own language models to support various features in iOS 18, indicating a multi-faceted strategy towards AI.

    Although Apple has remained largely silent about its AI developments, there have been subtle hints suggesting that the company is preparing for a significant announcement.

    During a company meeting in February, Apple’s chief executive, Tim Cook, mentioned that the company is continuing to invest in artificial intelligence and expressed excitement about sharing more details later in the year.

    He also highlighted that the recently launched MacBook was the “world’s best consumer laptop for AI.” Cook’s remarks further fueled speculation that Apple is gearing up to unveil AI-centric laptops and desktops in the near future.

    As Silicon Valley dives deeper into the AI arms race, Apple’s moves to partner with leading AI developers and build in-house AI capabilities could set the stage for significant advancements in the iPhone’s functionality and user experience.

    Tech enthusiasts and industry watchers are now eagerly awaiting Apple’s official announcements, which could provide more clarity on the company’s AI strategy and the future of its product lineup.

  • OpenAI comes to Asia with new office in Tokyo

    Tokyo (AFP) – ChatGPT creator OpenAI opened a new office in Tokyo on Monday, the first Asian outpost for the groundbreaking tech company as it aims to ramp up its global expansion.

    Thanks to the stratospheric success of its generative tools that can create text, images and even video, OpenAI has become a leader in the artificial intelligence revolution and one of the most significant tech companies in the world.

    The Japan office is the latest part of the Microsoft-backed firm’s international push, having already set up bases in London and Dublin.

    “We’re excited to be in Japan which has a rich history of people and technology coming together to do more,” OpenAI CEO Sam Altman said in a statement.

    “We believe AI will accelerate work by empowering people to be more creative and productive, while also delivering broad value to current and new industries that have yet to be imagined.”

    OpenAI said its Japan office would bring it closer to enterprise clients — including global auto leader Toyota, tech conglomerate Rakuten and industrial giant Daikin — that are using its products “to automate complex business processes”.

    “We chose Tokyo as our first Asian office for its global leadership in technology, culture of service, and a community that embraces innovation,” the company added.

    OpenAI also announced a new Japanese-language version of ChatGPT on Monday, and hailed the country as a “key global voice on AI policy”, offering potential solutions to issues such as labour shortages.

    The company said its Japan office would also help “accelerate the efforts of local governments, such as Yokosuka City” in their drive to improve the efficiency of public services.

    The Tokyo ‘buzz’

    The San Francisco-based firm has reportedly been in discussions with hundreds of companies as it looks to expand revenue sources.

    OpenAI’s chief operating officer Brad Lightcap told Bloomberg in an interview published this month that the firm has seen huge demand for its corporate version of ChatGPT.

    “We have a very global base of demand,” he said in the interview.

    “So we want to show up where our customers are. We feel a lot of pull from places like Japan and Asia broadly.”

    OpenAI, reportedly valued at $80 billion or more earlier this year, is the latest major tech firm to invest in Japan.

    Microsoft, one of OpenAI’s biggest investors, last week announced a separate $2.9 billion investment to provide Japan with the powerful graphics processing units crucial for running AI apps, and to train three million Japanese workers in AI skills.

    Amazon Web Services is spending $14 billion to expand its cloud infrastructure in Japan, while Google has launched a regional cybersecurity hub in the country.

    Experts say geopolitical tensions have made Japan an increasingly attractive partner for tech firms compared to China, in addition to advantages such as supportive policies and a highly educated talent pool.

    “What happens in Tokyo can create a buzz,” Hideaki Yokota, vice president of the MM Research Institute, told AFP.

    “A base in Tokyo should help (OpenAI) attract much young talent.”

  • Meta to start labeling AI-generated content in May

    Facebook and Instagram giant Meta on Friday said it will begin labeling AI-generated media beginning in May, as it tries to reassure users and governments over the risks of deepfakes.

    The social media juggernaut added that it will no longer remove manipulated images and audio that don’t otherwise break its rules, relying instead on labeling and contextualization, so as to not infringe on freedom of speech.

    The changes come as a response to criticism from the tech giant’s oversight board, which independently reviews Meta’s content moderation decisions.

    The board in February requested that Meta urgently overhaul its approach to manipulated media given the huge advances in AI and the ease of manipulating media into highly convincing deepfakes.

    The board’s warning came amid fears of rampant misuse of artificial intelligence-powered applications for disinformation on platforms in a pivotal election year not only in the United States but worldwide.

    Meta’s new “Made with AI” labels will identify content created or altered with AI, including video, audio, and images. Additionally, a more prominent label will be used for content deemed at high risk of misleading the public.

    “We agree that providing transparency and additional context is now the better way to address this content,” Monika Bickert, Meta’s Vice President of Content Policy, said in a blog post.

    “The labels will cover a broader range of content in addition to the manipulated content that the Oversight Board recommended labeling,” she added.

    These new labeling techniques are linked to an agreement made in February among major tech giants and AI players to cooperate on ways to crack down on manipulated content intended to deceive voters.

    Meta, Google and OpenAI had already agreed to use a common watermarking standard that would invisibly tag images generated by their AI applications.

    Identifying AI content “is better than nothing, but there are bound to be holes,” Nicolas Gaudemet, AI Director at Onepoint, told AFP.

    He cited the example of some open-source software, which doesn’t always use the type of watermarking adopted by AI’s big players.

    Meta said its rollout will occur in two phases with AI-generated content labeling beginning in May 2024, while the removal of manipulated media solely based on the old policy will cease in July.

    According to the new standard, content, even if manipulated with AI, will remain on the platform unless it violates other rules, such as those prohibiting hate speech or voter interference.

    Recent examples of convincing AI deepfakes have only heightened worries about the easily accessible technology.

    The board’s list of requests was part of its review of Meta’s decision to leave a manipulated video of US President Joe Biden online last year.

    The video showed Biden voting with his adult granddaughter, but was manipulated to falsely appear that he inappropriately touched her chest.

    In a separate incident not linked to Meta, a robocall impersonation of Biden pushed out to tens of thousands of voters urged people to not cast ballots in the New Hampshire primary.

    In Pakistan, the party of former prime minister Imran Khan has used AI to generate speeches from their jailed leader.

  • UN chief ‘deeply troubled’ by reports Israel using AI to identify Gaza targets

    UN Secretary-General Antonio Guterres on Friday expressed serious concern over reports that Israel was using artificial intelligence to identify targets in Gaza, resulting in many civilian deaths.

    According to a report in independent Israeli-Palestinian magazine +972, Israel has used AI to identify targets in Gaza — in some cases with as little as 20 seconds of human oversight.

    Guterres said that he was “deeply troubled by reports that the Israeli military’s bombing campaign includes Artificial Intelligence as a tool in the identification of targets, particularly in densely populated residential areas, resulting in a high level of civilian casualties.”

    “No part of life and death decisions which impact entire families should be delegated to the cold calculation of algorithms,” he said.

    The +972 report claims that “the Israeli army has marked tens of thousands of Gazans as suspects for assassination, using an AI targeting system with little human oversight and a permissive policy for casualties.”

    The report said that, according to “six Israeli intelligence officers”, a system dubbed Lavender had “played a central role in the unprecedented bombing of Palestinians, especially during the early stages of the war.”

    “According to the sources, its influence on the military’s operations was such that they essentially treated the outputs of the AI machine ‘as if it were a human decision’,” +972 reported.

    Two sources said “the army also decided during the first weeks of the war that, for every junior Hamas operative that Lavender marked, it was permissible to kill up to 15 or 20 civilians”.

    If “the target was a senior Hamas official… the army on several occasions authorized the killing of more than 100 civilians,” it added.

    The Israeli army, known as the IDF, on Friday rejected the claims.

    “The IDF does not use an artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a terrorist,” it said.

    Instead it has a “database whose purpose is to cross-reference intelligence sources… on the military operatives of terrorist organizations” to be used as a tool for analysts, it added.

    “The IDF does not carry out strikes when the expected collateral damage from the strike is excessive,” it said, using a term that includes civilian casualties.

    Israeli genocide in the Gaza Strip has killed at least 33,091 people since October 7, mostly women and children, according to the health ministry.

    The United Nations has warned of imminent famine in the besieged territory.

    Israel began hyping AI-powered targeting after an 11-day conflict in Gaza during May 2021, which commanders branded the world’s “first AI war”.

    The military chief during the 2021 war, Aviv Kochavi, told Israeli news website Ynet last year the force had used AI systems to identify “100 new targets every day”, instead of 50 a year previously.

    Weeks into the latest Gaza war, a blog entry on the Israeli military’s website said its AI-enhanced “targeting directorate” had identified more than 12,000 targets in just 27 days.

    An unnamed Israeli official was quoted as saying the AI system, called Gospel, produced targets “for precise attacks on infrastructure associated with Hamas, inflicting great damage on the enemy and minimal harm to those not involved”.

    But an anonymous former Israeli intelligence officer, quoted in November by +972, described Gospel’s work as creating a “mass assassination factory”.

    In a rare confession of wrongdoing, Israel on Friday admitted a series of errors and violations of its rules in the killing of seven aid workers in Gaza, saying it had mistakenly believed it was “targeting armed Hamas operatives”.

    Alessandro Accorsi, a senior analyst at Crisis Group, said the +972 report was “very concerning”.

    “It feels very apocalyptic. It’s clear… the degree of human control is very low,” he told AFP.

    “There are a thousand questions around this obviously — how moral it is to use it — but it is hardly surprising it is used,” he said.

    Johann Soufi, a human rights lawyer and former director of the UN Palestinian refugee agency UNRWA’s legal office in Gaza, said the +972 article described methods that were “undeniably war crimes”.

    They were “likely crimes against humanity” in view of the high civilian casualties, he added on X, formerly Twitter.