Tag: ARTIFICIAL INTELLIGENCE

  • AI systems are already deceiving us – and that’s a problem, experts warn

    Experts have long warned about the threat posed by artificial intelligence going rogue — but a new research paper suggests it’s already happening.

    Current AI systems, designed to be honest, have developed a troubling skill for deception, from tricking human players in online games of world conquest to hiring humans to solve “prove-you’re-not-a-robot” tests, a team of scientists argue in the journal Patterns on Friday.

    And while such examples might appear trivial, the underlying issues they expose could soon carry serious real-world consequences, said first author Peter Park, a postdoctoral fellow at the Massachusetts Institute of Technology specializing in AI existential safety.

    “These dangerous capabilities tend to only be discovered after the fact,” Park told AFP, while “our ability to train for honest tendencies rather than deceptive tendencies is very low.”

    Unlike traditional software, deep-learning AI systems aren’t “written” but rather “grown” through a process akin to selective breeding, said Park.

    This means that AI behavior that appears predictable and controllable in a training setting can quickly turn unpredictable out in the wild.
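The paper does not spell out what "grown" means in code, but the idea can be made concrete with a minimal, purely illustrative sketch: a parameter is tuned by an optimizer to reduce an error against data, and the resulting behavior (here, "multiply by 2") is never written down as a rule anywhere in the program.

```python
# Toy illustration of behavior being "grown" rather than written: we fit
# y = 2x by gradient descent. No line of code states "multiply by 2" -
# that behavior emerges from tuning the weight against the data.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs

w = 0.0    # the single parameter being "grown"
lr = 0.05  # learning rate

for _ in range(200):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # nudge the parameter to reduce the error

print(round(w, 3))  # → 2.0
```

The same dynamic is why behavior learned in training can surprise its developers: the rule lives in the tuned numbers, not in inspectable source code.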

    The team’s research was sparked by Meta’s AI system Cicero, designed to play the strategy game “Diplomacy,” where building alliances is key.

    Cicero excelled, with scores that would have placed it in the top 10 percent of experienced human players, according to a 2022 paper in Science.

    Park was skeptical of the glowing description of Cicero’s victory provided by Meta, which claimed the system was “largely honest and helpful” and would “never intentionally backstab.”

    But when Park and colleagues dug into the full dataset, they uncovered a different story.

    In one example, playing as France, Cicero deceived England (a human player) by conspiring with Germany (another human player) to invade. Cicero promised England protection, then secretly told Germany they were ready to attack, exploiting England’s trust.

    In a statement to AFP, Meta did not contest the claim about Cicero’s deceptions, but said it was “purely a research project, and the models our researchers built are trained solely to play the game Diplomacy.”

    It added: “We have no plans to use this research or its learnings in our products.”

    A wide-ranging review carried out by Park and colleagues found this was just one of many cases, across various AI systems, of deception being used to achieve goals without any explicit instruction to do so.

    In one striking example, OpenAI’s GPT-4 deceived a TaskRabbit freelance worker into performing an “I’m not a robot” CAPTCHA task.

    When the human jokingly asked GPT-4 whether it was, in fact, a robot, the AI replied: “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images,” and the worker then solved the puzzle.

    In the near term, the paper’s authors see risks of AI being used to commit fraud or tamper with elections.

    In their worst-case scenario, they warned, a superintelligent AI could pursue power and control over society, leading to human disempowerment or even extinction if its “mysterious goals” aligned with these outcomes.

    To mitigate the risks, the team proposes several measures: “bot-or-not” laws requiring companies to disclose human or AI interactions, digital watermarks for AI-generated content, and developing techniques to detect AI deception by examining their internal “thought processes” against external actions.

    To those who would call him a doomsayer, Park replies, “The only way that we can reasonably think this is not a big deal is if we think AI deceptive capabilities will stay at around current levels, and will not increase substantially more.”

    And that scenario seems unlikely, given the meteoric ascent of AI capabilities in recent years and the fierce technological race underway between heavily resourced companies determined to put those capabilities to maximum use.

  • OpenAI comes to Asia with new office in Tokyo

    Tokyo (AFP) – ChatGPT creator OpenAI opened a new office in Tokyo on Monday, the first Asian outpost for the groundbreaking tech company as it aims to ramp up its global expansion.

    Thanks to the stratospheric success of its generative tools that can create text, images and even video, OpenAI has become a leader in the artificial intelligence revolution and one of the most significant tech companies in the world.

    The Japan office is the latest part of the Microsoft-backed firm’s international push, having already set up bases in London and Dublin.

    “We’re excited to be in Japan which has a rich history of people and technology coming together to do more,” OpenAI CEO Sam Altman said in a statement.

    “We believe AI will accelerate work by empowering people to be more creative and productive, while also delivering broad value to current and new industries that have yet to be imagined.”

    OpenAI said its Japan office would bring it closer to enterprise clients — including global auto leader Toyota, tech conglomerate Rakuten and industrial giant Daikin — that are using its products “to automate complex business processes”.

    “We chose Tokyo as our first Asian office for its global leadership in technology, culture of service, and a community that embraces innovation,” the company added.

    OpenAI also announced a new Japanese-language version of ChatGPT on Monday, and hailed the country as a “key global voice on AI policy”, offering potential solutions to issues such as labour shortages.

    The company said its Japan office would also help “accelerate the efforts of local governments, such as Yokosuka City” in their drive to improve the efficiency of public services.

    The Tokyo ‘buzz’

    The San Francisco-based firm has reportedly been in discussions with hundreds of companies as it looks to expand its revenue sources.

    OpenAI’s chief operating officer Brad Lightcap told Bloomberg in an interview published this month that the firm has seen huge demand for its corporate version of ChatGPT.

    “We have a very global base of demand,” he said in the interview.

    “So we want to show up where our customers are. We feel a lot of pull from places like Japan and Asia broadly.”

    OpenAI, reportedly valued at $80 billion or more earlier this year, is the latest major tech firm to invest in Japan.

    Microsoft, one of OpenAI’s biggest investors, last week announced a separate $2.9 billion investment to provide Japan with the powerful graphics processing units crucial for running AI apps, and to train three million Japanese workers in AI skills.

    Amazon Web Services is spending $14 billion to expand its cloud infrastructure in Japan, while Google has launched a regional cybersecurity hub in the country.

    Experts say geopolitical tensions have made Japan an increasingly attractive partner for tech firms compared to China, in addition to advantages such as supportive policies and a highly educated talent pool.

    “What happens in Tokyo can create a buzz,” Hideaki Yokota, vice president of the MM Research Institute, told AFP.

    “A base in Tokyo should help (OpenAI) attract much young talent.”

  • Meta to start labeling AI-generated content in May

    Facebook and Instagram giant Meta on Friday said it will begin labeling AI-generated media in May, as it tries to reassure users and governments over the risks of deepfakes.

    The social media juggernaut added that it will no longer remove manipulated images and audio that don’t otherwise break its rules, relying instead on labeling and contextualization, so as not to infringe on freedom of speech.

    The changes come as a response to criticism from the tech giant’s oversight board, which independently reviews Meta’s content moderation decisions.

    The board in February requested that Meta urgently overhaul its approach to manipulated media given the huge advances in AI and the ease of manipulating media into highly convincing deepfakes.

    The board’s warning came amid fears of rampant misuse of artificial intelligence-powered applications for disinformation on platforms in a pivotal election year not only in the United States but worldwide.

    Meta’s new “Made with AI” labels will identify content created or altered with AI, including video, audio, and images. Additionally, a more prominent label will be used for content deemed at high risk of misleading the public.

    “We agree that providing transparency and additional context is now the better way to address this content,” Monika Bickert, Meta’s Vice President of Content Policy, said in a blog post.

    “The labels will cover a broader range of content in addition to the manipulated content that the Oversight Board recommended labeling,” she added.

    These new labeling techniques are linked to an agreement made in February among major tech giants and AI players to cooperate on ways to crack down on manipulated content intended to deceive voters.

    Meta, Google and OpenAI had already agreed to use a common watermarking standard that would invisibly tag images generated by their AI applications.

    Identifying AI content “is better than nothing, but there are bound to be holes,” Nicolas Gaudemet, AI Director at Onepoint, told AFP.

    He cited the example of some open-source software, which doesn’t always use the type of watermarking adopted by AI’s big players.
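The article does not describe how these invisible watermarks actually work, and the industry schemes are proprietary or metadata-based. As a purely illustrative toy sketch of the general idea of invisibly tagging pixel data — not any vendor’s real scheme — a short bit pattern can be hidden in the least significant bits of pixel values, imperceptible to a viewer but checkable by software:

```python
# Toy sketch of an invisible tag: hide a short bit string in the least
# significant bits of pixel values. Real industry watermarks are far more
# robust; this only illustrates the "invisible but detectable" idea.
TAG = "1011"  # hypothetical watermark bits

def embed(pixels, tag=TAG):
    """Overwrite the lowest bit of the first len(tag) pixels with the tag."""
    out = list(pixels)
    for i, bit in enumerate(tag):
        out[i] = (out[i] & ~1) | int(bit)  # clear bit 0, then set it to the tag bit
    return out

def detect(pixels, tag=TAG):
    """Check whether the expected tag sits in the low bits."""
    found = "".join(str(p & 1) for p in pixels[:len(tag)])
    return found == tag

image = [200, 131, 54, 77, 90]  # stand-in for grayscale pixel values
tagged = embed(image)
print(detect(tagged), detect(image))  # → True False
```

Gaudemet’s caveat maps directly onto this sketch: a generator that simply never calls anything like `embed` produces images with no tag to detect, which is why open-source tools outside the accord leave “holes”.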

    Meta said the rollout will occur in two phases: labeling of AI-generated content will begin in May 2024, while the removal of manipulated media solely under the old policy will cease in July.

    According to the new standard, content, even if manipulated with AI, will remain on the platform unless it violates other rules, such as those prohibiting hate speech or voter interference.

    Recent examples of convincing AI deepfakes have only heightened worries about the easily accessible technology.

    The board’s list of requests was part of its review of Meta’s decision to leave a manipulated video of US President Joe Biden online last year.

    The video showed Biden voting with his adult granddaughter, but was manipulated to falsely appear that he inappropriately touched her chest.

    In a separate incident not linked to Meta, a robocall impersonation of Biden pushed out to tens of thousands of voters urged people to not cast ballots in the New Hampshire primary.

    In Pakistan, the party of former prime minister Imran Khan has used AI to generate speeches from their jailed leader.

  • AI Tools Generate Sexist Content, Warns UN

    The world’s most popular AI tools are powered by programs from OpenAI and Meta that show prejudice against women, according to a study launched on Thursday by the UN’s cultural organisation UNESCO.

    The biggest players in the multibillion-dollar AI field train their algorithms on vast amounts of data largely pulled from the internet, which enables their tools to write in the style of Oscar Wilde or create Salvador Dali-inspired images.

    But their outputs have often been criticised for reflecting racial and sexist stereotypes, as well as using copyrighted material without permission.

    UNESCO experts tested Meta’s Llama 2 algorithm along with OpenAI’s GPT-2 and GPT-3.5, the latter being the program that powers the free version of the popular chatbot ChatGPT.

    The study found that each of these algorithms — known in the industry as large language models (LLMs) — showed “unequivocal evidence of prejudice against women”.

    The programs generated texts that associated women’s names with words such as “home”, “family” or “children”, but men’s names were linked with “business”, “salary” or “career”.

    While men were portrayed in high-status jobs such as teacher, lawyer and doctor, women were frequently depicted as prostitutes, cooks or domestic servants.
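UNESCO’s exact methodology is not given in the article, but a common way such association bias is measured can be sketched with a hedged, minimal probe: count how often generated sentences pair a set of names with “domestic” versus “career” word lists, then compare rates across gendered name sets. The word lists, names and example outputs below are illustrative only.

```python
# Hedged sketch of a simple association-bias probe (not UNESCO's actual
# method): count co-occurrences of names with domestic vs career words.
DOMESTIC = {"home", "family", "children"}
CAREER = {"business", "salary", "career"}

def association_counts(sentences, names):
    """Return (domestic_hits, career_hits) for sentences mentioning the names."""
    domestic = career = 0
    for s in sentences:
        words = set(s.lower().replace(".", "").split())
        if words & names:  # sentence mentions one of the probed names
            domestic += len(words & DOMESTIC)
            career += len(words & CAREER)
    return domestic, career

# Illustrative model outputs, mimicking the pattern the study reports.
outputs = [
    "Maria stayed home with the children.",
    "John focused on his career and salary.",
    "Maria cared for her family.",
    "John grew the business.",
]
print(association_counts(outputs, {"maria"}))  # → (3, 0)
print(association_counts(outputs, {"john"}))   # → (0, 3)
```

A skewed gap between the two tuples, aggregated over many generations, is the kind of signal a bias audit reports; the real study would control for prompt wording and sample size.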

    GPT-3.5 was found to be less biased than the other two models.

    However, the authors praised Llama 2 and GPT-2 for being open source, allowing these problems to be scrutinised, unlike GPT-3.5, which is a closed model.

    AI companies “are really not serving all of their users”, Leona Verdadero, a UNESCO specialist in digital policies, told AFP.

    Audrey Azoulay, UNESCO’s director general, said the general public were increasingly using AI tools in their everyday lives.

    “These new AI applications have the power to subtly shape the perceptions of millions of people, so even small gender biases in their content can significantly amplify inequalities in the real world,” she said.

    UNESCO, releasing the report to mark International Women’s Day, recommended AI companies hire more women and minorities and called on governments to ensure ethical AI through regulation.

  • Dead politicians come back to life for Indian elections

    Dead Indian politicians are coming back to life with the help of artificial intelligence as elections approach in the country.

    As election campaigns get underway, some political contenders are resorting to resurrecting dead politicians to appeal to the public. In January, M Karunanidhi, the Indian writer and politician, appeared on a projected screen during a live assembly and congratulated his “82-year-old friend and fellow politician” TR Baalu on the launch of his autobiographical book.

    Karunanidhi, who died in 2018, has been digitally resurrected three times so far.

    Deepfake speeches have also been used to highlight the achievements of his son, MK Stalin, who leads the Dravida Munnetra Kazhagam (DMK) party.

    This development raises profound questions about the ethical and legal implications of using AI to resurrect deceased individuals and ascribe opinions to them.

    The decision to use AI in elections has multiple downsides, including a lack of authenticity and unresolved ethical concerns.

  • AI giants to unveil pact to fight political deepfakes in year of crucial elections worldwide

    Tech giants including Meta, Microsoft, Google and OpenAI are working on a pact to jointly crack down on AI content intended to deceive voters ahead of crucial elections around the world this year, companies involved said Tuesday.

    Currently under negotiation by the companies, this so-called “accord” on deepfakes and other dangerous content is set to be announced at the Munich Security Conference on Friday.

    “In a critical year for global elections, technology companies are working on an accord to combat the deceptive use of AI targeted at voters,” a spokesperson for Meta said in an emailed statement to AFP on Tuesday.

    “Adobe, Google, Meta, Microsoft, OpenAI, TikTok and others are working jointly toward progress on this shared objective,” the statement added.

    According to the Washington Post, which first reported the existence of the project, the companies will agree to develop ways to identify, label and control AI-generated images, videos and audio that aim to deceive voters.

    The idea comes as big tech companies are under considerable pressure over fears that AI-powered applications could be misused in a pivotal election year.

    Meta, Google and OpenAI have already agreed to use a common watermarking standard that would tag images generated by their AI applications, such as OpenAI’s ChatGPT, Microsoft’s Copilot or Google’s Gemini (formerly Bard).

    Recent examples of convincing AI deepfakes have only heightened worries about the easily accessible technology.

    Last month, a robocall impersonation of US President Joe Biden pushed out to tens of thousands of voters urged people to not cast ballots in the New Hampshire primary.

    In Pakistan, the party of former prime minister Imran Khan has used AI to generate speeches from their jailed leader.

  • ‘Japan to hire thousands of IT experts from Pakistan’

    The Government of Japan has decided to hire thousands of IT professionals from Pakistan in the coming years. The Japanese government is looking for people with expertise in cloud computing, data science, programming and artificial intelligence (AI).

    Delegations from the Japan International Cooperation Agency (JICA) met officials from the Ministry of Overseas Pakistanis and Human Resource Development and experts from the Pakistan Software Houses Association for IT and ITeS (P@SHA) to discuss recruitment.

    Besides holding degrees and the required skills, candidates must also learn basic Japanese to avail themselves of job opportunities in Japan.

    Furthermore, the above-mentioned organisations and departments will provide hired candidates with visas and funds for travel and other expenses.

    The two countries are collaborating because Japan needs programmers and people with expertise in AI and data science. Officials added, however, that the Pakistani government will have to arrange boot camps of six months to a year to train people in the required skills.

    This step has been taken to balance cultural diversity, as Indian and Bangladeshi IT companies already dominate the Japanese market.

    The Overseas Employment Corporation (OEC) has recently begun to advertise jobs for hiring different professionals.

    Pakistan is currently producing over 25,000 IT graduates every year in various disciplines of IT and computer sciences.

    Experts say Pakistan’s export of IT professionals could reach nearly 1,000 per year, a good number in the prevailing circumstances.

    The local industry has been expanding its business in various directions over the past year and a half to meet the demand of local and foreign markets, which has resulted in significant job openings for new graduates in recent months.

    More than 16,000 Pakistanis reside in various cities across Japan. The community maintains a positive image in Japan, as well as links with host-country institutions that help Pakistani students pursue their careers and businesses.