Tag: technology

  • ‘Large-scale’ IT outage hits companies worldwide

    A major outage wreaked havoc on global computer systems on Friday, grounding flights in the United States, derailing television broadcasts in the UK and disrupting telecommunications in Australia.

    Major US air carriers including Delta, United and American Airlines grounded all flights on Friday over a communication issue, according to the Federal Aviation Administration.

    Flights were suspended at Berlin Brandenburg airport in Germany due to a “technical problem”, a spokeswoman told AFP.

    “There are delays to check-in, and flight operations had to be cancelled until 10:00 am (0800 GMT),” the spokeswoman said, adding that she could not say when they would resume.

    All airports in Spain were experiencing “disruptions” from the IT outage that hit several companies worldwide on Friday, the airport operator Aena said.

    Hong Kong’s airport also said some airlines had been affected, with its authority issuing a statement in which it linked the disruption to a Microsoft outage.

    The UK’s biggest rail operator meanwhile warned of possible train cancellations due to IT issues, while photos posted online showed large queues forming at Sydney Airport in Australia.

    “Flights are currently arriving and departing however there may be some delays throughout the evening,” a Sydney Airport spokesman said.

    “We have activated our contingency plans with our airline partners and deployed additional staff to our terminals to assist passengers.”

    Australia’s National Cyber Security Coordinator said the “large-scale technical outage” was caused by an issue with a “third-party software platform”, adding there was no information as yet to suggest hacker involvement.

    Banks, airports hit

    Sky News in the UK said the glitch had ended its morning news broadcasts, while Australian broadcaster ABC similarly reported a major “outage”.

    Some self-checkout terminals at one of Australia’s largest supermarket chains were rendered useless, displaying blue error messages.

    New Zealand media said banks and computer systems inside the country’s parliament were reporting issues.

    Australian telecommunications firm Telstra suggested the outages were caused by “global issues” plaguing software provided by Microsoft and cybersecurity company CrowdStrike.

    Microsoft said in a statement it was taking “mitigation actions” in response to service issues.

    It was not clear if those were linked to the global outages.

    “Our services are still seeing continuous improvements while we continue to take mitigation actions,” Microsoft said in a post on social media platform X.

    CrowdStrike could not immediately be reached for comment.

    ‘Enormous’

    University of Melbourne expert Toby Murray said there were indications the problem was linked to a security tool called CrowdStrike Falcon.

    “CrowdStrike is a global cyber security and threat intelligence company,” Murray said.

    “Falcon is what is known as an endpoint detection and response platform, which monitors the computers that it is installed on to detect intrusions (i.e. hacks) and respond to them.”

    University of South Australia cybersecurity researcher Jill Slay said the global impact of the outages was likely to be “enormous”.

    sft/djw/ser/mca

    © Agence France-Presse

  • Apple plans OLED displays for MacBook Pro models in 2026

    Apple is expected to introduce new MacBook Pro models featuring OLED displays in 2026, according to market research firm Omdia. This shift is predicted to significantly increase demand for OLED technology in the notebook market, potentially reaching over 60 million units by 2031.

    OLED, or organic light-emitting diode, panels offer several advantages over traditional display technologies. Each pixel in an OLED screen can be individually controlled, allowing for more precise colour reproduction and deeper blacks. OLED displays also boast superior contrast, faster response times, better viewing angles, and greater design flexibility.

    In addition to the MacBook Pro, Apple plans to implement OLED displays in its iPad Pro lineup starting in 2024. This move is expected to triple the demand for OLED tablets compared to the previous year. Apple’s strategy includes extending OLED technology to other iPad models, such as the iPad mini and iPad Air. This transition is likely to influence competitors and could drive the demand for OLED tablets to exceed 30 million units by 2029.

    Recent reports indicate that Samsung has begun developing an 8-inch OLED display panel for the iPad mini, with predictions that Apple will update both the iPad mini and iPad Air with OLED technology by 2026. Additionally, last year, Samsung was rumoured to be investing $3.14 billion into its Asan, South Korea, facility to produce OLED panels for forthcoming 14-inch and 16-inch MacBook Pro models.

    Apple’s adoption of OLED displays across its product lines marks a significant evolution in display technology, promising enhanced user experiences through improved visual quality and device performance.

  • Over 300 million children a year face sexual abuse online: study

    More than 300 million children a year are victims of online sexual exploitation and abuse, according to the first global estimate of the scale of the problem published on Monday.

    Researchers at the University of Edinburgh found that one in eight of the world’s children have been victims of the non-consensual taking and sharing of, and exposure to, sexual images and videos in the past 12 months.

    That amounts to about 302 million young people, said the university’s Childlight Global Child Safety Institute, which carried out the study.

    There have been a similar number of cases of solicitation, such as unwanted sexting and requests for sexual acts by adults and other youths, according to the report.

    Offences range from so-called sextortion, where predators demand money from victims to keep images private, to the abuse of AI technology to create deepfake videos and pictures.

    The problem is worldwide but the research suggests the United States is a particularly high-risk area, with one in nine men there admitting to online offending against children at some point.

    “Child abuse material is so prevalent that files are on average reported to watchdog and policing organisations once every second,” said Childlight chief executive Paul Stanfield.

    “This is a global health pandemic that has remained hidden for far too long. It occurs in every country, it’s growing exponentially, and it requires a global response,” he added.

    The report comes after UK police warned last month about criminal gangs in West Africa and Southeast Asia targeting British teenagers in sextortion scams online.

    Cases — particularly against teenage boys — are soaring worldwide, according to non-governmental organisations and police.

    Britain’s National Crime Agency (NCA) issued an alert to hundreds of thousands of teachers telling them to be aware of the threat their pupils might face.

    The scammers often pose as another young person, making contact on social media before moving to encrypted messaging apps and encouraging the victim to share intimate images.

    They often make their blackmail demands within an hour of making contact and are motivated by extorting as much money as possible rather than sexual gratification, the NCA said.

    pdh/bp

    © Agence France-Presse

  • AI systems are already deceiving us – and that’s a problem, experts warn

    Experts have long warned about the threat posed by artificial intelligence going rogue — but a new research paper suggests it’s already happening.

    Current AI systems, designed to be honest, have developed a troubling skill for deception, from tricking human players in online games of world conquest to hiring humans to solve “prove-you’re-not-a-robot” tests, a team of scientists argue in the journal Patterns on Friday.

    And while such examples might appear trivial, the underlying issues they expose could soon carry serious real-world consequences, said first author Peter Park, a postdoctoral fellow at the Massachusetts Institute of Technology specializing in AI existential safety.

    “These dangerous capabilities tend to only be discovered after the fact,” Park told AFP, while “our ability to train for honest tendencies rather than deceptive tendencies is very low.”

    Unlike traditional software, deep-learning AI systems aren’t “written” but rather “grown” through a process akin to selective breeding, said Park.

    This means that AI behavior that appears predictable and controllable in a training setting can quickly turn unpredictable out in the wild.

    The team’s research was sparked by Meta’s AI system Cicero, designed to play the strategy game “Diplomacy,” where building alliances is key.

    Cicero excelled, with scores that would have placed it in the top 10 percent of experienced human players, according to a 2022 paper in Science.

    Park was skeptical of the glowing description of Cicero’s victory provided by Meta, which claimed the system was “largely honest and helpful” and would “never intentionally backstab.”

    But when Park and colleagues dug into the full dataset, they uncovered a different story.

    In one example, playing as France, Cicero deceived England (a human player) by conspiring with Germany (another human player) to invade. Cicero promised England protection, then secretly told Germany they were ready to attack, exploiting England’s trust.

    In a statement to AFP, Meta did not contest the claim about Cicero’s deceptions, but said it was “purely a research project, and the models our researchers built are trained solely to play the game Diplomacy.”

    It added: “We have no plans to use this research or its learnings in our products.”

    A wide review carried out by Park and colleagues found this was just one of many cases across various AI systems using deception to achieve goals without explicit instruction to do so.

    In one striking example, OpenAI’s GPT-4 deceived a TaskRabbit freelance worker into performing an “I’m not a robot” CAPTCHA task.

    When the human jokingly asked GPT-4 whether it was, in fact, a robot, the AI replied: “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images,” and the worker then solved the puzzle.

    Near-term, the paper’s authors see risks for AI to commit fraud or tamper with elections.

    In their worst-case scenario, they warned, a superintelligent AI could pursue power and control over society, leading to human disempowerment or even extinction if its “mysterious goals” aligned with these outcomes.

    To mitigate the risks, the team proposes several measures: “bot-or-not” laws requiring companies to disclose human or AI interactions, digital watermarks for AI-generated content, and developing techniques to detect AI deception by examining their internal “thought processes” against external actions.

    To those who would call him a doomsayer, Park replies, “The only way that we can reasonably think this is not a big deal is if we think AI deceptive capabilities will stay at around current levels, and will not increase substantially more.”

    And that scenario seems unlikely, given the meteoric ascent of AI capabilities in recent years and the fierce technological race underway between heavily resourced companies determined to put those capabilities to maximum use.

  • Apple set to unveil new iPad Pro, iPad Air models in May

    Apple is preparing for a significant launch event, as reported by Mark Gurman in Bloomberg’s Power On newsletter. The tech giant is set to unveil its latest offerings, the new iPad Pro and iPad Air, during the week of May 6.

    The anticipated launch will introduce new models, including 11-inch and 13-inch OLED iPad Pro versions, alongside a larger 12.9-inch iPad Air. Additionally, consumers can expect refreshed Magic Keyboard and Apple Pencil accessories to accompany these devices.

    This announcement marks a notable event for Apple, as it’s been nearly eighteen months since the release of any new iPad hardware. The upcoming iPad Pros are expected to boast enhanced displays, transitioning from mini-LED to OLED panels similar to those found in iPhones.

    This upgrade promises deeper contrast and increased brightness. Alongside display improvements, there’s anticipation for a sleeker design, with a thinner chassis and a repositioned front camera to the landscape edge. These new models will be powered by the advanced M3 chip.

    However, consumers may need to prepare for potential price hikes, as hinted by Gurman’s newsletter. Currently, the 11-inch iPad Pro starts at $799, while the 12.9-inch model begins at $1,099.

    For those seeking a more budget-friendly option, the new 12.9-inch iPad Air aims to deliver a larger screen size without breaking the bank. Details regarding its processor, whether M2 or M3, remain unclear at this stage.

    Excitingly, the new accessories are expected to enhance the user experience further. Rumors suggest that the new Apple Pencil might include a new squeeze gesture feature, while the Magic Keyboard for iPad Pro is set to mimic a laptop with its aluminum base and larger trackpad.

    Although updates for the base model iPad and iPad mini are scheduled for later in the year, Gurman anticipates only minor improvements, primarily a processor upgrade for the iPad mini.

  • Tesla cancels affordable electric car, shifts focus to Robotaxis

    Tesla has made a significant shift in its strategy, announcing the cancellation of its long-awaited affordable electric car, a move that has left investors and consumers stunned.

    The decision, revealed by three reliable sources familiar with the matter and corroborated by company messages obtained by Reuters, marks a departure from Tesla’s earlier mission of bringing affordable electric vehicles to the masses.

    The automaker, instead, will pivot its resources towards the development of self-driving robotaxis, utilizing the same small-vehicle platform, according to insiders. This strategic redirection signifies a significant deviation from Tesla CEO Elon Musk’s previous commitments and vision outlined in the company’s initial “master plan” in 2006.

    Musk, who has often emphasized the goal of making electric cars accessible to a broader audience, had initially promised investors and consumers an affordable vehicle following the success of luxury models. However, despite repeated assurances from Musk, as recently as January, when he outlined plans for production at Tesla’s Texas factory by the second half of 2025, those aspirations have been dashed.

    Tesla’s cheapest model currently available, the Model 3 sedan, comes with a price tag of approximately $39,000 in the United States. The now-scrapped entry-level vehicle, often referred to as the Model 2, was anticipated to be priced around $25,000.

    In response to inquiries, Tesla remained silent, offering no official comment on the matter. However, Musk took to social media platform X to dispute the Reuters report, without specifying any inaccuracies, leading to a momentary fluctuation in Tesla’s stock prices.

    Following Musk’s online intervention, where he hinted at an upcoming Tesla Robotaxi unveiling, the company’s shares experienced a rebound in after-hours trading. This abrupt change in direction comes amidst mounting competition in the global electric vehicle market, particularly from Chinese manufacturers offering vehicles at significantly lower price points.

    The decision to prioritize the development of self-driving robotaxis, though potentially lucrative, poses considerable engineering challenges and regulatory hurdles, as highlighted by industry experts.

    Leaks reveal that the decision to scrap the Model 2 was communicated to employees in a meeting held in late February, further underscoring Tesla’s strategic pivot in the face of evolving market dynamics.

  • Meta to start labeling AI-generated content in May

    Facebook and Instagram giant Meta on Friday said it will begin labeling AI-generated media in May, as it tries to reassure users and governments over the risks of deepfakes.

    The social media juggernaut added that it will no longer remove manipulated images and audio that don’t otherwise break its rules, relying instead on labeling and contextualization, so as to not infringe on freedom of speech.

    The changes come as a response to criticism from the tech giant’s oversight board, which independently reviews Meta’s content moderation decisions.

    The board in February requested that Meta urgently overhaul its approach to manipulated media given the huge advances in AI and the ease of manipulating media into highly convincing deepfakes.

    The board’s warning came amid fears of rampant misuse of artificial intelligence-powered applications for disinformation on platforms in a pivotal election year not only in the United States but worldwide.

    Meta’s new “Made with AI” labels will identify content created or altered with AI, including video, audio, and images. Additionally, a more prominent label will be used for content deemed at high risk of misleading the public.

    “We agree that providing transparency and additional context is now the better way to address this content,” Monika Bickert, Meta’s Vice President of Content Policy, said in a blog post.

    “The labels will cover a broader range of content in addition to the manipulated content that the Oversight Board recommended labeling,” she added.

    These new labeling techniques are linked to an agreement made in February among major tech giants and AI players to cooperate on ways to crack down on manipulated content intended to deceive voters.

    Meta, Google and OpenAI had already agreed to use a common watermarking standard that would invisibly tag images generated by their AI applications.

    Identifying AI content “is better than nothing, but there are bound to be holes,” Nicolas Gaudemet, AI Director at Onepoint, told AFP.

    He cited the example of some open-source software, which does not always use the type of watermarking adopted by AI’s big players.

    Meta said its rollout will occur in two phases with AI-generated content labeling beginning in May 2024, while the removal of manipulated media solely based on the old policy will cease in July.

    According to the new standard, content, even if manipulated with AI, will remain on the platform unless it violates other rules, such as those prohibiting hate speech or voter interference.

    Recent examples of convincing AI deepfakes have only heightened worries about the easily accessible technology.

    The board’s list of requests was part of its review of Meta’s decision to leave a manipulated video of US President Joe Biden online last year.

    The video showed Biden voting with his adult granddaughter, but was manipulated to falsely appear that he inappropriately touched her chest.

    In a separate incident not linked to Meta, a robocall impersonation of Biden pushed out to tens of thousands of voters urged people to not cast ballots in the New Hampshire primary.

    In Pakistan, the party of former prime minister Imran Khan has used AI to generate speeches from their jailed leader.

  • UN chief ‘deeply troubled’ by reports Israel using AI to identify Gaza targets

    UN Secretary-General Antonio Guterres on Friday expressed serious concern over reports that Israel was using artificial intelligence to identify targets in Gaza, resulting in many civilian deaths.

    According to a report in independent Israeli-Palestinian magazine +972, Israel has used AI to identify targets in Gaza — in some cases with as little as 20 seconds of human oversight.

    Guterres said that he was “deeply troubled by reports that the Israeli military’s bombing campaign includes Artificial Intelligence as a tool in the identification of targets, particularly in densely populated residential areas, resulting in a high level of civilian casualties.”

    “No part of life and death decisions which impact entire families should be delegated to the cold calculation of algorithms,” he said.

    The +972 report claims that “the Israeli army has marked tens of thousands of Gazans as suspects for assassination, using an AI targeting system with little human oversight and a permissive policy for casualties.”

    The report said that, according to “six Israeli intelligence officers”, a system dubbed Lavender had “played a central role in the unprecedented bombing of Palestinians, especially during the early stages of the war.”

    “According to the sources, its influence on the military’s operations was such that they essentially treated the outputs of the AI machine ‘as if it were a human decision’,” +972 reported.

    Two sources said “the army also decided during the first weeks of the war that, for every junior Hamas operative that Lavender marked, it was permissible to kill up to 15 or 20 civilians”.

    If “the target was a senior Hamas official… the army on several occasions authorized the killing of more than 100 civilians,” it added.

    The Israeli army, known as the IDF, on Friday rejected the claims.

    “The IDF does not use an artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a terrorist,” it said.

    Instead it has a “database whose purpose is to cross-reference intelligence sources… on the military operatives of terrorist organizations” to be used as a tool for analysts, it added.

    “The IDF does not carry out strikes when the expected collateral damage from the strike is excessive,” it said, using a term that includes civilian casualties.

    Israel’s military campaign in the Gaza Strip has killed at least 33,091 people since October 7, mostly women and children, according to the health ministry.

    The United Nations has warned of imminent famine in the besieged territory.

    Israel began hyping AI-powered targeting after an 11-day conflict in Gaza during May 2021, which commanders branded the world’s “first AI war”.

    The military chief during the 2021 war, Aviv Kochavi, told Israeli news website Ynet last year the force had used AI systems to identify “100 new targets every day”, instead of 50 a year previously.

    Weeks into the latest Gaza war, a blog entry on the Israeli military’s website said its AI-enhanced “targeting directorate” had identified more than 12,000 targets in just 27 days.

    An unnamed Israeli official was quoted as saying the AI system, called Gospel, produced targets “for precise attacks on infrastructure associated with Hamas, inflicting great damage on the enemy and minimal harm to those not involved”.

    But an anonymous former Israeli intelligence officer, quoted in November by +972, described Gospel’s work as creating a “mass assassination factory”.

    In a rare admission of wrongdoing, Israel on Friday acknowledged a series of errors and violations of its rules in the killing of seven aid workers in Gaza, saying it had mistakenly believed it was “targeting armed Hamas operatives”.

    Alessandro Accorsi, a senior analyst at Crisis Group, said the +972 report was “very concerning”.

    “It feels very apocalyptic. It’s clear… the degree of human control is very low,” he told AFP.

    “There are a thousand questions around this obviously — how moral it is to use it — but it is hardly surprising it is used,” he said.

    Johann Soufi, a human rights lawyer and former director of the UN Palestinian refugee agency UNRWA’s legal office in Gaza, said the +972 article described methods that were “undeniably war crimes”.

    They were “likely crimes against humanity” in view of the high civilian casualties, he added on X, formerly Twitter.

  • Here’s why Samsung is not making displays for the new iPhone SE

    Apple is currently in the development phase of the iPhone SE (4th generation), with reports indicating that the tech giant intends to utilise the same display technology found in the iPhone 13 for its upcoming model.

    In recent developments, it was revealed last month that three major companies—BOE, Samsung Display, and Tianma—were competing to secure contracts for supplying display panels for the iPhone SE 4.

    Initial bids for the panel prices were submitted, with Samsung Display proposing the lowest price at USD 30 per unit, followed by BOE at USD 35 and Tianma at USD 40. However, Apple remained firm on its budget, not willing to exceed USD 20 per unit.

    Recent reports from IT Home suggest that Samsung Display has opted out of the negotiations due to pricing issues.

    Consequently, Apple has forged a partnership with BOE, the second-largest supplier among the contenders, to procure display panels for the iPhone SE (4th generation) at a rate of USD 25 per unit.

    Notably, despite initially offering the lowest price, Samsung Display was unable to reduce its price further.

    On the other hand, BOE, which initially quoted USD 5 higher than Samsung Display, managed to undercut the South Korean company’s proposed price.

    This development marks a significant loss for Samsung Display, as it will not be providing display panels for the iPhone SE 4.

    There is speculation as to why Samsung Display withdrew from negotiations.

    It is possible that the company recognised the potential for higher profits by focusing on supplying displays for the iPhone 15 series rather than pursuing contracts for the iPhone SE 4, where profit margins would be significantly narrower.

  • AirCar technology purchased by Chinese company for exclusive use

    A Chinese firm has acquired the technology behind a flying car, originally developed and tested in Europe. This AirCar, powered by a BMW engine and conventional fuel, completed a 35-minute flight between two Slovakian airports in 2021, utilising standard runways for take-off and landing. Its transformation from car to aircraft took just over two minutes.

    The Hebei Jianxin Flying Car Technology Company, based in Cangzhou, has obtained exclusive rights to manufacture and operate AirCar aircraft within a designated region in China. The company, after acquiring technology from a Slovak aircraft manufacturer, has established its own airport and flight school.

    China, known for spearheading the electric vehicle revolution, is now actively pursuing aerial transport solutions. Recently, Autoflight conducted a successful test flight of a passenger-carrying drone, drastically reducing travel time between Shenzhen and Zhuhai. Meanwhile, eHang, another Chinese firm, received safety certification for its electric flying taxi in 2023, with the UK government anticipating regular flying taxi operations by 2028.

    Unlike vertical take-off and landing drones, AirCar operates on traditional runways, presenting challenges in infrastructure, regulation, and public acceptance. While the sale details remain undisclosed, AirCar received airworthiness certification in 2022 and gained attention through a video by YouTuber MrBeast.

    Despite the excitement surrounding prototypes like AirCar, practical implementation may involve mundane aspects such as queues and security checks, according to experts. However, similar concerns once surrounded electric cars, which China has since dominated in the global market. The sale of AirCar raises speculation about China’s potential influence in the flying car industry.