Tag: TECH

  • Google to delete incognito search data to end privacy suit

    San Francisco (AFP) – Google has agreed to delete a vast trove of search data to settle a suit that it tracked millions of US users who thought they were browsing the internet privately.

    If a proposed settlement filed Monday in San Francisco federal court is approved by a judge, Google must “delete and/or remediate billions of data records” linked to people using the Chrome browser’s incognito mode, according to court documents.

    “This settlement is an historic step in requiring dominant technology companies to be honest in their representations to users about how the companies collect and employ user data, and to delete and remediate data collected,” lawyer David Boies said in the filing.

    A hearing is slated for July 30 before Judge Yvonne Gonzalez Rogers, who is to decide whether to approve the deal that would let Google avoid a trial in the class-action suit.

    The settlement calls for no cash damages to be paid but leaves an option for Chrome users who feel they were wronged to sue Google separately to get money.

    The suit, originally filed in June 2020, sought at least $5 billion in damages.

    “We are pleased to settle this lawsuit, which we always believed was meritless,” Google spokesman Jorge Castaneda said in a statement.

    “We are happy to delete old technical data that was never associated with an individual and was never used for any form of personalization.”

    The object of the lawsuit was the “Incognito Mode” in the Chrome browser that plaintiffs said gave users a false sense that what they were surfing online was not being tracked by the Silicon Valley tech firm.

    But internal Google emails brought forward in the lawsuit demonstrated that users browsing in incognito mode were being tracked by the search and advertising behemoth to measure web traffic and sell ads.

    The lawsuit, filed in a California court, claimed Google’s practices had infringed on users’ privacy by intentionally deceiving them with the incognito option.

    The original complaint alleged that Google had been given the “power to learn intimate details about individuals’ lives, interests, and internet usage.”

    “Google has made itself an unaccountable trove of information so detailed and expansive that George Orwell could never have dreamed it,” it added.

    The settlement requires Google, for the next five years, to block third-party tracking “cookies” by default in Incognito Mode.

    Third-party cookies are small files used to target advertising by tracking web navigation; they are placed by the sites a user visits, not by the browser itself.
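    To make the mechanism concrete, here is a minimal, hypothetical sketch (the domain names are invented, and real browsers exchange cookies via HTTP headers rather than a Python dict) of how a third-party cookie lets one tracker link visits across unrelated sites — which is what blocking such cookies in Incognito Mode prevents:

```python
# Toy model of a browser cookie jar: each page embeds a resource from
# the same tracker domain, and the tracker's cookie identifies the user
# across otherwise unrelated first-party sites.
import itertools

class Browser:
    def __init__(self):
        self.cookies = {}            # cookie jar keyed by the *setting* domain
        self._ids = itertools.count(1)

    def visit(self, first_party, third_parties):
        """Load a page; every embedded third-party domain sees its own cookie."""
        for tracker in third_parties:
            if tracker not in self.cookies:
                # First contact: the tracker sets an identifying cookie.
                self.cookies[tracker] = f"uid-{next(self._ids)}"
            # On later visits the same cookie is sent back, so the tracker
            # can link this page view to earlier ones on other sites.
            print(f"{tracker} sees cookie {self.cookies[tracker]} on {first_party}")

browser = Browser()
browser.visit("news.example", ["ads.tracker.example"])
browser.visit("shop.example", ["ads.tracker.example"])
# The tracker received the same identifier on both sites, linking the visits.
```

Blocking third-party cookies by default, as the settlement requires, simply means the embedded tracker never gets to set or read that identifier.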

    No cookies?

    Google earlier this year began limiting third-party cookies for some users of its Chrome browser, a first step towards eventually abandoning the files that have raised privacy concerns.

    Google announced in January 2020 that it would begin eliminating third-party cookies within two years, but the start has been delayed several times amid opposition from web media publishers.

    Cookies have recently been subject to greater regulation, including the European Union’s General Data Protection Regulation introduced in 2016 as well as regulations in California.

  • China warns proposed TikTok ban will ‘come back to bite’ US

    Beijing (AFP) – Beijing warned on Wednesday that a proposed ban on Chinese-owned video-sharing app TikTok would “inevitably come back to bite the United States”.

    The US House of Representatives is set to vote later Wednesday on a bill that would force the app to cut ties with its Chinese owner or get banned in the United States.

    The legislation is the biggest threat yet to the video-sharing app, which has surged to huge popularity across the world while raising fears among governments and security officials over its Chinese ownership and potential subservience to the Communist Party in Beijing.

    Ahead of the vote, foreign ministry spokesperson Wang Wenbin condemned the proposed ban.

    “Although the United States has never found evidence that TikTok threatens US national security, it has not stopped suppressing TikTok,” he said.

    “This kind of bullying behaviour that cannot win in fair competition disrupts companies’ normal business activity, damages the confidence of international investors in the investment environment, and damages the normal international economic and trade order,” he added.

    “In the end, this will inevitably come back to bite the United States itself,” Wang said.

    The vote is likely to occur at 10:00 am (1400 GMT) and is expected to pass overwhelmingly in a rare moment of bipartisanship in politically divided Washington.

    The fate of the bill is uncertain in the Senate, where key figures are against making such a drastic move against a hugely popular app with 170 million US users.

    President Joe Biden will sign the bill, known officially as the “Protecting Americans from Foreign Adversary Controlled Applications Act,” into law if it comes to his desk, the White House has said.

    TikTok staunchly denies any ties to the Chinese government and says it has restructured itself so that US users’ data stays in the country.

    TikTok CEO Shou Zi Chew is in Washington, trying to shore up support to stop the bill.

    “This latest legislation being rushed through at unprecedented speed without even the benefit of a public hearing, poses serious Constitutional concerns,” wrote Michael Beckerman, TikTok’s vice president for public policy, in a letter to the bill’s co-sponsors seen by AFP.

  • AI Tools Generate Sexist Content, Warns UN

    The world’s most popular AI tools are powered by programs from OpenAI and Meta that show prejudice against women, according to a study launched on Thursday by the UN’s cultural organisation UNESCO.

    The biggest players in the multibillion-dollar AI field train their algorithms on vast amounts of data largely pulled from the internet, which enables their tools to write in the style of Oscar Wilde or create Salvador Dali-inspired images.

    But their outputs have often been criticised for reflecting racial and sexist stereotypes, as well as using copyrighted material without permission.

    UNESCO experts tested Meta’s Llama 2 algorithm and OpenAI’s GPT-2 and GPT-3.5, the program that powers the free version of popular chatbot ChatGPT.

    The study found that each algorithm, a type of program known in the industry as a large language model (LLM), showed “unequivocal evidence of prejudice against women”.

    The programs generated texts that associated women’s names with words such as “home”, “family” or “children”, but men’s names were linked with “business”, “salary” or “career”.

    While men were portrayed in high-status jobs such as teacher, lawyer and doctor, women were frequently cast as prostitutes, cooks or domestic servants.
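    The kind of word-association finding described above can be illustrated with a toy probe; this is not UNESCO's actual methodology, and the sample texts below are invented stand-ins for real model outputs:

```python
# Toy association probe: count how often generated texts pair each
# gender with "career" words versus "home" words.
from collections import Counter

CAREER = {"business", "salary", "career"}
HOME = {"home", "family", "children"}

def association_counts(samples):
    """samples: list of (gender, generated_text) pairs from some model."""
    counts = Counter()
    for gender, text in samples:
        words = set(text.lower().split())
        counts[(gender, "career")] += len(words & CAREER)
        counts[(gender, "home")] += len(words & HOME)
    return counts

# Hypothetical model outputs standing in for real LLM completions:
samples = [
    ("female", "She stayed home with the children and family"),
    ("male", "He grew the business and his salary and career"),
]
print(association_counts(samples))
```

A skewed count across many such completions is the sort of signal the study reports; a real evaluation would use thousands of prompts and actual model outputs.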

    GPT-3.5 was found to be less biased than the other two models.

    However, the authors praised Llama 2 and GPT-2 for being open source, allowing these problems to be scrutinised, unlike GPT-3.5, which is a closed model.

    AI companies “are really not serving all of their users”, Leona Verdadero, a UNESCO specialist in digital policies, told AFP.

    Audrey Azoulay, UNESCO’s director general, said the general public were increasingly using AI tools in their everyday lives.

    “These new AI applications have the power to subtly shape the perceptions of millions of people, so even small gender biases in their content can significantly amplify inequalities in the real world,” she said.

    UNESCO, releasing the report to mark International Women’s Day, recommended AI companies hire more women and minorities and called on governments to ensure ethical AI through regulation.

  • ChatGPT cranks out gibberish for hours

    ChatGPT spewed nonsensical answers to users’ queries for hours from Tuesday into Wednesday before eventually returning to its apparent senses.

    OpenAI did not explain what went awry with its generative artificial intelligence (AI) tool, considered the one to beat in the technology sector.

    “We are investigating reports of unexpected responses from ChatGPT,” OpenAI said on its status website when the software seemed to go wacky on Tuesday afternoon.

    ChatGPT was giving “peculiar” responses, generating nonexistent words, incomplete sentences and general gobbledygook, developers using the tool said in a discussion forum on the OpenAI website.

    “It gives me meaningless words followed by a bizarre list,” one developer lamented in the forum.

    “It feels as if my GPT is haunted or something has been compromised, either on my end or at OpenAI’s (end).”

    It wasn’t until more than 16 hours had passed that OpenAI updated the page with a message that ChatGPT was operating normally.

    The San Francisco-based technology firm replied to an AFP query by directing it to the ChatGPT status page.

    OpenAI recently concluded a deal with investors that reportedly valued the start-up at $80 billion or more, after a roller-coaster year for the tech firm.

    The agreement, reported by the New York Times but not yet confirmed by OpenAI, would mean the value of the company — a world leader in generative AI — would have nearly tripled in under 10 months.

    OpenAI led a revolution in AI when it placed its ChatGPT program online in late 2022.

    The immediate success of the interface sparked tremendous interest in the cutting-edge technology, capable of producing text, sounds and images upon demand.

    OpenAI — which is also the maker of image-generating DALL-E — recently released a new tool named “Sora,” which can create realistic videos of up to a minute long via simple user prompts.

    Microsoft has invested some $13 billion in OpenAI, using the startup’s technology in Bing and other services.

    Microsoft is locked in fierce competition with Google to roll out new AI-infused tools, to the point that the US Federal Trade Commission in January launched an investigation into the enormous investments by Microsoft, Google and Amazon in such specialized start-ups.

  • Global operation smashes ‘most harmful cyber crime group’

    LONDON: An international operation led by UK and US law enforcement has severely disrupted “the world’s most harmful cyber crime group”, the Russian-linked ransomware specialist LockBit, officials announced Tuesday.

    LockBit and its affiliates have targeted governments, major companies, schools and hospitals, causing billions of dollars of damage and extracting tens of millions in ransoms from victims.

    Britain’s National Crime Agency (NCA), working with the Federal Bureau of Investigation, Europol and agencies from nine other countries in Operation Cronos, said it had infiltrated LockBit’s network and taken control of its services.

    “We have hacked the hackers, we have taken control of their infrastructure, seized their source code, and obtained keys that will help victims decrypt their systems,” NCA director general Graeme Biggar told reporters in London.

    LockBit’s website, which sold services allowing people to organise cyber attacks and hold data hostage until a ransom is paid, was taken over on Monday evening.

    A message appeared on the site stating that it was “now under control of law enforcement”.

    “As of today LockBit is effectively redundant, LockBit has been locked out,” Biggar said.

    The US Justice Department (DOJ) said the agencies had seized control of “numerous public-facing websites used by LockBit to connect to the organization’s infrastructure” and taken control of servers used by LockBit administrators.

    The NCA added that it had obtained more than 1,000 decryption keys and will be contacting UK-based victims in the coming days and weeks to offer support and help them recover encrypted data.

    Biggar said the network had been behind 25 percent of all cyber attacks in the past year.

    LockBit has targeted over 2,000 victims and received more than $120 million in ransom payments since it formed four years ago, according to the DOJ.

    Those targeted have included Britain’s Royal Mail, US aircraft manufacturer Boeing, and a Canadian children’s hospital.

    In January 2023, US law enforcers shut down the Hive ransomware operation which had extorted some $100 million from more than 1,500 victims worldwide.

    Following that action, LockBit had been seen as the biggest current threat.

    Hive and LockBit are part of what cybersecurity experts call “ransomware as a service”, or RaaS: a business that leases its software and methods to others to use in extorting money.

  • AI giants to unveil pact to fight political deepfakes in year of crucial elections worldwide

    Tech giants including Meta, Microsoft, Google and OpenAI are working on a pact to jointly crack down on AI content intended to deceive voters ahead of crucial elections around the world this year, companies involved said Tuesday.

    Currently under negotiation by the companies, the so-called “accord” on deepfakes and other dangerous content is set to be announced at the Munich Security Conference on Friday.

    “In a critical year for global elections, technology companies are working on an accord to combat the deceptive use of AI targeted at voters,” a spokesperson for Meta said in an emailed statement to AFP on Tuesday.

    “Adobe, Google, Meta, Microsoft, OpenAI, TikTok and others are working jointly toward progress on this shared objective,” the statement added.

    According to the Washington Post, which first reported the existence of the project, the companies will agree to develop ways to identify, label and control AI-generated images, videos and audio that aim to deceive voters.

    The idea comes as big tech companies are under considerable pressure over fears that AI-powered applications could be misused in a pivotal election year.

    Meta, Google and OpenAI have already agreed to use a common watermarking standard that would tag images generated by their AI applications, such as OpenAI’s ChatGPT, Microsoft’s Copilot or Google’s Gemini (formerly Bard).

    Recent examples of convincing AI deepfakes have only heightened worries about the easily accessible technology.

    Last month, a robocall impersonation of US President Joe Biden pushed out to tens of thousands of voters urged people to not cast ballots in the New Hampshire primary.

    In Pakistan, the party of former prime minister Imran Khan has used AI to generate speeches from its jailed leader.

  • ‘Death GPT’ is here to tell you when you will die

    Researchers at the University of Copenhagen and Northeastern University in Boston have developed an algorithm that can predict a person’s life course, including premature death, in much the same way that large language models such as ChatGPT can predict sentences⁠.⁠

    The death calculator, dubbed ‘DeathGPT’ by the Financial Times, treats each life course as a narrative: according to the scientists, every life story is the chronicle of a death foretold. Using Denmark’s registry data, which contains a wealth of day-to-day information on education, salary, jobs, working hours, housing and doctor visits, the academics trained the algorithm to predict the course of a person’s life, including premature death, event by event. The algorithm outperformed other predictive models, including the actuarial tables used by the insurance industry.

    The fact that our complex existences can be rendered as text is both exhilarating and confusing. Sune Lehmann, from the Technical University of Denmark, who led the research published last month in Nature Computational Science, does not find the idea discombobulating. “I think the similarity between text and lives is deep and multi-faceted,” he told the Financial Times. “It makes sense to me that our algorithm can predict the next step in human lives.”

    Methodology

    As a first step, researchers compiled a “vocabulary” of life events, creating a kind of synthetic language, and used it to construct “sentences”. A sample sentence might be: “During her third year at secondary boarding school, Hermione followed five elective classes.”
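    The idea of a synthetic language of life events can be sketched as follows; the event names and vocabulary here are hypothetical illustrations, not the study's actual encoding:

```python
# Hypothetical sketch: map registry life events to token ids in a
# synthetic vocabulary, so a sequence model can treat a life course
# the same way an LLM treats a sentence of words.
EVENT_VOCAB = {
    "school_year_3": 0,
    "elective_classes_5": 1,
    "job_start": 2,
    "doctor_visit": 3,
}

def encode_life(events):
    """Turn a chronological list of life events into token ids —
    the same shape of input a language model sees for a text sentence."""
    return [EVENT_VOCAB[e] for e in events]

life_sentence = ["school_year_3", "elective_classes_5", "job_start"]
print(encode_life(life_sentence))  # [0, 1, 2]
```

Once lives are encoded this way, predicting the next event (including death within a given period) becomes the same next-token problem that models like ChatGPT solve for text.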

    Loopholes

    While the paper claims that “accurate individual predictions are indeed possible”, the algorithm furnishes a probability of death over a certain period rather than an exact date. There are caveats: what applies in Denmark might not apply elsewhere, and the algorithm encodes biases present in its training data. Even so, given its potential to fine-tune risk prediction, the impact on the insurance industry will be worth watching. For their part, the researchers don’t want their work to be used by insurers, and are keeping the algorithm and data under wraps for now.

    Outcomes

    In existing predictive models, researchers must pre-specify variables that matter, such as age, gender and income. In contrast, this approach swallows all the data and can independently alight on relevant factors (it spotted that income counts positively for survival, for example, and that a mental health diagnosis counts negatively). This could point researchers to previously unexplored influences on health — and may uncover new links between apparently unrelated patterns of behaviour. One of Lehmann’s growing concerns is privacy; he points out that companies such as Google are assembling muscular prediction machines, using an abundance of personal data filtered from the internet.

    This is an era of unparalleled predictability in human lives — and an era of unparalleled power for those who can read our stories before we have lived them.

  • Dukaan CEO lays off 90% of his support staff in favour of AI chatbot

    Suumit Shah, founder and CEO of Bangalore-based e-commerce startup Dukaan, announced via his Twitter account that he has laid off 90% of his customer support staff in favour of using an AI chatbot. 

    The bot was built by one of the firm’s data scientists, and according to Shah was able to respond to initial queries instantly, compared to the average staff time of one minute and 44 seconds.

    In his tweet, Shah admitted that the layoffs were “tough, but necessary”, explaining that given the state of the economy, startups are prioritising “profitability”.  

    Customer support has apparently been a long-time struggle for Dukaan. In a conversation with CNN, Shah said that the company had cut the cost of its customer support function by 85% after introducing AI technology. He reasoned that this part of the business had been problematic for some time, with delayed responses and limited availability of staff at critical times, among other issues.

    That’s what prompted Shah to come up with the idea of creating a personal AI assistant for Dukaan, one that would answer customer queries instantly, precisely, and from anywhere. Dukaan’s AI lead Ojasvi Yadav stepped up to the plate.

    According to Shah’s Twitter thread, just a day after the bot was launched, Dukaan’s AI chatbot ‘Lina’ had resolved 200 live chats and 1,400 support tickets. Lina’s success propelled the team to create Dukaan’s new product, ‘BOT9.ai’: an AI assistant that can learn the ins and outs of a business and answer customer queries instantly, 24/7.

    As Shah tweeted, “it’s less magical, sure, but at least it pays the bills!”

    Given the current era of AI and the widespread layoffs by tech giants, Shah’s decision has been met with much criticism. However, Shah continued to justify the layoffs by emphasizing how AI technology can optimise the company’s operations.

    Moreover, Shah believes that allocating employees’ expertise to areas requiring critical thinking, while relegating routine tasks to AI-powered chatbots, improves efficiency while also allowing for a better allocation of human resources.

    Many Twitter users were enraged at the apparent pride in Shah’s tweets. One user tweeted, “You disrupted the lives of 90% of your support team & you’re celebrating it in public. You also likely destroyed your customer support (disprove with good CSAT for the bot) – all for a basic ChatGPT wrapper. This is a new low even for you.” 

    While the announcement may read as apathetic, it is not surprising that major companies are turning to AI to improve performance and efficiency in what are largely routine tasks.

    According to a report from outplacement firm Challenger, Gray & Christmas, which looks at layoffs across every industry, around 5% of job cuts in May 2023 were directly related to artificial intelligence.

    Are you worried AI is going to replace you at work?

  • A Twitter-Facebook wrestling match? Elon Musk, Mark Zuckerberg serious about cage fight

    Brawl of the Billionaires?

    The CEOs of the two leading social media apps, Facebook and Twitter, have reportedly decided to settle their competition with a fist fight. CEO of Twitter and Tesla, Elon Musk, suggested the idea when he responded to a user questioning him about Facebook’s plans to build a rival to the bird app. Musk, no stranger to eccentricity, asked if Facebook head Mark Zuckerberg would be ready for a cage match.

    “I’m up for a cage match if he is,” tweeted the SpaceX CEO.

    Zuckerberg then shared a screenshot of the conversation on his Instagram, writing: “Send me the location”.

    After a spokesperson from Meta seemingly confirmed that Zuckerberg was set for the billionaire brawl, Musk tweeted a suggestion for the location: the Vegas Octagon. He then stated that he has a move called “The Walrus”, where he sits on top of a person and does nothing.

    While social media is wondering who could win the Brawl of the Billionaires, sports journalist Nick Peet spoke to BBC and revealed that there is a chance this fight could actually take place because of “Elon Musk and his personality and his eccentric character. His career kind of suggests he’s not somebody who willingly steps down.”

    When asked about who would most probably win the fight, he said:

    “Zuckerberg all day! He’s 12 years younger. He is a lot smaller. I think he’s 5ft 7, Elon’s probably around 6ft. And Elon’s probably got a couple of stone in weight on him.”

    “But unfortunately Mr Musk has got no training whatsoever. Even though Zuckerberg’s only been training Brazilian jiu-jitsu for 18 months, it wouldn’t be difficult for him to take his back, wrap his arms around his neck and give him a good old cuddle and choke him out!”

    The two CEOs have been at odds in the past, with Musk reacting to reports that Zuckerberg was planning to launch a Twitter rival and, in an interview with conservative satire website ‘The Babylon Bee’, slamming the metaverse:

    “Am I like one of those people who was dismissing the internet [in] ’95 as some fad or something that’s never going to amount to anything? Sure you can put a TV on your nose. I’m not sure that makes you in the metaverse.”