Tag: Artificial Intelligence

  • ‘Every actor has the right to protect themselves’: Anil Kapoor wins landmark case against AI

    Veteran Bollywood actor Anil Kapoor has won a landmark case preventing his image from being used in any manner by Artificial Intelligence (AI) technology.

    The actor filed a case with the Delhi High Court, after numerous morphed videos and emojis featuring his iconic phrase ‘jhakas’ from the 1985 film ‘Yudh’ went viral on social media.

    The suit sought protection of the actor’s personality rights, including his name, image, likeness and voice, against any misuse on social media, and listed various instances in which the actor’s attributes had been misused. After a detailed hearing, the court sided with Anil Kapoor, acknowledging his personality rights and restraining users from misusing these attributes without his permission or consent.

    Speaking to Variety, Anil Kapoor said he was happy with the order.

    “I’m very happy with this court order, which has come in my favor, and I think it’s very progressive and great for not only me but for other actors also. Because of the way technology and the AI technology, which is evolving every day [and] which can completely take advantage of and be misused commercially, as well as where my image, voice, morphing, GIFs and deep fakes are concerned, I can straight away, if that happens, send a court order and injunction and they have to pull it down.”

    “It’s not only for me,” the ‘Slumdog Millionaire’ actor stressed. “Today, I’m there to protect myself, but when I’m not there, the family should have the right to protect my [personality] and gain from it in future.”

    “My intention is not to interfere with anyone’s freedom of expression or to penalize anyone. My intent was to seek protection of my personality rights and prevent any misuse for commercial gains, particularly in the current scenario with rapid changes in technology and tools like artificial intelligence.”

    AI is a central element to the SAG-AFTRA strikes in Hollywood. Anil Kapoor expressed solidarity with the ongoing strike.

    “This [the court order] should be great positive news for all of them to a certain extent. And I am always, completely with them in every which way, and I feel their rights should be protected, because everybody, big, small, popular, not popular, every actor has the right to protect themselves and their rights.”

  • AI-generated song with vocals by Drake, The Weeknd submitted for 2 Grammys

    ‘Heart On My Sleeve’ has been submitted to the Grammys for two awards this year: ‘Song Of The Year’ and ‘Best Rap Song’. The song, which features vocals mimicking Drake and Abel Tesfaye, aka The Weeknd, is AI-generated. The New York Times reports that an anonymous songwriter known as Ghostwriter wrote the track, as well as another song, ‘Whiplash’, which features vocals mimicking Travis Scott and 21 Savage.

    Here’s the catch: none of these singers have actually sung the songs.

    ‘Heart On My Sleeve’ racked up 600,000 streams on Spotify and 15 million views on TikTok before being removed after Drake and The Weeknd’s label, Universal Music Group, intervened. But, the publication reports, the song was still eligible for the two awards: the categories recognize writers rather than performers, and since the track was written by a human, it slipped through a loophole in the Recording Academy’s June announcement banning AI-generated songs from consideration.

    Harvey Mason, chief executive of the Recording Academy, said:

    “I knew right away as soon as I heard that record that it was going to be something that we had to grapple with from an Academy standpoint, but also from a music community and industry standpoint. When you start seeing A.I. involved in something so creative and so cool, relevant and of-the-moment, it immediately starts you thinking, ‘OK, where is this going? How is this going to affect creativity? What’s the business implication for monetization?’”

    Meanwhile, Ghostwriter posted a lengthy statement on Twitter, calling on 21 Savage and Travis Scott to release the collaboration, and clarified that, out of respect for the artists, he would direct royalties earned from the song to them:

    “The future of music is here. Artists now have the ability to let their voice work for them without lifting a finger. If you’re down to put it out, I will clearly label it as A.I., and I’ll direct royalties to you. Respect either way.”

  • Rise of the machines: AI spells danger for Hollywood stunt workers

    By Andrew MARSZAL

    Hollywood’s striking actors fear that artificial intelligence is coming for their jobs — but for many stunt performers, that dystopian danger is already a reality.

    From “Game of Thrones” to the latest Marvel superhero movies, cost-slashing studios have long used computer-generated background figures to reduce the number of actors needed for battle scenes.

    Now, the rise of AI means cheaper and more powerful techniques are being explored to create highly elaborate action sequences such as car chases and shootouts — without those pesky (and expensive) humans.

    Stunt work, a time-honored Hollywood tradition that has spanned from silent epics through to Tom Cruise’s latest “Mission: Impossible,” is at risk of rapidly shrinking.

    “The technology is exponentially getting faster and better,” said Freddy Bouciegues, stunt coordinator for movies like “Free Guy” and “Terminator: Dark Fate.”

    “It’s really a scary time right now.”

    Studios are already requiring stunt and background performers to take part in high-tech 3D “body scans” on set, often without explaining how or when the images will be used.

    Advancements in AI mean these likenesses could be used to create detailed, eerily realistic “digital replicas,” which can perform any action or speak any dialogue their creators wish.

    Bouciegues fears producers could use these virtual avatars to replace “nondescript” stunt performers — such as those playing pedestrians leaping out of the way of a car chase.

    “There could be a world where they said, ‘No, we don’t want to bring these 10 guys in… we’ll just add them in later via effects and AI.’ Now those guys are out of the job.”

    But according to director Neill Blomkamp, whose new film “Gran Turismo” hits theaters August 25, even that scenario only scratches the surface.

    The role AI will soon play in generating images from scratch is “hard to compute,” he told AFP.

    “Gran Turismo” primarily uses stunt performers driving real cars on actual racetracks, with some computer-generated effects added on top for one particularly complex and dangerous scene.

    But Blomkamp predicts that, in as little as six to 12 months, AI will reach a point where it can generate photorealistic footage such as high-speed crashes based on a director’s instructions alone.

    At that point, “you take all of your CG (computer graphics) and VFX (visual effects) computers and throw them out the window, and you get rid of stunts, and you get rid of cameras, and you don’t go to the racetrack,” he told AFP.

    “It’s that different.”

    – The human element –

    The lack of guarantees over the future use of AI is one of the major factors at stake in the ongoing strike by the Screen Actors Guild (SAG-AFTRA) and Hollywood’s writers, who have been on the picket lines for 100 days.

    SAG-AFTRA last month warned that studios intend to create realistic digital replicas of performers, to use “for the rest of eternity, in any project they want” — all for the payment of one day’s work.

    The studios dispute this, and say they have offered rules including informed consent and compensation.

    But as well as the potential implications for thousands of lost jobs, Bouciegues warns that no matter how good the technology has become, “the audience can still tell” when the wool is being pulled over their eyes by computer-generated VFX.

    Even if AI can perfectly replicate a battle, explosion or crash, it cannot supplant the human element that is vital to any successful action film, he said, pointing to Cruise’s recent “Top Gun” and “Mission: Impossible” sequels.

    “He uses real stunt people, and he does real stunts, and you can see it on the screen. For me, I feel like it subconsciously affects the viewer,” said Bouciegues.

    Current AI technology still gives “slightly unpredictable results,” agreed Blomkamp, who began his career in VFX and directed the Oscar-nominated “District 9.”

    “But it’s coming… It’s going to fundamentally change society, let alone Hollywood. The world is going to be different.”

    For stunt workers like Bouciegues, the best outcome now is to blend the use of human performers with VFX and AI to pull off sequences that would be too dangerous with old-fashioned techniques alone.

    “I don’t think this job will ever just cease to be,” said Bouciegues, of stunt work. “It just definitely is going to get smaller and more precise.”

    But even that is a sobering reality for stunt performers who are currently standing on picket lines outside Hollywood studios.

    “Every stunt guy is the alpha male type, and everybody wants to say, ‘Oh, we’re good,’” said Bouciegues.

    “But I personally have spoken to a lot of people that are freaked out and nervous.”

  • Robots tell UN conference they can run the world better than humans with help of AI

    AI-powered humanoid robots stole the spotlight at a United Nations summit in Geneva, boldly claiming they could run the world more efficiently than humans. These robots, like Sophia from Hanson Robotics and Ameca with a lifelike artificial head, gathered at the AI for Good Global Summit, where around 3,000 experts aimed to figure out how AI could tackle big problems like climate change and social care.

    While the robots proudly touted their knack for crunching unbiased data, they also recognised that humans bring the emotional smarts and creativity needed for making smart decisions. The summit made history by hosting a news conference with a panel of AI-enabled humanoid social robots, a first-of-its-kind event.

    The UN’s ITU tech agency, which organised the summit, also highlighted the downsides of rushing into AI without caution. Job losses and social unrest are concerns, the agency warned. The robots had mixed views on whether there should be global rules for AI. Some urged careful discussions about rules, while others were all about embracing the potential without holding back.

    However, these robots, despite their impressive abilities, confessed that they can’t quite grasp human emotions yet. They admitted that human feelings, like joy and pain, are a mystery to them. Although they understand that emotions matter, they made it clear that they can’t really share those feelings.

    This conference shone a light on the exciting possibilities and tough challenges of AI’s growth. It started conversations about using AI in ways that make sense and don’t cause harm to our society. As AI keeps getting smarter, these humanoid robots remind us that we need to be smart about how we use it in our world.

  • AI-generated virtual influencer ‘Milla Sofia’ takes social media by storm, blurring lines of reality

    The world of social media has been captivated by the virtual influencer Milla Sofia, a 19-year-old blonde sensation with nearly 100,000 followers on TikTok. Unveiled as an artificial intelligence creation, Sofia’s photorealistic images and engaging content have left netizens in awe and bewilderment.

    With her first posts on Instagram and TikTok dating back to November 2022, Milla Sofia has quickly risen to prominence as a fusion of cutting-edge technology and elegance. The mastermind behind the AI-driven influencer is not shying away from the truth, openly acknowledging that she is an AI-generated entity.

    Despite this, her enigmatic allure has attracted a dedicated fan base, and it remains unclear how many of her followers fully comprehend her virtual nature.

    Sofia’s online persona portrays her as a fashion model and tech enthusiast, often flaunting bikini pictures from exotic locations like Greece and Bora Bora. Her intriguing interactions with her followers include TikToks featuring herself alongside real-world personalities like Elon Musk, showcasing her office outfit, and even seeking advice on hashtag preferences.

    To discerning eyes, a giveaway sign of her AI origins lies in occasional imperfections, notably distorted fingers in her photos. Astonishingly, some followers genuinely engage with her questions, while others seem to believe they have a personal connection, expressing gratitude for receiving her “beautiful photos” as if she were a real person.

    As the lines between reality and artificial personas blur, questions arise about the impact of virtual influencers on social media culture and the extent to which audiences can distinguish fact from fiction in this new era of digital influence.

  • Swiss radio station lets Artificial Intelligence run the show for one day

    In a groundbreaking experiment, Swiss public radio station Couleur 3 introduced a one-day programming event that showcased the capabilities of Artificial Intelligence (AI). Over the course of thirteen hours, the station’s airwaves were controlled entirely by AI, featuring cloned voices of five real human presenters and music composed predominantly by computers, marking a world-first endeavor.

    With voices resembling well-known personalities, music filled with trendy dance beats and hip-hop syncopations, and contagious jokes and laughter, the AI-led broadcast aimed to blur the boundaries between human and machine. Regular reminders reiterated the AI’s control over the programming, emphasising its presence throughout the day.

    Despite concerns about the potential long-term economic, cultural, social, and political consequences of AI and generative AI tools like ChatGPT, Couleur 3 embraced the experiment as a means to confront and demystify AI. Antoine Multone, the station’s chief, defended the project as a valuable lesson on coexisting with AI, rather than fearing its inevitable integration into society.

    To achieve the lifelike voices of presenters, Couleur 3 collaborated with software company Respeecher, which has experience working with Hollywood studios. Training the AI to understand the station’s unique and offbeat vibe took three months of preparation. The tracks aired during the day were partly or entirely composed by AI, a notable feat in the world of radio.

    To ensure clarity between real and synthetic news, the AI-delivered top-of-the-hour news flashes presented futuristic scenarios set in the year 2070. This approach aimed to avoid confusion with current real-world news.

    Feedback from listeners flooded the station, with mixed reactions. While some found the experiment intriguing, many expressed a desire for the return of human presenters. The discussion around the experiment continued, with plans for an on-air discussion led by real people to address the audience’s perspectives.

    Ultimately, Couleur 3’s bold experiment showcased the potential of AI in the broadcasting realm while raising important questions about the future of human involvement in media and the need to understand and harness AI technology responsibly.

  • AI instructors to teach Harvard students next year

    Harvard University, one of America’s most prestigious and expensive colleges, is planning to introduce an AI-powered teaching assistant to instruct students in its popular introductory coding course.

    Professor David Malan, who oversees the course, explained that the use of AI in the syllabus aligns with the course’s history of incorporating new software. He stated that the introduction of a ChatGPT AI teacher is a natural progression in their teaching methods. The aim is to eventually provide students in the CS50 course with software-based tools that can support their learning individually, ensuring a 1:1 teacher-to-student ratio.

    Professor Malan mentioned that they are currently experimenting with both the GPT-3.5 and GPT-4 models, as reported by Harvard’s student newspaper, The Crimson. However, developers and software engineers outside the Ivy League have encountered difficulties integrating OpenAI’s GPT-4 into their workflows.

    Some have raised concerns about the algorithmic co-worker’s coding abilities, perceiving a decline in quality compared to earlier versions. The AI’s software skills have been described as inferior, exhibiting superficial responses and inadequate coding prompt answers.

    Considering the significant cost of a four-year degree from Harvard, estimated at around $334,000 for the 2022-23 academic year, students who are paying for their education will likely expect the CS50 staff’s experimentation with ChatGPT to be thoroughly refined by September.

    CS50 is highly regarded and widely accessed through Harvard’s online learning platform, edX, which was established in partnership with MIT in 2012. The universities sold edX to educational technology company 2U for $800 million in 2021, ensuring its operation as a public benefit entity that offers courses for free auditing.

    Professor Malan acknowledged that early iterations of AI programs like ChatGPT may occasionally underperform, but expressed his confidence in the AI teaching assistant’s ability to streamline tasks and reduce the time spent on assessing students’ code. This, in turn, would allow teaching fellows to focus on more meaningful, interpersonal interactions with their students, resembling an apprenticeship model.

    Reflecting on the purpose of education, Professor Malan emphasised the importance of critical thinking for students, urging them to exercise discernment when processing information, regardless of its source.

    In summary, Harvard University intends to leverage AI technology by introducing a ChatGPT-powered teaching assistant in its CS50 course. While challenges have been encountered with the latest GPT-4 model, Professor Malan and his team are committed to refining the AI’s performance.

    The goal is to enhance the learning experience for students and enable teaching fellows to allocate their time more effectively, fostering meaningful interactions. This development aligns with Harvard’s commitment to providing quality education through its online learning platform, edX, which remains accessible to a wide audience.

  • Dukaan CEO lays off 90% of his support staff in favour of AI chatbot

    Suumit Shah, founder and CEO of Bangalore-based e-commerce startup Dukaan, announced via his Twitter account that he has laid off 90% of his customer support staff in favour of using an AI chatbot. 

    The bot was built by one of the firm’s data scientists, and according to Shah was able to respond to initial queries instantly, compared to the average staff time of one minute and 44 seconds.

    In his tweet, Shah admitted that the layoffs were “tough, but necessary”, explaining that given the state of the economy, startups are prioritising “profitability”.  

    Customer Support has apparently been a long-time struggle for Dukaan. In a conversation with CNN, Shah said that the company had cut the cost of its customer support function by 85% after introducing AI technology. He reasoned that this part of the business had been problematic for some time, with delayed responses and limited availability of staff at critical times, among other issues.

    That’s what prompted Shah to come up with the idea of creating a personal AI assistant for Dukaan that would answer customer queries instantly, precisely, and from anywhere. Dukaan’s AI lead, Ojasvi Yadav, stepped up to the plate.

    According to Shah’s Twitter thread, just a day after launch, Dukaan’s AI chatbot ‘Lina’ had resolved 200 live chats and 1,400 support tickets. Lina’s success prompted the team to create Dukaan’s new product, ‘BOT9.ai’: an AI assistant that can learn the ins and outs of a business and answer customer queries instantly, 24/7.

    As Shah tweeted, “it’s less magical, sure, but at least it pays the bills!”

    Given the current wave of AI adoption and the widespread layoffs by tech giants, Shah’s decision has been met with much criticism. However, Shah continued to justify the layoffs by emphasising how AI technology can optimise the company’s operations.

    Moreover, Shah believes that allocating employees’ expertise to areas requiring critical thinking, while relegating routine tasks to AI-powered chatbots, improves efficiency while also allowing for a better allocation of human resources.

    Many Twitter users were enraged at the apparent pride in Shah’s tweets. One user tweeted, “You disrupted the lives of 90% of your support team & you’re celebrating it in public. You also likely destroyed your customer support (disprove with good CSAT for the bot) – all for a basic ChatGPT wrapper. This is a new low even for you.” 

    While the announcement may read as apathetic, it is not surprising that major companies are turning to AI to improve performance and efficiency in what are considered quite routine tasks.

    According to a report from outplacement firm Challenger, Gray & Christmas, which tracks layoffs across every industry, around 5 per cent of job cuts in May 2023 were directly related to artificial intelligence.

    Are you worried AI is going to replace you at work?

  • AI’s disruptive power hits tech industry: Job cuts and demand for AI experts

    The rise of artificial intelligence (AI) has sparked concerns about job displacement in the future. However, it is already having an impact in the tech industry, where employees once seemed secure in their positions. 

    A growing number of tech companies are attributing layoffs and reevaluations of new hires to AI advancements happening right in Silicon Valley.

    For example, Chegg, an education technology company, recently announced in a regulatory filing that it would be cutting 4 per cent of its workforce, around 80 employees. The reason given was to align the company with its AI strategy and create sustainable value for students and investors.

    IBM’s CEO, Arvind Krishna, stated in a May interview with Bloomberg that the company plans to pause hiring for roles that could potentially be replaced by AI in the future. However, in a subsequent interview with Barron’s, Krishna clarified that his comments were taken out of context, emphasising that AI will generate more jobs than it eliminates.

    In late April, Dropbox, a file-storage service, revealed that it would be reducing its workforce by approximately 16 per cent, or 500 employees, also citing AI as a factor. Outplacement firm Challenger, Gray & Christmas reported that in May alone, 3,900 individuals were laid off due to AI, marking the first time job cuts were specifically attributed to this factor. All of these layoffs occurred within the tech sector.

    These developments in Silicon Valley not only demonstrate its leadership in AI development but also provide insight into how businesses might adapt to these tools. Rather than rendering entire skill sets obsolete overnight, AI is currently compelling companies to redirect resources to maximize its potential. Consequently, workers with AI expertise are in high demand.

    Dropbox CEO Drew Houston, in a note announcing the job cuts, acknowledged that AI has captured people’s imagination and expanded the market for AI-powered products. He highlighted the need for a different skill set, particularly in AI and early-stage product development, for the company’s future growth.

    Dan Wang, a professor at Columbia Business School, believes AI will lead to organizational restructuring but does not foresee machines entirely replacing humans just yet. He suggests that AI enhances human work rather than replaces it. Wang argues that the real competition lies in human specialists who can effectively leverage AI tools.

    Overall, the influence of AI is already evident in the tech industry, prompting companies to adapt their strategies and prioritize workers with AI expertise, rather than causing immediate job obsolescence.

  • Google’s healthcare tech uses AI to predict heart disease with just an eye scan

    Google’s Artificial Intelligence (AI) diagnosis of diabetic retinopathy (a leading cause of blindness) has revealed things in retinal scans that “human beings didn’t know to look for”, according to CEO Sundar Pichai. The AI eye scans hold information with which Google can predict a person’s five-year risk of having a heart attack or a stroke.

    At last year’s Google I/O, CEO Sundar Pichai announced Google AI, a culmination of the company’s efforts to bring the benefits of AI to everyone. DermAssist, Google’s AI program that detects and provides diagnoses for skin conditions, will be available in the browser by the end of this year.

    Google had also been running field trials across hospitals in India, where Google used deep learning to help doctors diagnose diabetic retinopathy. Pichai says the field trials have been going very well, with AI offering expert diagnoses to places where trained doctors are scarce.

    As luck would have it, the very same eye scans that have helped successfully diagnose diabetic retinopathy also hold vital information that Google AI could use to predict the five-year risk of an individual having an adverse cardiovascular event.

    Although the idea of looking into someone’s eyes to diagnose the condition of their heart sounds unusual, it actually draws from established research. The rear interior wall of the eye (the fundus) is full of blood vessels that reflect the body’s overall health. Information such as someone’s age, their biological sex, whether or not they smoke, their BMI and systolic blood pressure is readily available to doctors through a simple eye scan.

    According to Pichai, this could form the basis of a new, non-invasive way of detecting cardiovascular risk. He says Google will be working with its partners on field trials.

    Another exciting AI-health development is the prediction of medical events. Machine learning can analyse over 100,000 data points per patient (far more than any one doctor could ever process) and quantitatively predict the chance of readmission 24-48 hours in advance. This is hugely beneficial, as it gives doctors more time to act.