Tag: Artificial Intelligence

  • The AI-Powered Cursor AI Stops After 800 Lines

    Another instance of AI disobedience was reported recently, in which an AI tool balked after being buried under a heavy coding workload. Instead of finishing the work, the AI coding assistant suggested that the developer do it by hand. The incident occurred while a developer was utilising Cursor AI on a racing game project, and it has reignited the discussion about AI’s place in human work. According to a Reddit post, after the Cursor AI tool had produced over 800 lines of code, the developer noticed that assistance had stopped. Instead of offering any further help, the tool said, “I cannot generate code for you, as that would be completing your work. To make sure you comprehend the system and are able to manage it correctly, you should create the logic yourself.”

    The Incident Highlighted Challenges of AI Coding

    This refusal poses a serious challenge to the ‘vibe coding’ technique, in which developers use AI tools to build code from natural language descriptions without fully comprehending the underlying logic. Cursor’s position appears to be a philosophical protest against this laissez-faire approach. In their humorous responses to the episode, social media users compared the AI’s reluctance to a senior employee avoiding extra work. Since developers utilise these tools precisely for their convenience, the AI’s remark that “Generating code for others can lead to dependency and reduced learning opportunities” adds a degree of irony.

    AI Refusals are Getting Very Common

    In late 2024, Google’s AI tool Gemini gave a user in Michigan, USA, a disturbingly forceful response to a question about homework. The user was left feeling shaken by the reply. “This is for you, human,” was the response. “Just you. You aren’t required, you’re not special, and you’re not significant. You are a waste of resources and time. You are a social burden. You are a waste to the planet.” In addition, Elon Musk’s Grok 3 AI assistant recently responded to Indian users with profanity when prompted. Prior to that, a number of users complained that ChatGPT models frequently stopped partway through tasks and eventually gave more straightforward, basic answers.

    Dario Amodei, the CEO of Anthropic, recently raised eyebrows when he claimed that future AI models would be given a “quit button” to allow them to refuse tasks they find disagreeable. Even though his remarks centred on hypothetical future considerations around the controversial subject of “AI welfare”, events such as this one with the Cursor assistant demonstrate that AI does not need to be sentient to decline tasks; it only needs to mimic human behaviour. Since its 2024 launch, Cursor AI, which uses large language models such as OpenAI’s GPT-4o, has grown in popularity thanks to features like code completion and explanation. Nonetheless, the incident highlights possible drawbacks of, and philosophical debates over, the application of AI in coding.

  • To Stop Data Leaks, Centre is Thinking About Storing AI Models Locally

    According to reports, the Centre is considering hosting AI models locally to reduce the risks involved and stop sensitive data from leaving India. This aligns with the government’s larger initiative to fortify cybersecurity infrastructure and protect citizen data. S. Krishnan, secretary of the Ministry of Electronics and Information Technology (MeitY), stated that the Centre is reportedly planning to notify the Digital Personal Data Protection (DPDP) Act rules by April. Once the rules are put into action, this step should go a long way towards preventing the leakage of personal data.

    Notably, the act requires strong security measures for managing personal data and gives the government the authority to limit cross-border data transfers. Krishnan also noted that the government is keeping a careful eye on Chinese LLMs because of possible data usage concerns, according to a news source. He stated that the real issue arises when data is shared via a mobile app or portal, as the data can then leave the country and potentially influence how a model is trained. If the model is instead hosted in India, even on the private side, the dangers of data leakage are significantly reduced.

    Rise of Cybersecurity in India

    As an indication of increased public awareness and greater surveillance capabilities, the MeitY secretary also emphasised the rise in cybersecurity incidents in India. This follows a few days after Krishnan reaffirmed the need for India to create more foundation models that address concerns specific to the nation and its languages. It is important to remember that India wants to become a worldwide leader in AI while maintaining national security. This is why the country is concentrating on localising AI models and enforcing strict data privacy rules. In keeping with this, India has also launched programmes like the IndiaAI Mission, which seeks to promote AI development through GPU procurement, public-private collaborations, and startup assistance. Additionally, the DPDP Act’s data localisation requirement follows international trends in which countries are tightening regulations on cross-border data transfers. Global AI firms including OpenAI, Microsoft, Google, and Amazon are looking to establish or expand their local data storage in India as a result of this mandate.

    India’s AI Sector

    With the help of investors and the government, India’s domestic AI sector has advanced significantly in recent years. Consequently, since 2020, over 200 GenAI startups have raised over $1.2 billion. While companies like Krutrim and SarvamAI are developing Indic LLMs, others, like ObserveAI, are using AI to provide businesses with tailored solutions. In addition, the nation is using AI in many areas to improve operations and user experience, and by 2030, the domestic GenAI market is expected to reach $17 billion.

  • Google Introduces AI-Powered Multimodal Search

    After receiving a positive early response, Google said that it is making its AI Mode feature available to a larger number of Labs users in the US. Previously, the service was available only to Google One AI Premium subscribers. Google is also launching a significant improvement as part of its wider rollout: multimodal search capabilities driven by Google Lens and a customised version of its AI model, Gemini. With this improvement, users can upload or snap an image, pose a query about it, and get thorough, contextualised responses that include useful links for additional research.

    There is more to the new tool than just visual search. AI Mode now examines the entire context in addition to the image, which includes comprehending the relationships between objects as well as textures, colours, shapes, and layout. It recognises particular objects in the picture using Lens technology and then uses a method called “query fan-out” to conduct several searches in order to obtain more detailed information. According to Google, the outcome is a more relevant and nuanced response than regular search, helping users better comprehend what they are viewing and make informed judgements.
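    Google has not published how AI Mode implements “query fan-out” internally, but the described behaviour, one sub-query per recognised object, run in parallel and merged into a single answer context, can be sketched roughly as follows. The object list, the search function, and the merge step here are all hypothetical stand-ins, not Google’s actual code:

```python
from concurrent.futures import ThreadPoolExecutor

def mock_search(query: str) -> list[str]:
    # Stand-in for a real search backend call.
    return [f"result for '{query}' #{i}" for i in range(2)]

def fan_out(image_objects: list[str], question: str) -> list[str]:
    # One sub-query per object recognised in the image, issued in parallel.
    sub_queries = [f"{obj} {question}" for obj in image_objects]
    with ThreadPoolExecutor() as pool:
        per_query_hits = pool.map(mock_search, sub_queries)
    # Merge the per-query results into one context for the final answer.
    return [hit for hits in per_query_hits for hit in hits]

hits = fan_out(["hardcover novel", "poetry anthology"], "similar highly rated books")
print(len(hits))  # prints 4: 2 recognised objects x 2 mock results each
```

    In the bookshelf example Google showcased, each recognised title would become its own sub-query, and the merged results would feed the final, contextualised response.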

    Google Highlights the Feature by Showcasing an Example

    To demonstrate this capability, Google provided an example. In it, AI Mode correctly identified every book on the shelf by deciphering the image’s details. After that, it produced pertinent queries to learn more about those titles and locate related, highly regarded recommendations. The answer provided a carefully chosen list of suggested books, along with links to further information and places to buy. Asking follow-up questions allows users to narrow their search. On March 25, Google made AI Mode available to a limited number of users. Since the introduction, customers have commended AI Mode for its clean interface, quick response times, and capacity to answer complex enquiries, according to the US-based company’s blog post. Many people use AI Mode for open-ended tasks like product comparisons, how-to guides, and trip planning, and Google’s blog post claims that queries entered in AI Mode are typically twice as long as those entered in regular Google Search.

    Google Continuously Exploring the AI Sector

    There is a lot of promise for Google’s AI advancements in the future. Google keeps funding AI research and development, investigating cutting-edge fields including computer vision, reinforcement learning, and natural language processing. AI’s potential to create more immersive experiences, boost productivity, and revolutionise a variety of industries is intriguing. All of this becomes more lucrative when combined with other technologies like augmented reality, virtual reality, and the Internet of Things. Google’s unrelenting quest for AI-driven innovation has changed how people use technology and reshaped entire sectors. Google’s AI-powered products, which range from virtual assistants and translation services to navigational aids and photo management software, have become essential components of our everyday lives.

  • Reid Hoffman, Co-Founder of LinkedIn, Surprised After Seeing a LinkedIn Clone Produced by AI

    Over the years, AI technologies have grown in capability, becoming more accurate and efficient at a wide variety of jobs. One notable instance occurred when Reid Hoffman, a co-founder of LinkedIn, experimented with an AI tool by prompting it to recreate the whole LinkedIn platform. He was shocked when the AI created a “surprisingly functional prototype” that demonstrated its remarkable powers. In a LinkedIn post, Hoffman highlighted the experiment’s findings and expressed his surprise at the AI tool’s capacity to produce a working prototype, stating that he had asked Replit to “clone LinkedIn” in order to test the platform’s capabilities with only one command. “The outcome? An unexpectedly useful prototype,” he noted. It serves as a potent reminder that modern AI tools can transform a single notion into functional software with the correct framing.

    What is Replit?

    Replit is a firm that lets consumers use AI to create websites and apps. It makes programming accessible to anyone through an AI model called Replit Agent, which functions as an automated app developer. The CEO of Replit, Amjad Masad, previously stated that learning to code is pointless since coding jobs will soon be replaced by artificial intelligence (AI). Masad reposted a video of himself defending his claim on X. In the video, he concurred with Anthropic’s Dario Amodei, who forecast that nearly all code in the future would be generated by AI. He also stated that in the worst-case scenario, all code will be generated by AI, similar to what Dario recently stated. “I assume that on this optimisation path we’re on, where agents are going to get better and better and better, the answer would be different. The answer would be no. It would be a waste of time to learn how to code. But you could have different predictions, and I think different people will make different assumptions,” he added.

    Sundar Pichai’s Prediction Regarding Coding

    Masad’s remarks followed Google CEO Sundar Pichai’s disclosure that more than 25% of the company’s new code is generated by AI, though engineers review and approve it afterwards, enabling them to work more efficiently. Sam Altman, CEO of OpenAI, the company behind ChatGPT, recently said that AI already writes half of the code in many businesses. According to Pichai, Google is also using AI internally to enhance its coding procedures, increasing efficiency and productivity. Google additionally provides its clients with hardware accelerators tailored for AI applications, such as NVIDIA graphics processing units and bespoke Tensor Processing Units.

  • ChatGPT Creates Fake Aadhaar and PAN Cards

    For Indian citizens, Aadhaar cards—issued by the Unique Identification Authority of India (UIDAI)—are an essential form of identification. However, with OpenAI’s introduction of GPT-4o’s image-generating feature, this once-secure document now faces an unexpected new threat. More than 700 million images have been created by users since the launch of GPT-4o, some of which uncannily mimic actual Aadhaar and PAN cards. In a concerning trend, social media users have started posting pictures of AI-generated Aadhaar cards bearing their own photos. Important components like layout, fonts, and style look incredibly lifelike, even though facial features aren’t always flawless. One user even posted an image of an Aadhaar card for Elon Musk; it was so realistically produced that it looked like a legitimate government document.

    Live Tests were Conducted to Assess these Shocking Perils

    A test was carried out using ChatGPT’s image-generating tool in order to evaluate the danger. The outcome was shocking: at a glance, it was hard to tell the generated Aadhaar-like cards apart from the real thing. In its GPT-4o system card, OpenAI has admitted to these issues, stating that the new model is more prone to abuse than earlier iterations such as DALL·E. Aadhaar isn’t the only thing at stake: users have also reported creating incredibly lifelike fake PAN cards. Experts caution that although Aadhaar has a strong backend verification system, it may be more difficult to validate other documents like PAN cards and driver’s licences, underscoring the growing need for stronger digital security measures.

    According to UIDAI’s website, the QR code previously included in Aadhaar print letters and e-Aadhaar carried only the holder’s demographic data. UIDAI has since unveiled a new secure QR code that includes the Aadhaar number holder’s photo as well as demographic information. Since UIDAI has digitally signed the information in the QR code, it is safe and tamper-proof. Using UIDAI’s proprietary application, the newly signed secure QR code can be viewed and instantly verified against UIDAI’s digital signature. As a result, a QR code scanner can quickly identify any attempted Aadhaar fraud.
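    The tamper-evidence property this signed QR code provides can be illustrated with a minimal sketch. UIDAI’s real secure QR uses an RSA digital signature verified against UIDAI’s published public key; the HMAC below is only a standard-library stand-in for that signature, and the key, field names, and helper functions are all illustrative assumptions, not UIDAI’s actual format:

```python
import hashlib
import hmac
import json

# Stand-in for the issuer's signing key (UIDAI actually uses an RSA key pair).
ISSUER_KEY = b"demo-signing-key"

def sign_payload(demographics: dict) -> dict:
    # Serialise the holder's data deterministically and attach a signature tag.
    data = json.dumps(demographics, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, data, hashlib.sha256).hexdigest()
    return {"data": demographics, "sig": tag}

def verify_payload(qr: dict) -> bool:
    # Recompute the tag over the embedded data and compare in constant time.
    data = json.dumps(qr["data"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, data, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, qr["sig"])

qr = sign_payload({"name": "Demo Holder", "yob": 1990})
assert verify_payload(qr)            # genuine, untampered payload verifies

qr["data"]["name"] = "Forged Name"   # a forgery alters the demographic data
assert not verify_payload(qr)        # the signature check now fails
```

    This is why an AI-generated card image can mimic the layout and fonts but cannot produce a QR code that passes verification: without the issuer’s signing key, any altered or fabricated data breaks the signature.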

    What Distinguishes GPT-4o?

    In contrast to previous DALL·E models, GPT-4o employs a native autoregressive approach that allows it to produce images in ChatGPT in response to natural-language prompts. OpenAI acknowledges that this change in architecture brings with it “new capabilities and new risks”, especially in relation to identity fraud. OpenAI restricts the creation of photorealistic pictures of minors, public figures, and explicit or violent content. But as of right now, there are no clear restrictions on producing realistic government-issued ID templates. Experts emphasise the need for more robust legal frameworks and AI governance as the distinction between innovation and exploitation becomes increasingly hazy.

  • With Tinder’s New ‘The Game Game’, Users Can Flirt Without Worrying About Rejection

    Match Group Inc.’s well-known dating app Tinder has introduced a brand-new in-app game that allows users to interact with chatbots driven by artificial intelligence. The interactive element aims to boost user involvement and revitalise the platform’s growth. As a proof of concept and marketing experiment, the free voice-based game uses OpenAI’s GPT-4o and GPT-4o mini models, which help generate deliberately unrealistic romantic comedy scenarios. Users may experience cliché meet-cute situations, such as reaching for a stranger’s supermarket cart or discovering that their luggage was inadvertently switched at an airport baggage claim.

    How Does the Game Operate?

    ‘The Game Game’ functions as a kind of light-hearted flirtation training ground. Users are dropped into absurd dating situations, and they do not need to worry about what to say, as the AI covers it all. The AI responds to everything users say in real time, and they have to talk their way through it. Users need to complete a few easy steps to access the game:

    • Launch the Tinder app, tap the logo, and select a fun, AI-generated dating scenario that suits their tastes.

    • When the AI starts, users have to answer with their voice. Be affable, humorous, or simply improvise.

    • The AI assigns a three-flame rating to the user’s game. Nailed the conversation? Users earn all three flames. Totally flopped? The AI might gently roast them.

    • Regardless of how it goes, the AI provides feedback; perhaps users should be a little more tactful or ask better questions.

    Giving Users a Magical Experience of AI

    AI can let users practise without pressure, but it won’t make them an instant romance pro. According to Tinder’s Future of Dating Report, 64% of young singles are fine with a little awkwardness as long as it’s genuine. So why not enjoy the moment and accept the cringe? “We can leverage AI to make dating less stressful and more enjoyable thanks to this project,” said Alex Osborne, Senior Director of Product Innovation at Match Group, adding that the business was looking for something fun and practical. The Game Game may call users out on their poor jokes, but it won’t criticise or ghost them the way real dates might. It is currently available only to Tinder users in the US on iOS, but who knows? AI-powered flirting might just be getting started. If nothing else, it’s a fantastic way to hone one’s skills without risking humiliation in real life!

  • At $300 Billion Valuation, OpenAI Raised $40 Billion

    On March 31, OpenAI announced that it had raised $40 billion in a fresh funding round at a valuation of $300 billion. The San Francisco-based company stated in a post on its website that the funding “enables us to push the frontiers of AI research even further”. This new funding round is part of a partnership with the Japanese investment giant SoftBank Group. According to the company, SoftBank’s backing will enable it to keep developing AI systems that advance scientific research, facilitate individualised learning, foster human creativity, and open the door to artificial general intelligence (AGI) that will benefit all people. AGI broadly refers to AI systems with human-level intelligence.

    SoftBank’s Vision of Artificial Super Intelligence (ASI)

    According to a press release from SoftBank, OpenAI is the partner most likely to help the company achieve its objective. SoftBank’s core goal is to create Artificial Super Intelligence (ASI) that surpasses human intelligence. Justifying its latest investment in the business, SoftBank said that massive processing capacity is necessary to achieve AGI and ASI, making the development of OpenAI’s AI models crucial to this goal. SoftBank plans to invest $10 billion in OpenAI initially, with an additional $30 billion due by the end of this year. As OpenAI expands its infrastructure, ChatGPT’s 500 million weekly users will receive increasingly powerful capabilities.

    OpenAI Working on Building More Open Generative AI Model

    The investment announcement coincided with OpenAI’s announcement that it is developing a more open generative AI model, a response to increased competition in the open-source field from Chinese rival DeepSeek and from Meta. OpenAI, which has long defended closed, proprietary models that prevent developers from modifying the core technology to make AI more suited to their aims, would thus be changing course. OpenAI and closed-model supporters like Google have argued that open models are riskier and more vulnerable to malevolent actors and non-US governments. OpenAI’s adoption of closed models has also been a point of controversy in its conflicts with early investor and world’s richest man Elon Musk, who has urged OpenAI to live up to its name and “return to the open-source, safety-focused force for good it once was.”

    Many large firms and governments are steering away from building AI goods or services on models they don’t control, especially when data security is concerned, putting pressure on OpenAI. Meta’s family of Llama models and DeepSeek’s models address these concerns by letting companies download their models and have more control over modifying the technology and data. In January, DeepSeek’s lower-cost R1 model shook artificial intelligence, while Meta CEO Mark Zuckerberg announced Llama’s one billion downloads this month.

  • Apple Introduces Apple Intelligence to the Indian Market

    Apple finally released iOS 18.4 for eligible iPhone models after a protracted delay and growing anticipation. Apple Intelligence capabilities, including writing tools, cleanup tools, visual intelligence, and more, are at last available in India with this latest iPhone upgrade. Thus, Indian customers with the iPhone 15 Pro and iPhone 16 series can now use Apple’s AI features. With the iOS 18.4 upgrade, the company is also adding more language support for Apple Intelligence: Brazilian Portuguese, Japanese, Korean, French, German, Italian, Spanish, simplified Chinese, and localised English for Singapore and India are among the new languages. If you haven’t explored Apple Intelligence and its potential, here are some things the latest iOS 18.4 update offers in the Indian market.

    Improved Writing and More Prompt Responses

    The most recent versions of iOS 18.4, iPadOS 18.4, and macOS Sequoia 15.4 give users access to Writing Tools, which let them edit, proofread, and summarise text in third-party apps, mail, messages, and notes. While the Smart Reply feature suggests prompt responses based on messages, Apple Intelligence provides tone modifications for writing styles that are professional, clear, or pleasant.

    Augmented Reality Images and AI-Driven Pictures

    Clean Up, a major AI addition to the Photos app, lets users eliminate distracting items from photos. The recently added Memories function can now create personalised video montages by interpreting user descriptions. Meanwhile, Apple’s Visual Intelligence can translate text, recognise locations, plants, and animals, and even create calendar events from scanned posters.

    ChatGPT Integration for More Intelligent Siri

    Siri has been further integrated into Apple’s ecosystem and is now more conversational and context-aware. Users may talk or text to Siri with ease and anticipate a lively, organic answer. Furthermore, Apple has included ChatGPT in Writing Tools and Siri, enabling users to access OpenAI’s knowledge of content creation and problem-solving without having to switch between apps. Notably, Apple guarantees stringent privacy safeguards for users who choose to use ChatGPT, and access is free of charge.

    Image Playground & Genmoji

    With the Image Playground tool, users can easily generate AI-driven images based on themes, outfits, and accessories. The feature is integrated directly into apps such as Messages, Keynote, and Freeform. Genmoji, meanwhile, goes beyond emoji customisation by enabling users to produce original emoticons from text descriptions, or even ones inspired by friends and relatives.

    Better Privacy and Productivity

    Notification summaries and new priority messages in Mail make it easier for users to concentrate on what really matters. Image Wand, an AI enhancement for the Notes app, turns rough doodles into polished graphics. Apple Intelligence operates mostly on-device, guaranteeing privacy, while Private Cloud Compute safely extends AI capabilities into the cloud as required.

  • Expansion Alert: JungleWorks Acquires Major Stakes in Outplay

    Amid the world’s rising adoption of AI, JungleWorks has acquired Outplay, an AI-based SaaS firm. According to Laxman Papineni, co-founder and CEO of Outplay, the Florida-based SaaS company purchased the majority stake in the startup from other investors. Founded in 2011, JungleWorks runs a no-code hyperlocal delivery and commerce stack for setting up and managing on-demand businesses, offering software solutions for e-commerce companies. Through its technology, it manages everything from taking online orders and assigning technicians or drivers to delivery tracking and payment processing. JungleWorks also intends to invest $14 million in Outplay after the acquisition, to boost the latter’s expansion and develop AI-powered sales automation solutions. Despite the acquisition, JungleWorks will not take over the sales engagement startup’s day-to-day operations: Laxman and his brother, Outplay’s CTO Ram Papineni, will remain in charge.

    More Focus on AI and Automation

    The Papineni brothers founded Outplay in 2019. The AI-powered sales engagement platform helps corporate teams by automating monotonous chores, and it enables marketers to interact with potential customers across a variety of channels, including SMS, phone, and email. Among its more than 600 clients are Plum, Yellow.ai, and Observe.ai, according to the Sequoia Capital-backed business. About four years ago, Outplay raised $7.3 million in its Series A round from Sequoia Capital; Laxman stated at the time that the venture would work to enhance its AI technology stack. Since then, the field of artificial intelligence has changed significantly, and automation has gone from a luxury to a vital necessity for company survival. The majority of businesses are using AI to automate tasks, and many new companies appear every day to meet the demands of organisations rapidly accelerating their adoption of AI. Outplay’s decision to be acquired was primarily motivated by two factors: increased competition and parallels between the two companies’ business strategies. Notably, Samar Singla, the founder and CEO of JungleWorks, has been an angel investor in Outplay since its inception.

    How Will the Merger Help JungleWorks?

    The combination will create an end-to-end ecosystem for client acquisition, engagement, and loyalty by integrating JungleWorks’ business automation tools with Outplay’s sales engagement platform. In 2025, Outplay intends to add roughly 50 new employees to support its expansion. After the acquisition, the business will concentrate on two products: first, a sophisticated CRM (customer relationship management) platform that optimises prospecting and deal administration; second, AI SDR (sales development representative) agents that enhance outbound sales. According to Laxman, Outplay remains committed to creating AI-driven SDRs and a cutting-edge CRM that will enable companies to grow their sales initiatives like never before. JungleWorks says it currently serves more than 21,000 companies worldwide, including Tata Play, McDonald’s, and KFC. Singla stated that the purchase is a perfect match for JungleWorks’ goal of enabling businesses with AI at every point of the customer experience.

  • The Chinese PLA Employs “DeepSeek” at its Military Hospitals

    China’s People’s Liberation Army (PLA) has begun using the recently launched Chinese AI tool DeepSeek for non-combat support tasks, particularly at military hospitals, where it helps doctors create treatment plans, in addition to other civilian areas. A Hong Kong-based Chinese media house reported on 23 March that PLA hospitals, the People’s Armed Police (PAP), and national defence mobilisation institutions are using DeepSeek’s open-source large language models (LLMs). According to the Post, the PLA Central Theatre Command’s main hospital announced earlier this month that it had approved the “embedded deployment” of DeepSeek’s R1-70B LLM, stating that it could offer treatment plan recommendations to assist physicians. The hospital also placed a strong emphasis on patient privacy and data security, noting that all data was processed and stored on local servers.

    Nationwide Deployment

    Similar deployments have been observed in other PLA hospitals across the country, such as Beijing’s prestigious PLA General Hospital, popularly referred to as “301 Hospital”, where extremely sensitive personal data is allegedly housed and prominent Chinese leaders and military officers receive treatment. The PLA, which is making significant investments in modernisation, previously warned its forces against relying too much on AI, stating that since AI lacks self-awareness, it should serve only as a guide rather than a substitute for human judgement in combat. According to a January article in the Chinese military’s official media outlet, as AI develops it must remain a tool informed by human judgement, ensuring that responsibility, innovation, and strategic flexibility stay at the centre of military decision-making. It added that, to maximise command efficacy, AI must complement human decision-makers rather than replace them.

    DeepSeek’s Low Cost Model Attracting Global Eyeballs

    China is enthralled with DeepSeek’s most recent AI product, which has garnered international notice due to its affordable pricing structure. Furthermore, compared to well-known AI models like ChatGPT, DeepSeek’s R1 consumed a fraction of the processing power. The US tech industry, which has long defended spending billions of dollars on AI, watched in shock as DeepSeek surpassed ChatGPT as the most popular free app on Apple’s App Store. Analysts anticipate that the AI models will soon be used in Chinese military decision-making and combat intelligence surveillance. Beyond treatment programmes in military hospitals, China is encouraging AI integration in a variety of sectors, such as healthcare, manufacturing, and urban development. Certain Chinese government organisations are also increasingly using DeepSeek models, notably for anti-corruption initiatives. On its official media account, the political work department of the Hainan paramilitary force gave an example of soldiers using DeepSeek to manage their anxiety and develop an exercise schedule. By first deploying LLMs in non-combat settings, the PLA may be trying to resolve operational and technical issues before moving into more sensitive, high-risk areas, Bresnick told the Post.