Tag: Apple AI

  • Apple Exec Jumps Ship to Meta Amid Intensifying AI Talent War

    According to Bloomberg News on 15 October, citing sources with knowledge of the situation, Apple’s Ke Yang, the recently appointed executive spearheading an initiative to create an AI-driven online search experience similar to ChatGPT, is leaving to join Meta. Yang was named leader of the Answers, Knowledge and Information (AKI) team only a few weeks ago; the group sits at the heart of the Siri voice assistant redesign planned for March. Yang’s LinkedIn page states that he has been with Apple since 2019.

    Meta Continues its Poaching Spree

    Meta has escalated Silicon Valley’s talent battle, hiring aggressively to keep pace with rivals like OpenAI, Google, and Anthropic as tech companies pour money into AI in the race to superintelligence. Bloomberg News previously reported that Robby Walker and Ruoming Pang were among the top AI executives the Mark Zuckerberg-led company had already lured away from the iPhone maker.


    Quick Shots

    •Apple exec Ke Yang leaves to join Meta, intensifying the AI talent war in Silicon Valley.

    •Yang was recently appointed leader of Apple’s AKI team, heading AI-driven search and the Siri redesign.

    •Meta continues aggressive poaching of top AI talent from Apple, competing with OpenAI, Google, and Anthropic.

  • Apple Shelves Vision Air Project, Shifts Focus to Next-Gen Smart Glasses to Rival Meta

    After dominating the high-end smartphone market, Apple is now attempting to compete with Meta on a different front. Bloomberg’s Mark Gurman reports that the company is prepared to abandon its plans for the Apple Vision Air and instead focus on bringing smart glasses to consumers.

    With their AR/VR capabilities, the glasses are meant to serve as a more portable device that stands out from the competition. Early rumours suggest Apple will release two versions simultaneously: one with a display and one without. Here is a quick look at everything currently known about Apple’s upcoming smart glasses.

    Apple Smart Glasses to Have Two Variants

    The Apple smart glasses will reportedly come in two versions: one with a display and one without. The no-display model will pair with an iPhone for seamless operation, and rumours suggest it will include voice controls, a camera, microphones, and artificial-intelligence capabilities.

    This variant is likely to deliver a Siri experience that feels supercharged. The device is also expected to offer health-monitoring capabilities, giving end users a highly personalised experience, and at launch the glasses should come in a variety of colours and styles.

    Other Loaded Features of Apple Smart Glasses

    The display model should provide a richer experience, most likely via an in-frame or in-lens display that can show augmented visuals, notifications, and other essential information. Leaks also indicate that when the device is linked to a Mac, it will support full visionOS and offer multiple switching modes.

    As for the release schedule, Apple’s no-display smart glasses are anticipated in the second half of 2026, while the display-integrated model might be ready by 2027. Little else is known about either device for now; more details will likely surface through leaks and rumours ahead of an official announcement.

    The most important question is whether Apple can finally deliver on AI-driven features; with so much at stake, it has little choice but to take on that challenge. Apple may still opt to produce the Vision Air eventually, but for now its decision to concentrate on smart glasses makes sense. Meta has helped pave the way here, and not only through the success of its own smart glasses: its mixed-reality headsets have struggled even while costing significantly less than Apple’s Vision Pro.

    Quick Shots

    •Apple has reportedly shelved its Vision Air project to focus on developing new Smart Glasses.

    •The Smart Glasses will come in two variants — one with a display and one without a display.

    •The no-display version will link with the iPhone and feature voice, camera, AI, and Siri integration.

    •The display model may support visionOS, augmented visuals, and Mac connectivity.

  • Ex-iPhone Designer Poaches Apple Talent to Strengthen OpenAI Team

    As it gets ready to release its first consumer hardware products by late 2026, OpenAI is actively hiring Apple staff and collaborating with the iPhone maker’s major suppliers, according to a report from The Information.

    In 2025 alone, at least 25 former Apple workers—including senior members of the engineering, manufacturing, and design teams—have joined OpenAI. The developer of ChatGPT has also approached Goertek, which manufactures parts for Apple Watches, HomePods, and AirPods, and landed a manufacturing deal with Luxshare, Apple’s largest iPhone and AirPods assembler.

    Tang Tan Leads OpenAI’s Hiring Spree

    The hiring push is spearheaded by Tang Tan, OpenAI’s chief hardware officer and a 25-year Apple veteran who worked on several generations of the iPhone, iPad, and Apple Watch. Tan previously helped turn Jony Ive’s designs into mass-produced Apple products.

    According to various media reports, Tan has assured new hires that OpenAI will have “less bureaucracy and more collaboration” than Apple’s hierarchical system. The talent exodus accelerated after OpenAI paid $6.5 billion in May 2025 to acquire io Products, the company co-founded by Ive and Tan, a deal that immediately positioned OpenAI as a serious hardware design player. Notable hires include Matt Theobald, a 17-year veteran of manufacturing design, and Cyrus Daniel Irani from Apple’s human interface design team.

    OpenAI to Roll New Products as it Hires More Skilled Staff

    According to reports, OpenAI’s first product resembles a screenless smart speaker, but the company is also investigating wearable pins, digital voice recorders, and eyewear. CEO Sam Altman has detailed plans for a “family of devices” that would be screen-free, contextually aware, and pocket-sized, with an ambitious goal of 100 million units.

    The poaching campaign has shaken Apple, which cancelled its annual China meeting in August over worries that executives might defect to OpenAI while travelling overseas. The situation makes for an awkward relationship: Apple currently licenses OpenAI’s models for Siri and iOS features even as it loses key employees to its AI partner.

    OpenAI’s approach mirrors Apple’s own: hire outstanding designers, work with high-end vendors, and build tightly integrated hardware-software experiences. Whether this strategy will succeed in the emerging AI hardware industry is unclear, but the company is plainly placing a significant wager on upending Apple’s dominance in consumer electronics.

    Quick Shots

    •In 2025 alone, 25+ former Apple employees (engineering, design, manufacturing) joined OpenAI.

    •Tang Tan, ex-Apple veteran and now OpenAI’s Chief Hardware Officer, leads the hiring spree.

    •Tan worked on multiple iPhone, iPad, and Apple Watch designs during his 25 years at Apple.

    •OpenAI also secured deals with Luxshare (Apple’s largest assembler) and Goertek (Apple Watch, HomePod, AirPods supplier).

    •Strategy accelerated after OpenAI’s $6.5B acquisition of io Products in May 2025, co-founded by Jony Ive & Tang Tan.

  • Apple Developing AI-Powered Web Search for Siri to Compete with OpenAI and Perplexity

    Apple Inc. intends to take the fight to OpenAI and Perplexity AI Inc. by releasing its own AI-powered web search service next year. According to a report by Bloomberg, the company is developing a new system that will be incorporated into the Siri voice assistant.

    This technology is internally referred to as World Knowledge Answers. Apple has also talked of ultimately integrating the technology into Spotlight, which is used to search from the iPhone home screen, and its Safari web browser. The article further reported that Apple plans to launch the service, which some executives have referred to as an “answer engine”, in the spring as part of a long-delayed update to Siri.

    World Knowledge Answers: How It Works

    Similar to ChatGPT, AI Overviews in Google Search, and a plethora of new apps, the goal is to make Siri and Apple’s operating systems a place where consumers can search the internet for information. Large language models, or LLMs, a crucial piece of technology supporting generative AI, will be the foundation of the strategy.

    Alphabet Inc.’s Google, Apple’s longstanding internet search partner, may contribute some of the underlying technology that makes the new Siri possible. According to the report, the businesses formally agreed this week for Apple to assess and test an AI model created by Google to support the voice assistant.

    Apple’s new search experience would feature an interface combining text, images, videos, and nearby points of interest. It would also include an AI-powered summarisation mechanism to make results easier to understand and more accurate than the present Siri can manage.

    Current Phase of Siri

    Today’s Siri can respond to simple queries and offer information about famous people, events, films, and sports, among other things. However, it struggles with more complicated enquiries and general knowledge searches, frequently deferring to ChatGPT or Google for results.

    The voice assistant, which was revolutionary when it was first introduced in 2011, has come to symbolise Apple’s artificial intelligence shortcomings. In order to better answer questions, the digital assistant will be able to access personal information and on-screen content as part of the long-awaited Siri update. Additionally, it will include improved voice navigation capabilities for consumers’ devices.

    How Apple Plans to Compete with OpenAI and Perplexity

    Apple is now pushing to get the update out. The AI search capability is built on a technological overhaul of Siri known internally as Linwood and LLM Siri. The teams working on the search effort include the Siri group, directed by Craig Federighi, Apple’s head of software engineering; the AI division, led by John Giannandrea; and Apple’s services business, overseen by Eddy Cue. Under Federighi, Mike Rockwell, the man behind the Vision Pro headset, is leading the effort, while under Giannandrea, Robby Walker, a former Siri executive, is a major force behind the initiative.

    Quick Shots

    •Expected rollout in spring next year as part of a long-delayed Siri update.

    •Project internally called “World Knowledge Answers”.

    •Planned for Siri, Spotlight, and Safari for a unified search experience.

    •Apple to test a Google AI model to enhance Siri’s performance.

  • Apple Reportedly Courting Perplexity AI in Strategic AI Bid

    According to reports, Apple is thinking about putting in a bid to acquire Perplexity, an artificial intelligence (AI) startup.

    The Cupertino-based tech conglomerate has reportedly talked internally about the potential move to support its internal development of AI models and services.

    The possible deal coincides with multiple delays in the company’s development of key features, such as the AI-powered Siri, which still has no anticipated release date.

    If Apple does decide to buy Perplexity, it would be the most expensive acquisition the company has ever made.

    Multiple Rounds of Discussion Between Apple and Perplexity AI

    Apple’s head of mergers and acquisitions, Adrian Perica, has led discussions with senior officials, including Services SVP Eddy Cue and other key decision-makers in charge of Apple’s AI strategy, according to a media outlet.

    The business has met with Perplexity several times in recent months to discuss the possibility of a deal, but it has not yet made a formal offer and may decide not to proceed.

    Perplexity AI, a rapidly expanding firm founded by Aravind Srinivas, combines conversational AI with real-time online search, perhaps serving as the basis for an Apple AI search engine.

    Apple is weighing two options: either a strategic partnership that incorporates Perplexity’s AI capabilities into Safari and other Apple services or a full acquisition that would incorporate Perplexity’s technology and talent into Siri and iOS.

    Apple Wants to Acquire More AI Talent

    Apple’s interest in Perplexity is part of its active pursuit of AI expertise. According to reports, the company is vying with Meta for industry leaders including Daniel Gross, co-founder of Safe Superintelligence Inc. Apple has also postponed the release of its next-generation Siri due to development issues.

    During Google’s antitrust trial, Eddy Cue recently testified that Apple had discussed integrating Perplexity with Safari. The testimony also shed light on the multibillion-dollar deal between Apple and Google that keeps Google Search the default on iPhones.

    If regulators force a separation, that arrangement, valued at $18 billion in 2021 alone, could be in jeopardy. In that case, buying Perplexity could give Apple a backup plan and enable it to develop its own artificial intelligence (AI) search engine.

    These behind-the-scenes initiatives indicate Apple is actively looking for ways to catch up with competitors like Google, Microsoft, and Meta in the rapidly changing AI landscape, even if the company’s WWDC 2025 keynote was rather silent on the subject.

    Perplexity recently closed a financing round at a reported $14 billion valuation. Any transaction approaching that magnitude would constitute Apple’s most significant acquisition to date.

    Although Apple has recently made billion-dollar transactions for Intel Corp.’s modem unit and a stake in Chinese ride-sharing company DiDi, the $3 billion acquisition of Beats in 2014 remains the company’s largest transaction to date.

  • Apple Unveils Dazzling ‘Liquid Glass’ UI in iOS 26 Developer Beta

    Hours after revealing its new software at the Worldwide Developers Conference (WWDC) 2025 in California, Apple has made the iOS 26 developer beta update available.

    The largest change is ‘Liquid Glass’, a new UI design language Apple has included in the developer beta so that app developers can update their apps before the software’s final release. As with any beta software, there may be glitches, and the system may feel unstable, particularly in early builds.

    For that reason, various tech experts advise against installing beta software on a daily-use device; most users should wait for the official launch to try it on their Apple devices.

    How Liquid Glass Works

    Liquid Glass, Apple’s latest interface design language, is translucent and behaves like real glass, intelligently adjusting its hue to light and dark conditions based on the material around it.

    According to Apple’s statement on 10 June, the material covers the small elements users interact with, such as buttons, switches, sliders, and text, as well as the tab bars and sidebars used for app navigation.

    Notably, the updated design is compatible with iOS 26, iPadOS 26, macOS Tahoe 26, watchOS 26, and tvOS 26 for the first time.
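    For developers, adopting the new material in SwiftUI is reportedly a matter of applying a single modifier to a view. The snippet below is a minimal, hypothetical sketch assuming the iOS 26 SDK’s `glassEffect` modifier; it is an illustration, not code from Apple’s announcement.

    ```swift
    import SwiftUI

    // Minimal sketch: a label rendered with the Liquid Glass material.
    // Assumes the iOS 26 SDK, where SwiftUI exposes a `glassEffect` modifier.
    struct GlassBadge: View {
        var body: some View {
            Text("Hello, Liquid Glass")
                .padding()
                // Applies the translucent material that adapts to the
                // content and lighting behind the view.
                .glassEffect()
        }
    }
    ```

    Because the material is a view modifier rather than a new control type, existing layouts can pick up the new look without structural changes.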

    Refining iPhone Experience Through Apple Intelligence

    Apple Intelligence improves the iPhone experience, making it easier for users to complete tasks and opening up new ways to interact with what is on screen.

    Messages, FaceTime, and Phone all have live translation built in to facilitate multilingual communication by instantly translating text and audio. Apple-built models that operate fully on the device enable live translation, ensuring that customers’ private chats remain private.

    Visual intelligence expands on Apple Intelligence by enabling users to search and interact with everything they see across apps on their iPhone screen.

    Users can search Google, Etsy, or other compatible apps to find related images and products, or they can ask ChatGPT questions about what they’re seeing onscreen to find out more.

    Additionally, visual intelligence can recognise when a user is viewing an event and suggest adding it to their calendar, pre-filling key details such as the date, time, and location.

    Genmoji and Image Playground give users even more ways to express themselves, such as combining their favourite emoji, Genmoji, and text descriptions to create original content. Shortcuts, meanwhile, are now smarter and more powerful than before.

    In addition to seeing specific actions for features like Writing Tools and Image Playground, users may access intelligent actions, a whole new set of shortcuts made possible by Apple Intelligence.

    Apple Intelligence can also automatically recognise and compile order-tracking information from emails sent by delivery carriers and merchants, so users can view their complete order details and progress notifications in one place, even for transactions made outside Apple Pay.

    Furthermore, a new Foundation Models framework lets any app directly access the on-device foundation model at the heart of Apple Intelligence, giving developers powerful intelligence that is fast, built with privacy at its core, and available offline, with no cost for AI inference.
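    To illustrate what that access looks like in practice, here is a minimal, hypothetical Swift sketch assuming the iOS 26 SDK’s `FoundationModels` framework and its `LanguageModelSession` API; treat the exact names and signatures as assumptions rather than confirmed details.

    ```swift
    import FoundationModels

    // Minimal sketch: ask the on-device foundation model for a response.
    // Assumes iOS 26's FoundationModels framework; availability is checked
    // first, since the model requires Apple Intelligence-capable hardware.
    func suggestTitle(for note: String) async throws -> String {
        let model = SystemLanguageModel.default
        guard case .available = model.availability else {
            return "Untitled" // fall back when the model is unavailable
        }
        let session = LanguageModelSession(
            instructions: "You suggest short, descriptive titles for notes."
        )
        let response = try await session.respond(to: note)
        return response.content
    }
    ```

    Because inference runs entirely on the device, a call like this would work offline and would not send the note’s contents to a server.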