Tag: LLM

  • Vaishnaw Claims India’s Local Foundational AI Platform to be Ready in Ten Months

    On February 4, Union Minister Ashwini Vaishnaw stated that a local foundational AI platform is anticipated in ten months, and that India may be able to develop its own high-end processing hardware, known as GPUs, in the next three to five years.

    Speaking at a Budget Roundtable 2025 hosted by two renowned media houses, Vaishnaw said that within the next few days the government will make a computing facility built on 18,000 top-tier GPUs available to entities across the country for AI development, and that he expects India’s own AI platform within ten months.

    According to Vaishnaw, the Ministry is considering three different approaches, each of which would involve building the country’s own GPU using a reasonably priced chipset that is either open source or licensed. This strategy has been adopted across the world and should enable the nation to have its own GPU within three to five years.

    Surge in GPU Demand

    GPUs (graphics processing units) were traditionally used for computation-heavy multimedia workloads such as gaming and video processing. However, the global demand for AI has caused a sharp increase in demand for GPUs, and the US chip manufacturer Nvidia controls more than 80% of the market. According to the minister, Indian Railways increased its confirmed-ticket rate by 27% by leveraging AI models, and a number of start-ups have grown effectively, albeit modestly in comparison to ChatGPT.

    According to him, developing artificial intelligence models requires high-end computing equipment that only well-funded players can afford. However, the government has put in place a system that allows everyone to access computing infrastructure at a reasonable cost. Vaishnaw added that researchers, academicians, companies, institutions, and IITs can all access this computing capability and use it to build foundational models.

    When asked whether India would have its own foundational model for AI, Vaishnaw responded, “10 months is the outer limit.” According to him, numerous research papers discuss the mathematical techniques that, for instance, the Chinese artificial intelligence company DeepSeek has employed to make the entire process far more efficient.

    Make in India Churning Out Jobs for Indians

    According to Vaishnaw, 12 lakh direct and indirect jobs have been created by the government’s mobile manufacturing push under the Make in India initiative. To demonstrate the degree of quality and precision attained by Indian electronics manufacturers, he displayed a metallic object made up of numerous precisely joined pieces with no visible seams.

    He claimed that it took three years for a well-known Indian business to reach the high degree of accuracy needed by a vendor to supply parts for the production of Apple and Samsung’s high-end smartphones.

    India now produces a number of goods and parts needed in the mobile phone sector, such as chargers, battery packs, various mechanical parts, USB cables, keypads, display assemblies, camera modules, lithium-ion batteries, speakers and microphones, vibrator motors, and more, according to the minister.


    India Develops Its Own LLM to Tackle AI Challenges
    India is developing its own large language model (LLM) to strengthen its AI capabilities, ensuring technological independence and competitiveness in the global AI landscape.


  • Amid the DeepSeek Frenzy, Meta Plans to Invest “Hundreds of Billions of Dollars” in AI

    Mark Zuckerberg, the CEO of Meta, isn’t overly concerned about DeepSeek’s ascent, even though the Chinese AI lab’s rapid rise has shocked Wall Street and Silicon Valley. In fact, Zuckerberg stated on January 29 that Meta’s open-source strategy, which is built around the large language model (LLM) Llama, has “strengthened our conviction that this is the right thing for us to be focused on.”

    “There’s a number of novel things that they did that we’re still digesting… a number of advances that we will hope to implement in our systems, and that’s part of the nature of how this works,” Zuckerberg stated on the company’s earnings conference call. Every new firm that launches, whether or not it is a Chinese competitor, will have some new innovations that the rest of the industry may learn from, according to the head of Meta.

    DeepSeek’s Gain Causing Tremors Among Established Players

    With its claims of creating a model that can compete with top-tier models from American companies like OpenAI, Meta, and Google at a fraction of the cost, DeepSeek has rattled Wall Street over the past week, sending AI-related equities tumbling. Investors were alarmed because tech companies have been spending billions of dollars developing their AI models and products.

    Zuckerberg stated during the earnings call that he continues to think that making significant investments in infrastructure and capital expenditures will eventually provide a competitive edge. “It’s probably too early to have a strong opinion on what this means for the trajectory around infrastructure and capex,” he stated.

    Meta’s Plan to Outrun its Competitors

    According to Zuckerberg, Meta plans to spend “hundreds of billions of dollars” on AI infrastructure in the long run. He declared last week that Meta will increase its AI efforts by investing between $60 billion and $65 billion in 2025. According to him, a large portion of the compute infrastructure will probably shift from pre-training to building strong “reasoning” models and superior products that serve billions of users.

    Because you can “apply more compute at inference time in order to generate a higher level of intelligence and a higher quality of service,” Zuckerberg stated that this “doesn’t mean you need less compute.”
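
    Zuckerberg’s point about inference-time compute is the same idea behind techniques such as best-of-N sampling: spend more model calls per query and keep the best-scoring answer, trading extra compute for higher answer quality. The sketch below is a generic toy illustration of that trade-off, not Meta’s implementation; `generate_candidate` and `score` are hypothetical stand-ins for a model call and a quality estimate.

    ```python
    import random

    # Toy illustration of "more compute at inference time -> higher quality":
    # best-of-N sampling. Both helpers are hypothetical stand-ins, not a real model API.

    def generate_candidate(prompt: str, rng: random.Random) -> str:
        """Pretend model call: returns a candidate answer of varying quality."""
        quality = rng.random()
        return f"answer(q={quality:.2f})"

    def score(candidate: str) -> float:
        """Pretend quality estimate, parsed back out of the candidate string."""
        return float(candidate.split("q=")[1].rstrip(")"))

    def best_of_n(prompt: str, n: int, seed: int = 0) -> str:
        """Spend n model calls on one query and keep the highest-scoring answer."""
        rng = random.Random(seed)
        candidates = [generate_candidate(prompt, rng) for _ in range(n)]
        return max(candidates, key=score)

    if __name__ == "__main__":
        # More samples (more inference compute) tends to yield a better answer.
        for n in (1, 4, 16):
            print(n, best_of_n("example prompt", n))
    ```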

    “As a company that has a strong business model to support this, I think that’s generally an advantage that we’re now going to be able to provide a higher quality of service than others who don’t necessarily have the business model to support it on a sustainable basis,” he stated.

    Launch of Llama 4 in the Upcoming Months

    In the upcoming months, Meta intends to release Llama 4 with native multimodal and agentic capabilities. “Llama 4’s training is going really well. Pre-training for Llama 4 mini is complete, and both our reasoning models and the larger model appear to be doing well,” Zuckerberg stated.

    “With Llama 3, we wanted to make open source competitive with closed models, and with Llama 4, we want to lead,” he continued. Zuckerberg said that it may become feasible in 2025 to create an AI engineering bot with coding and problem-solving skills comparable to those of a competent mid-level engineer.


    DeepSeek to Operate on Indian Servers, Says Union Minister
    Union Minister confirms DeepSeek will soon run on Indian servers, addressing privacy concerns and enhancing data security for Indian users.


  • India Prepares for the AI Challenge by Developing its Own LLM Foundational Model

    IT Minister Ashwini Vaishnaw announced on January 30 that the Indian government has chosen to develop its own large language model domestically as part of the INR 10,370 crore IndiaAI Mission, just days after Chinese artificial intelligence (AI) startup DeepSeek unveiled its low-cost foundational model.

    Additionally, the government has selected ten businesses to provide 18,693 graphics processing units (GPUs), the high-end processors required to build the machine learning tools used to create a foundational model. These firms include CMS Computers, Ctrls Datacenters, Locuz Enterprise Solutions, NxtGen Datacenter, Orient Technologies, Jio Platforms, Tata Communications, Yotta, which is backed by the Hiranandani Group, and Vensysco Technologies. Yotta has promised to supply 9,216 GPUs, nearly half of the total.

    According to Vaishnaw, ministry teams have been collaborating closely with professors, researchers, startups, and others for the past one and a half years. The government is currently soliciting proposals for creating India’s own foundational model. The model is intended to be free of biases and to take into account the Indian context, languages, and culture.

    Sharing his views on the subject, Giridhar LV, CEO and Co-founder of Nuvepro Technologies, opined, “India’s AI journey is no longer about catching up—it’s about leading with innovation. With the rapid advancements in AI, India is stepping up by developing its own Large Language Model (LLM)—a foundational AI model tailored to the country’s diverse linguistic and industry needs. Unlike generic global models, this initiative aims to create an AI framework deeply rooted in India’s unique datasets, regional languages, and cultural nuances. By investing in homegrown AI capabilities, India is ensuring data sovereignty, reduced dependency on foreign models, and AI solutions that align with local industries. The push towards indigenous AI development also aligns with the government’s Digital India and Atmanirbhar Bharat initiatives, fostering self-reliance in technology.”

    Focus on Developing the Foundational Model

    In addition, Vaishnaw stated that the government is in contact with at least six developers to build the foundational model, which may take four to eight months. “In the coming months, we will have a world-class foundational model,” the minister stated. However, he did not name the companies the government is in contact with, nor did he give the estimated cost of building the model. On access to computing power, Vaishnaw said that approximately 10,000 of the 18,693 GPUs authorised for empanelment are ready for installation today.

    In a few days, the government will open a shared computing facility through which researchers and businesses can access compute. Higher-end GPU access will cost INR 150 per hour, while lower-end GPU use will cost INR 115.85 per hour. The government will provide end customers with a 40% price subsidy to further ease access to these services.
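
    As a rough illustration of what those rates imply, assuming the 40% subsidy is applied directly to the listed hourly prices (the article does not spell out the exact mechanism), the effective cost per GPU-hour works out as in the sketch below.

    ```python
    # Back-of-the-envelope estimate of subsidised GPU-hour costs.
    # Assumption (not stated in the article): the 40% subsidy applies
    # directly to the listed hourly rates.

    LISTED_RATES_INR = {
        "higher-end GPU": 150.00,   # INR per GPU-hour, as quoted
        "lower-end GPU": 115.85,    # INR per GPU-hour, as quoted
    }
    SUBSIDY = 0.40  # 40% price subsidy for end customers

    for tier, rate in LISTED_RATES_INR.items():
        effective = rate * (1 - SUBSIDY)
        print(f"{tier}: listed INR {rate:.2f}/hr -> ~INR {effective:.2f}/hr after subsidy")

    # Prints roughly:
    #   higher-end GPU: listed INR 150.00/hr -> ~INR 90.00/hr after subsidy
    #   lower-end GPU: listed INR 115.85/hr -> ~INR 69.51/hr after subsidy
    ```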

    Proposal from IndiaAI Mission

    The IndiaAI Mission’s plan states that the bids for building LLMs will be shortlisted based on a number of criteria, including the approach’s innovativeness, scalability and sustainability, financial viability, and ethical considerations, among others.

    Additionally, the Centre will choose candidates based on the teams’ abilities, the viability of the submissions, and their potential impact. According to a blog post on the IndiaAI website, a panel of experts will analyse the submitted proposals before inviting the chosen candidates for a detailed presentation. Startups aiming to build a domestically developed LLM are also expected to have access to this AI compute. Although it is still unclear how the prospective foundational AI model will take shape, optimism rests on the resourcefulness, inventiveness, and talent pool of India’s startup ecosystem.


    MeitY Seeks Ideas for India’s AI Foundation Paradigm
    MeitY invites proposals to create India’s own AI foundation paradigm, encouraging innovative ideas for a robust and future-ready artificial intelligence framework.