Tag: Artificial Intelligence

  • Vayu, from Tata Communications, Transforms the Cloud Fabric for the Intelligent Enterprise

    In an effort to help businesses manage their IT infrastructure, Tata Communications announced the launch of Tata Communications Vayu, a new cloud platform. The platform aims to lower expenses, streamline cloud usage, and meet the growing need for AI-powered solutions. The cloud fabric integrates infrastructure, software platforms, artificial intelligence tools, security, and connectivity into a unified system. The company claims that this integration lowers operating costs and spares companies the hassle of managing several vendors. With no extra fees for data transfers, the platform promises cost savings of up to 30% compared with conventional cloud alternatives. According to A.S. Lakshminarayanan, Managing Director and CEO of Tata Communications, enterprise cloud and AI solutions that strike a balance between cost, performance, and sustainability are more important than ever as the digital era accelerates. Tata Communications Vayu is more than just a product; it will pave the way for companies to innovate, integrate, and simplify their operations.

    Other Features of Vayu

    Features like on-demand access to high-performance computing resources, which are crucial for AI development, are included in the platform. Additionally, it offers resources for training, optimising, and deploying AI models. The software also provides automation solutions to reduce manual intervention and optimise workflows. Tata Communications Vayu represents a revolutionary change, according to Bhaskar Gorti, Executive Vice President of Cloud and Cybersecurity Services at Tata Communications. “Our most recent product marks the start of a new era in cloud innovation, one in which technology no longer acts as a barrier but rather as an enabler of seemingly endless possibilities. Today’s enterprises want more than just basic cloud services,” he said. The platform is made to help companies in a variety of industries, such as retail, financial services, and government. It guarantees that data can be easily accessed across various environments, including cloud servers, edge devices, and on-premises systems, all while preserving security and adhering to regulations such as the Digital Personal Data Protection (DPDP) Rules 2025. Tata Communications also emphasised plans to adopt cutting-edge cooling technology and energy-efficient data centres to reduce its environmental impact.

    What Makes Vayu Unique?

    With serverless computing, auto-scaling, and managed databases, Tata Communications Vayu’s PaaS services streamline application deployment. Its AI/ML platform simplifies model training and deployment, while integrated DevOps tools, microservices, and API management automate workflows, allowing businesses to develop quickly without worrying about infrastructure administration. It provides a completely integrated cloud environment that spans security, storage, computation, AI, and cloud connectivity. To balance public, private, and on-premises deployments, Tata Communications has worked with businesses to create custom cloud plans for industries including retail, financial services, and government. The company ensures that cloud infrastructure satisfies long-term growth goals and industry-specific requirements.

  • SAP Introduces Joule, AI Agent that Facilitates Autonomous Data Input

    By automating the input of client data into the cloud, SAP’s newest AI-powered agent, Joule, is poised to revolutionise company operations. Joule enables AI agents to cooperate like human teams for the best outcomes, and it is designed to work smoothly across departments such as finance and human resources. SAP will finish developing Korean language support for Joule by the end of March 2025 and will formally debut the service for Korean customers in April. This year, SAP aims to use Joule to increase operational efficiency by 30–40%. These updates were shared by SAP CEO Christian Klein at the March 20 “Business Unleashed” news conference held at Seoul’s The Shilla Hotel. According to Klein, SAP has more than 20 years of experience in the field and a wealth of high-quality data, and with 800 million users worldwide, it has a clear understanding of the types of AI its clients want.

    Other Features of Joule and SAP’s Expansion of Operations

    Klein also emphasised how AI can cut down on work hours by automating tasks that developers previously had to complete manually, noting that data-driven, high-performance AI can drastically reduce labour hours by replacing those manual operations. Using Joule across domains such as supply chain management, finance, and human resources enables effective resource allocation and lower staff turnover rates. By April, SAP also intends to grow its domestic data centre operations. The business built its first data centre in Korea in 2021 and is investing further to strengthen its cloud infrastructure. Klein underlined that the company will keep investing in its data centres, with the objective of continuously growing the infrastructure to widen the range of choices available to clients that want to use cloud services.

    About SAP

    SAP, the world leader in business software, was founded in Germany in 1972 and provides supply chain management (SCM), customer relationship management (CRM), and enterprise resource planning (ERP) solutions. With more than 26,000 clients globally, the company collaborates with leading cloud providers like Google, Microsoft, and Amazon Web Services (AWS). SAP already works with LG Electronics and other significant companies in South Korea. SAP and Databricks signed a $650 million contract last month to create a cloud data analytics service. “We intend to aggressively pursue companies that have not yet made the move to the cloud,” Klein said. With the launch of Joule, ongoing investments in data centres, and new cloud products, SAP is actively working to strengthen its position in the Korean and international markets.

  • Exabeam Expands Its Cloud-Native Presence in the UK

    With the launch of its New-Scale Security Operations Platform in the UK, Exabeam, the industry leader in intelligence and automation for security operations, has made the UK the tenth region in which its cloud-native security information and event management (SIEM) solution satisfies in-country data residency requirements. With this expansion, built on Google Cloud, the New-Scale Platform is now available in the UK alongside Germany and Switzerland, giving businesses access to best-in-class analytics, security intelligence, and automation to stay ahead of changing threats. With more than ten years of experience incorporating machine learning (ML) and artificial intelligence (AI) into security systems, Exabeam quickly produces tangible outcomes that surpass industry hype.

    What Does the New-Scale Platform Offer?

    The New-Scale Platform, which is based on Google Cloud’s safe, scalable infrastructure, gives security teams the tools they need to streamline security operations and speed up response times through automated investigations, behavioural analytics, and AI-driven threat identification. Exabeam prioritises client experience, automated efficiency, and AI-powered security operations that have a noticeable impact, in contrast to many vendors who overburden the market with marketing language. Extending the Exabeam cloud footprint into the UK, according to UKI Vice President Kev Eley, provides security teams with the local data residency they want without compromising automation, performance, or scale. AI-driven security intelligence is now fully available to UK organisations, enabling them to improve compliance, expedite investigations, and identify threats more quickly.

    The Expansion Aligns with the UK’s AI Opportunities Action Plan

    The New-Scale Platform’s expansion directly supports the UK’s AI Opportunities Action Plan, which highlights cybersecurity as a key component of AI adoption across industries. Securing AI-driven innovation is crucial as the UK government cultivates an AI-powered ecosystem that is propelling improvements in manufacturing, healthcare, education, financial services, and government operations. By facilitating quicker threat detection, investigation, and response (TDIR), Exabeam enables UK businesses and public sector organisations to safeguard sensitive data, lower risk, and uphold compliance while using AI to boost productivity, efficiency, and service delivery.

    Ian Thomas, CEO of Sapphire, shared his thoughts on the launch, saying that traditional security systems aren’t keeping up with the unprecedented rate at which cyberthreats are growing. By providing AI-powered automation, the Exabeam New-Scale Platform transforms the game and lets security teams avoid playing catch-up with attackers. For organisations that require quicker and more intelligent threat detection, the fact that it is now accessible in the UK is a major plus.

  • To Improve AI Training, NVIDIA Releases the Open-Source Isaac GR00T N1

    NVIDIA Isaac GR00T N1, the first open, fully customisable foundation model for generalised humanoid reasoning and skills, is part of a portfolio of technologies introduced by NVIDIA on 18 March to accelerate the creation of humanoid robots. Other technologies include open-source physics engines like Newton, which is being developed alongside Google DeepMind and Disney Research specifically for robot development, and simulation frameworks and blueprints like the NVIDIA Isaac GR00T Blueprint for creating synthetic data. GR00T N1, which is available now, is the first in a series of fully customisable models that NVIDIA will pretrain and make available to robotics developers globally. This will hasten the transformation of industries facing a labour shortage that is predicted to affect over 50 million people worldwide. NVIDIA CEO Jensen Huang announced in his keynote speech at GTC 2025 that the era of generalist robotics has arrived. With NVIDIA Isaac GR00T N1 and new frameworks for data collection and robot learning, robotics developers worldwide will be able to explore the next frontier in the AI era.

    GR00T N1 Promotes the Humanoid Developer Community

    The dual-system architecture of the GR00T N1 foundation model is based on ideas from human cognition. “System 1” is a quick-thinking action model that mimics human intuition or reactions. “System 2” is a meticulous, slow-thinking approach to decision-making: it uses a vision language model to plan activities by reasoning about its surroundings and the commands it has been given. System 1 then translates these plans into precise, continuous robot motions. System 1 is trained on both human demonstration data and a vast amount of synthetic data produced by the NVIDIA Omniverse™ platform. In addition to performing multistep tasks that call for extensive context and combinations of broad skills, GR00T N1 can readily generalise across typical tasks, including gripping, manipulating objects with one or both arms, and transferring goods from one arm to another. These capabilities apply to a variety of use cases, including inspection, packing, and material handling. Developers and researchers can post-train GR00T N1 using synthetic or real data for their particular humanoid robot or task.
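
    The passage above describes the dual-system split only at a conceptual level. The following minimal Python sketch illustrates that idea and nothing more: a slow, deliberate “System 2” planner that reasons over an image and an instruction, and a fast “System 1” policy that turns each plan step into continuous motions. All class and function names here are hypothetical illustrations, not NVIDIA’s GR00T N1 API, and both components are stubbed so the example runs on its own.

    ```python
    # Hypothetical dual-system control loop, loosely mirroring the System 2
    # (slow, deliberate planner) / System 1 (fast action model) split described
    # above. Names and return values are illustrative, not NVIDIA's GR00T N1 API.
    from dataclasses import dataclass
    from typing import Any, List


    @dataclass
    class PlanStep:
        action: str   # e.g. "grasp", "transfer", "place"
        target: str   # object or location the step refers to


    class System2Planner:
        """Slow-thinking planner: reasons about the scene and the instruction."""

        def plan(self, image: Any, instruction: str) -> List[PlanStep]:
            # A real implementation would query a vision language model here;
            # this stub returns a fixed plan so the sketch runs end to end.
            return [PlanStep("grasp", "cup"),
                    PlanStep("transfer", "left_arm"),
                    PlanStep("place", "shelf")]


    class System1Controller:
        """Fast-thinking action model: turns a plan step into continuous motions."""

        def execute(self, step: PlanStep) -> List[List[float]]:
            # A real policy would emit joint-space or end-effector trajectories;
            # here we return a dummy three-waypoint trajectory per step.
            return [[0.0, 0.1 * i, 0.2 * i] for i in range(3)]


    def control_loop(image: Any, instruction: str) -> None:
        planner, controller = System2Planner(), System1Controller()
        for step in planner.plan(image, instruction):      # System 2: deliberate
            trajectory = controller.execute(step)          # System 1: reactive
            print(f"{step.action} -> {step.target}: {len(trajectory)} waypoints")


    if __name__ == "__main__":
        control_loop(image=None, instruction="put the cup on the shelf")
    ```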

    Other Smart Features of GR00T N1

    During his GTC keynote address, Huang showed 1X’s humanoid robot autonomously completing household cleaning tasks using a post-trained policy based on GR00T N1. The robot’s autonomous capabilities are the outcome of the AI training partnership between 1X and NVIDIA. Bernt Børnich, CEO of 1X Technologies, said that learning and flexibility are key to the future of humanoids, and that NVIDIA’s GR00T N1 significantly improves robot reasoning and skills as 1X builds its own models. The company deployed it fully on NEO Gamma with very little post-training data, furthering its goal of building robots that are companions rather than merely tools, able to help people in significant, incalculable ways. Other top humanoid developers around the world, including Boston Dynamics, Mentee Robotics, NEURA Robotics, and Agility Robotics, have early access to GR00T N1.

  • Monetizing AI: Business Models That Work (And Those That Don’t)

    This article has been contributed by Nida Sahar, CEO, Nife.io.

    In 2017, a small AI startup had what seemed like an unbeatable product: a powerful deep-learning model that could generate human-like text. Investors were excited, the demo was impressive, and the hype was real. But there was a major challenge—they had no clear monetization strategy. 

    After months of experimentation, they launched a subscription product, but user churn was high. They then attempted to license their model to enterprises, but the sales cycle was too slow. Finally, they pivoted to an API model, which allowed them to scale revenue quickly. 

    That startup was OpenAI. 

    Their journey reflects a challenge every AI company faces: Having powerful technology is not enough—without the right business model, even the most advanced AI products will fail. 

    Three Proven AI Monetization Strategies 

    AI startups typically choose among three primary monetization strategies, each with advantages and challenges. 

    1. Subscription-Based AI: The SaaS Model 

    A common approach to AI applications is the subscription-based SaaS model. Whether it’s an AI-powered tool for marketing, automation, or data analysis, many companies opt to charge users a monthly or annual subscription. 

    When Subscription Works Best 

    • The product provides continuous value (e.g., Grammarly enhances writing daily).
    • A freemium-to-premium conversion strategy is effective (Jasper AI monetized free users by offering premium marketing features). 
    • The target audience consists of individuals or small businesses that are unlikely to commit to enterprise contracts. 

    Challenges of Subscription AI 

    • High churn rates: If users do not perceive ongoing value, retention becomes difficult (a back-of-the-envelope sketch follows this list).
    • Customer acquisition costs: Scaling an AI SaaS product requires significant investment in marketing and customer education.
    • Compute costs: AI-driven SaaS products often have higher infrastructure costs than traditional SaaS, which can eat into margins. 
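
    The interplay of churn, margins after compute costs, and acquisition spend is ultimately arithmetic. The sketch below, using entirely hypothetical numbers, shows how the same subscription price can be healthy or uneconomical depending on monthly churn.

    ```python
    # Back-of-the-envelope subscription economics with hypothetical numbers only.
    def lifetime_value(arpu: float, gross_margin: float, monthly_churn: float) -> float:
        """Expected gross profit per customer: ARPU * margin / monthly churn."""
        return arpu * gross_margin / monthly_churn


    def ltv_to_cac(arpu: float, gross_margin: float,
                   monthly_churn: float, cac: float) -> float:
        return lifetime_value(arpu, gross_margin, monthly_churn) / cac


    # Hypothetical AI SaaS: $30/month, 60% margin after compute costs, $120 CAC.
    for churn in (0.03, 0.10):  # 3% vs 10% monthly churn
        ratio = ltv_to_cac(arpu=30, gross_margin=0.60, monthly_churn=churn, cac=120)
        print(f"monthly churn {churn:.0%}: LTV/CAC = {ratio:.1f}")
    # 3% churn -> LTV/CAC = 5.0 (healthy); 10% churn -> 1.5 (acquisition barely pays back).
    ```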

    Case Study: Grammarly & Jasper AI 

    Grammarly successfully leveraged the freemium model, allowing users to test the product before upgrading to paid plans. Jasper AI found a profitable niche in marketing, charging users based on AI-generated content.

    Both succeeded because they solved a specific, recurring pain point rather than simply offering an interesting AI feature. 

    2. API-Based AI: The Developer-Focused Approach 

    Instead of building an end-user application, some AI startups monetize by offering API access to their models, allowing developers and businesses to integrate AI into their own products. 

    When APIs Work Best 

    • Developers need off-the-shelf AI without the cost of in-house development.
    • The AI model is expensive to build but relatively cheap to scale (Deepgram optimized infrastructure to make speech recognition cost-effective). 
    • Enterprises prefer usage-based pricing over fixed subscriptions. 

    Challenges of API Monetization 

    • Cloud costs can escalate if you don’t carefully structure pricing; usage growth can lead to unsustainable infrastructure costs. 
    • Commoditization risk: Open-source alternatives and competitors can drive prices down, leading to a race to the bottom.

    Case Study: OpenAI & Deepgram 

    OpenAI successfully implemented pay-per-use pricing, which allowed it to generate significant revenue while maintaining accessibility. Deepgram positioned itself as a cost-effective alternative to major cloud providers by optimizing infrastructure and pricing aggressively. 

    APIs are scalable, but success depends on controlling costs and maintaining differentiation in a competitive market. 

    3. Enterprise AI: Selling to Large Organizations 

    Enterprise AI focuses on selling AI solutions directly to businesses, often through customized deployments or large-scale integrations. This model is common in industries like finance, healthcare, cybersecurity, and government contracting.

    When Enterprise AI Works Best 

    • The AI product solves a critical business problem that organizations cannot solve internally (e.g., Palantir provides intelligence solutions to government agencies).
    • Customers require custom AI models that are not easily replaced by off-the-shelf solutions. 
    • The company has the resources to withstand long sales cycles (often 12+ months).

    Challenges of Enterprise AI 

    • Sales cycles are long: Closing enterprise deals can take a year or more, creating cash flow challenges for early-stage startups. 
    • Customer acquisition requires direct sales efforts, which can be expensive and complex. 
    • Procurement processes are slow and bureaucratic, making it difficult to scale quickly.

    Case Study: Palantir & C3.ai 

    Palantir built a successful AI business by securing large government contracts before expanding into the private sector. C3.ai focused on industry-specific AI applications, such as energy and supply chain optimization, allowing them to differentiate from general-purpose AI platforms. 

    Enterprise AI can be highly profitable, but it requires significant capital, patience, and a strong sales team.


    What Doesn’t Work? 

    While some AI business models have proven successful, others have consistently failed.

    1. “Cool Tech Without a Business Model” 

    Many AI startups focus too much on research and product development without a clear go-to-market strategy. Having a high-performing model is not enough; it needs to be packaged, priced, and distributed effectively.

    2. “Subscription Models with High Churn” 

    If users do not see continuous value from an AI product, they will cancel their subscriptions quickly. AI tools that are used sporadically or fail to integrate into users’ workflows often struggle to retain customers. 

    3. “APIs Without Pricing Control”

    APIs can be profitable, but only if usage-based pricing accounts for infrastructure costs. If an API model scales usage without sufficient margins, the company can end up losing money as it grows. 
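
    To make the failure mode concrete, the sketch below (all prices and costs are hypothetical) checks whether a usage-based price covers the per-token infrastructure cost plus fixed costs. The same traffic is profitable at one price and loss-making at another, and the loss only grows with usage.

    ```python
    # Margin check for usage-based API pricing; all figures are hypothetical.
    def monthly_margin(requests: int, tokens_per_request: int,
                       price_per_1k_tokens: float, infra_cost_per_1k_tokens: float,
                       fixed_costs: float) -> float:
        tokens = requests * tokens_per_request
        revenue = tokens / 1000 * price_per_1k_tokens
        variable_cost = tokens / 1000 * infra_cost_per_1k_tokens
        return revenue - variable_cost - fixed_costs


    # Same traffic, two pricing decisions: one above unit cost, one below it.
    for price in (0.0030, 0.0008):  # $ billed to customers per 1K tokens
        margin = monthly_margin(requests=50_000_000, tokens_per_request=800,
                                price_per_1k_tokens=price,
                                infra_cost_per_1k_tokens=0.0010,  # inference cost
                                fixed_costs=40_000)
        print(f"price ${price}/1K tokens -> monthly margin ${margin:,.0f}")
    # Above unit cost the volume covers fixed costs; below it, every extra token
    # served widens the loss as usage grows.
    ```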

    4. “Enterprise AI Without Sufficient Funding” 

    Many startups attempt to sell AI to enterprises without realizing how long and expensive the process is. Without strong financial backing, these companies often run out of capital before closing enough deals to sustain operations. 

    Pricing AI Products: Key Strategies 

    Selecting the right pricing model is critical for AI monetization. The most successful companies use one or a combination of the following approaches: 

    1. Usage-Based Pricing (Best for APIs & Enterprise AI) 

    • Charges customers per API call, token processed, or data analyzed.
    • Example: OpenAI’s pricing scales based on usage (a minimal metering sketch follows below). 
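
    As a minimal illustration of how this model turns activity into revenue, the sketch below meters token usage per model and prices it from a rate table. The rates are invented for the example and are not any vendor’s actual price list.

    ```python
    # Minimal usage-metering sketch; rates are illustrative, not a real price list.
    from collections import defaultdict

    RATES = {                     # $ per 1K tokens, hypothetical
        "small-model": 0.0005,
        "large-model": 0.0060,
    }


    class UsageMeter:
        def __init__(self) -> None:
            self._tokens = defaultdict(int)   # tokens used per model this period

        def record(self, model: str, tokens: int) -> None:
            self._tokens[model] += tokens

        def invoice(self) -> float:
            return sum(tokens / 1000 * RATES[model]
                       for model, tokens in self._tokens.items())


    meter = UsageMeter()
    meter.record("small-model", 2_000_000)  # many cheap calls
    meter.record("large-model", 150_000)    # fewer, higher-value calls
    print(f"amount due this period: ${meter.invoice():.2f}")
    # 2M * $0.0005/1K + 150K * $0.006/1K = $1.00 + $0.90 = $1.90
    ```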

    2. Tiered Subscription Pricing (Best for SaaS AI) 

    • Offers multiple pricing plans based on feature access or limits. 
    • Example: Jasper AI charges higher fees for businesses generating more AI content. 

    3. Freemium-to-Paid Conversion (Best for Consumer AI) 

    • Provides free access to basic features, with paid upgrades for advanced functionality.
    • Example: Grammarly’s free version drives user adoption before upselling premium features. 

    4. Enterprise Licensing (Best for B2B AI) 

    • Companies sell AI solutions as a one-time license or an annual contract.
    • Example: Palantir’s multi-million-dollar government contracts. 

    Key Takeaways for AI Founders and Investors 

    • Subscription AI can work, but retention is critical. 
    • API-first models scale quickly, but pricing and cloud costs must be managed carefully.
    • Enterprise AI is lucrative but requires capital, patience, and strong sales execution. 
    • A hybrid approach often provides the most stability and scalability.

    What’s Your Strategy? 

    The AI market is exploding, but monetization remains a major challenge. The most successful AI companies are not necessarily those with the best technology, but those that have a well-defined business model and execution plan. 

    For AI founders, the key question is not just, "What can we build?" but "How will we sell it?"


  • ZuperAI and KiranaPro Collaborate to Provide AI-Powered Retail Market Solutions

    Quick commerce platform KiranaPro has teamed up with B2B management platform ZuperAI to provide its users with AI-based retail market solutions, only days after enlisting PV Sindhu as an investor. As part of this collaboration, KiranaPro will incorporate ZuperAI’s “Matchmaking AI” technology to help consumers and merchants with product discovery, inventory optimisation, and supply chain efficiency. The partnership is expected to help merchants grow their businesses while making it easier for users to find products. The quick commerce startup hopes to create a tailored shopping experience while bridging the gap between consumers and retailers. Deepak Ravindran, the founder and CEO of KiranaPro, stated that this partnership is revolutionary for AI-powered business, opening up new possibilities for accuracy, customisation, and operational effectiveness.

    Joining the ONDC Network

    The quick commerce company recently joined ONDC, making it the first platform in India to access the nation’s network of more than 7 lakh registered merchants. The ONDC-powered platform will first begin operations in Hyderabad and Thiruvananthapuram before expanding further into Kerala. KiranaPro was founded in 2024 by Ravindran and Dipankar Sarkar with the goal of transforming traditional kirana (retail) establishments by providing them with a flexible income model and an AI-driven interface that helps them manage their digital operations. KiranaPro links local mom-and-pop shops directly with consumers, in contrast to other quick commerce platforms that rely on dark stores. Because of its collaboration with ONDC, the firm operates throughout India.

    Why Are Quick Commerce Companies Keeping Their Platforms Hi-Tech and Updated?

    To increase its presence in the hyperlocal retail market, KiranaPro also purchased Joper.app, a hyperlocal grocery delivery business, earlier this month. In addition to bolstering KiranaPro’s position in the hyperlocal commerce market, the acquisition of Joper.app guarantees local business owners superior tech-enabled solutions that enhance productivity and customer satisfaction, according to Deepak Ravindran, co-founder and CEO of KiranaPro. The move is in line with KiranaPro’s goal of enabling small merchants to take on the big quick commerce titans. KiranaPro and the merchants working with the brand will benefit from the knowledge and insights of Sumit Gorai (founder of Joper.app) on the mechanics of operating a retail business in India, which he frequently discusses on his YouTube channel.

    The growth coincides with intense competition in India’s quick commerce market. Leading companies like Zomato, Swiggy, and Zepto are rapidly expanding their quick commerce services, which allow them to deliver food and household necessities in as little as ten minutes. These companies have recently introduced a number of quick commerce services, such as Swiggy’s SNACC, Zomato’s Bistro, and Zepto Cafe, among many others. Additionally, Zomato’s Blinkit launched a 10-minute ambulance service last month. Notably, the three giants collectively recorded over $1 billion in revenue in FY24, and a survey indicates that sales in India’s quick commerce sector have increased by 280% over the past two years.


  • Mira Murati, Former CTO of OpenAI, Launches AI Research and Product Startup

    Mira Murati has started her own artificial intelligence (AI) company, “Thinking Machines Lab,” over five months after leaving her position as CTO of OpenAI. The company’s objective, according to Murati, who announced the launch on X, is straightforward: develop AI by making it widely applicable and intelligible through sound foundations, open science, and real-world applications. The goal of the AI research and product startup is to close the current gaps in AI and increase the systems’ general capability, understandability, and customisability.

    Although AI capabilities have significantly improved, significant gaps remain, according to the company’s blog post. The fast-developing capabilities of frontier AI systems are not well understood by the scientific community. The best research labs hold most of the knowledge about how these systems are trained, which restricts both the public conversation about AI and people’s capacity to use it effectively. Furthermore, despite their potential, people still find it challenging to adapt these systems to their own needs and values.

    What Will Thinking Machines Lab Offer?

    Rather than concentrating on creating completely autonomous AI systems, Thinking Machines Lab will create customised AI systems with sophisticated multimodal capabilities. According to the AI firm, it intends to regularly release technical papers, blog entries, and code that emphasise cross-industry human-AI collaboration. Many of the roughly 30 workers at Thinking Machines Lab have prior experience with firms like OpenAI, Google DeepMind, Character AI, and Mistral AI. After working with OpenAI for six years, Murati departed the company in September of last year. She stated that she was taking a break to “do her own exploration” at the time of her departure. She is Thinking Machines Lab’s CEO.

    Barret Zoph, the CTO of Thinking Machines Lab, left OpenAI in September of last year. The company’s principal scientist is John Schulman, who departed OpenAI for rival company Anthropic in August of last year. According to reports, Murati is negotiating to raise $100 million for her new AI business from unidentified VC firms. In its blog post, the company neither confirmed nor denied whether it had raised money.

    Growing Network of AI Startups

    Thinking Machines Lab is the most recent addition to the already saturated AI startup market. In the global competition to develop generative AI models, it will face off against industry titans including OpenAI, Anthropic, Meta, Google, and Microsoft. India’s increasing need for AI hardware and software has opened the door for a new wave of entrepreneurs who prioritise using GenAI technology in consumer and corporate applications over infrastructure development. India is home to more than 200 GenAI businesses, which have raised a total of $1.2 billion in funding since 2020, according to the report “The Rise of India’s GenAI Brigade.”


  • ITU and DoT Collaborate on AI-Powered Digital Twins

    A strategic relationship aimed at improving AI-driven digital twin technologies has begun with the signing of a Letter of Intent (LoI) between the Department of Telecommunications (DoT) and the International Telecommunication Union (ITU). The goals of this partnership are to advance sustainable development, create international standards, and stimulate innovation in infrastructure planning. The LoI will lay the groundwork for a number of projects that will incorporate next-generation technologies—such as digital twins, AI-driven solutions, and IMT-2030 technologies—into frameworks that will help vital industries like healthcare, urban development, and transportation.

    In an effort to bolster India’s position as a global leader in digital connectivity, Dr. Neeraj Mittal, Secretary of the Department of Telecommunications, signed the LoI while on an official visit to Geneva. In talks with ITU leadership, including ITU Secretary-General Ms. Doreen Bogdan-Martin, Dr. Mittal spoke about India’s leadership in 5G and 6G technologies, AI for digital transformation, and cybersecurity frameworks.

    India Cementing Its Strong Base in the Telecommunications Field

    The ITU’s Partner2Connect programme, which aims to close the global digital divide, was another topic of discussion. India reiterated its dedication to backing ITU projects, especially those pertaining to skill development and global connectivity.

    During his visit, Dr. Mittal proposed that India host the ITU Plenipotentiary Conference in 2030. The proposal drew positive reactions, and further talks will take place at the next ITU Council Meeting. India’s recent achievement in hosting the World Telecommunication Standardisation Assembly (WTSA-2024) in New Delhi gives this plan a strong basis. By hosting the conference, India would further solidify its position as a focal point for international policy discussions on ICT legislation and telecommunications, thus influencing the direction of global connectivity.

    The partnership between the DoT and ITU marks an important development in the role of telecommunications. Next-generation mobile communication technologies are platforms for flexible and dynamic infrastructure planning, not just for connectivity. AI and digital twins can be used to deliver real-time, intelligent data that radically alters the planning, design, and implementation of infrastructure and cities. Digital twins, which provide virtual versions of actual systems, enable better planning, monitoring, and management of infrastructure projects, increasing sustainability and efficiency.

    What Can AI-Driven Digital Twin Technologies Offer?

    AI-driven digital twin technologies can produce pervasive intelligence, enabling open, networked systems that transform stakeholder collaboration on infrastructure projects. These developments make it possible to approach infrastructure and urban planning with greater flexibility and responsiveness, resulting in solutions that are better able to adjust to shifting circumstances. By opening up new business models, this strategy makes it possible to provide scalable, data-driven solutions that support long-term growth in vital industries. A comprehensive approach will make future infrastructure durable and innovative, able to adapt to changing needs.

    The LoI lists a number of important areas for cooperation. Knowledge sharing and capacity building will be a key focus, promoting the exchange of ideas from projects such as the ITU’s Citiverse platform and the DoT’s Sangam project. This partnership will enable better data integration and cross-sectoral cooperation. The creation of international standards through contributions to ITU-T Study Group 20, which focuses on digital twins, smart cities, and the Internet of Things, is another crucial area. The objective is to develop international standards and procedures that guarantee the interoperability and scalability of AI-driven solutions. To validate the transformative potential of digital twin technologies, DoT and ITU will also set up sandbox environments for testing and trial initiatives. To promote more participatory governance, AI-powered platforms will also be used to involve citizens in urban planning.

    Lastly, by customising solutions to fit the unique requirements of different nations, the partnership will concentrate on privacy-enhancing methods in ICT measurement and on integrating AI models into digital twins. This alliance ushers in a new era of global infrastructure planning that promotes sustainability and creativity, and it is an important step towards building a more sustainable and interconnected future for global infrastructure.


  • India Ought to be at the Forefront of the AI Revolution: Sam Altman

    India is an “incredibly important” market for the massive artificial intelligence (AI) company, according to Sam Altman, founder and CEO of OpenAI. Altman stated that India ought to be among the front-runners of the AI revolution at a fireside chat with Ashwini Vaishnaw, the minister of information technology (IT), on Wednesday, February 5. He described the nation’s adoption of the technology thus far and the use cases that have been developed on top of the large language models (LLMs) that are already in place as “really quite amazing.”

    The CEO of OpenAI added that the country’s user base has tripled in the last 12 months, making it the company’s second-largest market worldwide. When asked what areas India should prioritise in the field of artificial intelligence, Altman stated that he truly wanted to reaffirm the remarks regarding the full-stack approach.

    However, given what Indians are creating with AI at every stage of the stack (chips, models, and all the amazing applications), India ought to be taking the lead. India ought to be one of the pioneers of the AI movement. Seeing what the nation has accomplished is quite astounding. Altman arrived in India late on the evening of February 4 while on a multi-country global tour.

    Meeting Government Heads and the Big Players of the Indian Startup Sector

    Earlier in the day, he met with several Indian company owners and venture capitalists, as well as IT Minister Vaishnaw. He is also anticipated to meet with Prime Minister Narendra Modi. He held private meetings with startup founders such as Vijay Shekhar Sharma of Paytm, Gaurav Munjal of Unacademy, Srikanth Velamakanni of Fractal, Aloke Bajpai of ixigo, and Tushar Vashisht of HealthifyMe.

    Prominent investors Prayank Swaroop of Accel, Hemant Mohapatra of Lightspeed Venture Partners, and Rajan Anandan and Harshjit Sethi of Peak XV Partners also attended the meeting.

    Founders Bat for India-Centric Pricing

    According to media reports, tech entrepreneurs primarily pitched the company for India-centric pricing at the founders’ meeting with Altman. Indian founders informed Altman that global pricing might not be effective in India and that major tech companies like Microsoft, Google, and Amazon already have pricing tailored to India.

    The founders also pitched the company’s CEO on ensuring that OpenAI’s products, including its APIs, are more reasonably priced for Indian developers and businesses. Although Altman refrained from making any commitments, he stated that the company is considering customised pricing for the Indian market. The OpenAI CEO added that as the company develops more advanced and powerful models, he expects costs to drop “rapidly” over time.

    Kunal Bahl, cofounder of Snapdeal and Titan Capital, acknowledged in a post on X that OpenAI product prices are "high" and that they must drop "dramatically" in order to be widely adopted. "They acknowledge that the basic models can only go so far ('80-90% of the way') and that a strong application layer will be required for particular industry/company contexts in order to raise it to 100%. For the numerous Indian businesses developing at the application layer, this is crucial," Bahl continued.

    Tug of War Between OpenAI and Chinese DeepSeek

    The tour takes place at a time when OpenAI is facing significant challenges due to the emergence of DeepSeek, a Chinese AI startup that claims to have developed AI models that can compete with the best models from US firms like OpenAI, Meta, and Google at a far lower cost. India has one of the biggest populations and developer pools in the world.

    OpenAI will be able to increase its earnings by establishing a physical base in the nation. The trip coincides with a wave of copyright infringement cases against the AI giant for allegedly exploiting local digital platforms’ and book publishers’ content to train its chatbot ChatGPT without permission.

    Meanwhile, OpenAI has apparently started discussions about data localisation in an effort to ward off additional regulatory obstacles. As part of this, the corporation wants to store its Indian consumers’ data within the country. Since India is one of the company’s largest developer ecosystems, OpenAI is naturally seeking ways to increase its presence there.

    In preparation for the Digital Personal Data Protection Act of 2023, it has already started discussing ways to localise the data of its Indian users in domestic data centres. A person with knowledge of the development told Livemint that the drive to localise data operations is likely to start soon.


  • To Support Autonomous AI Throughout its Product Line, Zoho Introduces Zia Agents

    On February 4, the leading SaaS company Zoho Corporation announced the launch of Zia Agents, Agent Studio, and Agent Marketplace, further expanding its own AI platform, Zia. Businesses will be able to access, create, and implement intelligent, self-governing digital agents throughout their organisations thanks to these new capabilities.

    Pre-built, task-specific Zia Agents will be made available for preview by Zoho and its IT management business, ManageEngine, starting today. In the upcoming weeks, the firm plans to roll them out across its range of more than 100 products, according to a release.

    “The pace of disruption and quality of innovation we are seeing in our industry right now has encouraged me to focus on my passion area, technology,” said Sridhar Vembu, co-founder and chief scientist of Zoho Corporation. “I will lead a number of extensive R&D projects for the company, starting with AI, and dedicate more time to practical technical work.”

    He continued by saying that by leveraging Zoho’s extensive engineering experience, its own data centres, and its shared data model, the brand would create strong and practical solutions that increase customer value while upholding its dedication to customer flexibility and data privacy.

    Zoho Spreading its Wings

    The company’s core AI, Zia, was introduced in 2015, and it allows for contextual and intelligent activities throughout its app ecosystem. Ask Zia, a system-wide conversational assistant that improved workflows by summarising interactions, evaluating trends, and automating activities, was introduced by Zoho in 2018.

    Zoho is now extending its AI capabilities with Zia Agents, showcasing a number of pre-built autonomous agents that will be rolled out in the upcoming months, including an Account Manager Agent, SDR Agent, HR Agent, Customer Support Agent, IT Help Desk Agent, and SalesCoach Agent.

    Zia Agent Studio

    Additionally, Zoho will launch Zia Agent Studio, a platform that enables clients, partners, and developers to create and implement unique AI agents with inherited skill sets. These agents may then be made available through Zoho’s Agent Marketplace for broader use.

    Zia Agent Studio, which is intended for low-code and no-code development, gives customers access to pre-built Zia Skills, Zoho’s ecosystem tools, unified data, and many language models in addition to enabling them to construct autonomous agents that are pertinent to their needs.

    Organisations may access, deploy, and reuse specialised AI agents by publishing agents created using Zia Agent Studio on the Agent Marketplace. While partners and developers can contribute bespoke agents tailored to certain company needs, Zoho will provide a pre-built roster of agents.

    This announcement coincides with Zoho’s record-breaking expansion, which saw the company gain 110,000 new clients worldwide in 2024, reaching 850,000+ enterprises across all industries. A small number of clients will be the first to get access to the new AI features, with availability growing every month.

