Industry

Commercial developments and partnerships

8,637 articles · Updated daily
Showing 12 of 8,637 articles

Intel Announces 2026 EPIC Supplier Award Recipients

Annual program recognizes suppliers that exemplify Intel’s standard of excellence.

Intel today announced the recipients of the 2026 Intel EPIC Supplier Award, honoring suppliers that demonstrate the highest levels of excellence across Intel’s global value chain. As the company delivers product leadership and world-class foundry services, its suppliers play an essential role in enabling technological progress and strengthening supply chain resilience for customers. This year’s award recipients reflect a world-class commitment to continuous improvement, collaboration and performance excellence.

“Our 2026 Intel EPIC Supplier Award recipients reflect the strength of Intel’s global supply chain and the trust we’ve built together. Their partnership, focus on quality and commitment to continuous improvement help us deliver for our customers every day,” said Lip-Bu Tan, chief executive officer of Intel. “I want to thank this year’s honorees for their collaboration, resilience and the high standards they bring to our industry.”

Intel founded its recognition program in 1987 to establish an objective system for measuring and improving supplier quality, innovation and performance. Each year, the Intel EPIC Supplier Award recognizes the top performers in the Intel supply chain for their dedication to “EPIC” performance: Excellence, Partnership, Inclusion and Continuous Improvement. Of the thousands of Intel suppliers around the world, only a few hundred qualify to participate in the EPIC Supplier Program. Learn more at the Intel EPIC Supplier Program and Awards site.

2026 Intel EPIC Supplier Award recipients:
- AEM Holdings Ltd. – Excellence in Business Enablement
- AGC Inc. – Excellence in Technology Partnership on EUV Mask Blanks
- Applied Materials – Excellence in Technology Deve

Intel Quantum

Pearson CTO has the solution to the AI productivity gap

Dave Treat is Pearson’s chief technology officer.

As AI is integrated into the workplace, everyone is asking what AI will do to jobs. While headlines about rising unemployment fuel debate about the future impact, there is a more immediate question: if AI is everywhere, why aren’t workplaces seeing the productivity jump they were promised? If AI is spreading quickly while output per worker barely moves, something fundamental isn’t working. Expanding access to AI tools matters, but access alone won’t close the productivity gap. Learning is what turns technology into results. That distinction is where much of the current conversation goes wrong.

The real constraint on productivity

What’s holding productivity back isn’t technology but learning capacity. Learning is the missing link between AI’s promise and its performance. But learning is hard, and it’s supposed to be. It takes courage: courage from business leaders to integrate it effectively, and courage from individuals to adapt to new technologies. This demands a new approach to learning, supported by new solutions that meet learners in the flow of work. Human judgment, AI assistance and feedback need to reinforce one another continuously, not through one-off trainings but inside daily operations. Productivity improves when work is redesigned at the task level and learning is built directly into how that work gets done.

Take a customer-support team. AI can draft a response in seconds, but the real productivity gain comes from the surrounding ecosystem, which requires human input: setting up verification, or scheduling a lightweight review step for tougher cases. Over time the prompts and policies improve, along with the human-led team, because the learning is happening naturally inside the workflow instead of in a one-off training course.

Learning embedded in the flow of work

So, what steps can

Quantum Computing UK (Tech Monitor)

Accenture partners with Databricks on scaling enterprise AI solutions

Accenture and Databricks have announced the launch of the Accenture Databricks Business Group as part of an expanded partnership aimed at helping organisations implement Databricks’ data and AI platform. The initiative aims to support businesses in scaling AI applications and agents, drawing on recent Databricks developments such as Lakebase for serverless Postgres databases, Genie for conversational data queries, and Agent Bricks for building AI agents on enterprise data.

The companies are responding to the challenges organisations face when trying to scale AI across fragmented data systems and legacy infrastructure. They aim to centralise data governance, help AI move from pilot stages to operational use, and improve the accessibility of data and AI across business functions.

Accenture and Databricks are already working with clients in various sectors. US retailer Albertsons Companies is using their services to develop pricing intelligence solutions for merchants and category managers; chemical firm BASF has introduced a digital assistant named FOX within its finance division; and Kyowa Kirin International has modernised its data management infrastructure using the Databricks Lakehouse platform to improve data reliability and compliance.

Accenture chair and CEO Julie Sweet said: “With Databricks, we’re helping clients modernise their data foundation so they can build, scale and govern AI applications and agents with confidence.”

The new business group will be staffed by more than 25,000 professionals trained in Databricks technology, helping clients deploy Lakebase, Genie, Agent Bricks and Lakehouse solutions across industries including financial services, retail, life sciences, telecommunications and the public sector. The companies report an increase in adoption of multi-agent systems within enterp

Quantum Computing UK (Tech Monitor)

IBM completes $11bn Confluent acquisition

IBM has completed its previously announced $11bn acquisition of data streaming platform Confluent, a move to strengthen real-time data streaming for enterprise AI. Under the terms of the deal, announced in December 2025, IBM purchased all outstanding common shares of Confluent listed on Nasdaq for $31 per share in cash.

Confluent’s data streaming platform, which is based on Apache Kafka, is claimed to be used by more than 6,500 enterprises worldwide, including 40% of Fortune 500 companies. Through the acquisition, IBM aims to integrate Confluent’s real-time data streaming capabilities into its broader software portfolio, particularly targeting AI and automation applications across hybrid and on-premises environments.

IBM’s intent is to address a significant challenge in enterprise AI deployments: improving the availability and governance of timely operational data. The company notes that in many organisations, operational data remains fragmented and delayed across multiple business systems, creating obstacles for AI models that depend on current information at scale. Confluent’s technology enables continuous processing, connection and governance of event data as it is generated, an approach IBM says has already been adopted by customers in sectors such as financial services, healthcare, manufacturing and retail.

Confluent CEO and co-founder Jay Kreps said: “Since our founding, Confluent’s mission has been to set the world’s data in motion, making data streaming as foundational to the enterprise as the database. Joining IBM allows us to accelerate that mission at a much greater scale.

“IBM’s global reach and deep enterprise relationships will help us go further, faster. As enterprises move from experimenting with AI to running their business on it, helping data flow continuously across the business has never mattered more.”

The i
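The core abstraction behind Kafka, and therefore Confluent’s platform, is an append-only event log: producers append events in arrival order, and each consumer tracks its own offset, so many independent downstream systems can process the same stream at their own pace. A minimal, illustrative stdlib-only sketch of that model (this is not the Kafka client API; the class and names are hypothetical):

```python
# Illustrative sketch of the append-only log model behind Apache Kafka
# (not Kafka's actual API): producers append events, and each consumer
# advances its own offset, so independent readers share one stream.

class EventLog:
    def __init__(self):
        self.events = []   # append-only record of events, in arrival order
        self.offsets = {}  # consumer name -> next offset to read

    def append(self, event):
        self.events.append(event)

    def poll(self, consumer):
        # Return events this consumer has not yet seen, then advance its offset.
        start = self.offsets.get(consumer, 0)
        batch = self.events[start:]
        self.offsets[consumer] = len(self.events)
        return batch


log = EventLog()
log.append({"order": 1, "amount": 30})
log.append({"order": 2, "amount": 75})

print(log.poll("fraud-model"))  # both events so far
log.append({"order": 3, "amount": 12})
print(log.poll("fraud-model"))  # only the new event
print(log.poll("billing"))      # an independent consumer sees all three
```

Because offsets are per-consumer rather than destructive reads, the same operational data can feed an AI model, a billing system and an audit trail without any of them interfering with the others, which is the property IBM is buying into.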

Quantum Computing UK (Tech Monitor)

Nvidia launches Dynamo 1.0 AI inference operating system

Nvidia has commenced production of Dynamo 1.0, an open-source operating system designed for large-scale AI inference. Dynamo 1.0 is already in use across a range of global cloud service providers, AI-native firms and enterprises, and the software is available immediately to developers worldwide.

Dynamo 1.0 works with the Nvidia Blackwell platform to manage GPU and memory resources for AI workloads across data centre clusters. It divides inference tasks between GPUs and uses advanced traffic management tools to move data efficiently between GPUs and storage systems, reducing memory bottlenecks and computational overheads. For agentic AI applications and processes involving lengthy prompts, the system routes requests to GPUs that already hold relevant data from earlier steps, offloading that data when it is no longer needed. Recent benchmarks indicate that Dynamo can increase the inference performance of Blackwell GPUs by as much as seven times, reducing the operational cost per token for users running millions of GPUs.

As open-source software, Dynamo 1.0 aims to address the challenges of scaling AI inference in data centres, where varying request sizes and unpredictable demand make resource orchestration complex. Dynamo integrates natively with leading open-source AI frameworks such as LangChain, llm-d, LMCache, SGLang and vLLM through optimisations made possible by the Nvidia TensorRT-LLM library. Core components of Dynamo, such as KVBM for memory management, NIXL for GPU-to-GPU data movement and Grove for scaling, are also released as standalone modules. Nvidia has contributed TensorRT-LLM CUDA kernels to the FlashInfer project to support their integration into further open-source initiatives.

The Nvidia inference platform incorporating Dynamo is supported by major cl
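The routing behaviour described above, sending a request to the GPU that already holds relevant data from earlier steps, is a form of cache-aware scheduling: prefer the worker with the longest matching cached prompt prefix, and fall back to load balancing on a miss. A minimal, illustrative stdlib-only sketch of the idea (this is not Nvidia’s implementation; all names are hypothetical):

```python
# Illustrative sketch of cache-aware request routing (not Nvidia Dynamo's
# implementation): prefer the worker that already holds the longest matching
# prompt prefix in its KV cache; break ties by current load.

class Worker:
    def __init__(self, name):
        self.name = name
        self.cached_prefixes = set()  # prompt prefixes whose KV blocks are resident
        self.load = 0

    def longest_prefix_hit(self, prompt):
        # Length of the longest cached prefix of this prompt (0 on a miss).
        return max((len(p) for p in self.cached_prefixes
                    if prompt.startswith(p)), default=0)


def route(workers, prompt):
    # Maximise cache reuse first, then prefer the least-loaded worker.
    best = max(workers, key=lambda w: (w.longest_prefix_hit(prompt), -w.load))
    best.load += 1
    best.cached_prefixes.add(prompt)  # after serving, the full prompt's KV is cached
    return best


workers = [Worker("gpu0"), Worker("gpu1")]
workers[0].cached_prefixes.add("You are a helpful agent.")

w = route(workers, "You are a helpful agent. Summarise this ticket.")
print(w.name)  # gpu0: it already holds the shared system-prompt prefix
```

The payoff in agentic workloads is that successive steps of one task share a long prefix (system prompt plus conversation history), so routing them to the same worker avoids recomputing that prefix’s KV cache on every step.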

Quantum Computing UK (Tech Monitor)

Nebius inks $27bn deal with Meta for AI cloud capacity

Dutch AI infrastructure company Nebius Group has entered into a long-term agreement with Meta to supply AI infrastructure, with the deal valued at up to $27bn. Under the arrangement, Nebius will provide $12bn worth of dedicated capacity across several locations, in one of the earliest large-scale deployments of Nvidia’s Vera Rubin platform; the company plans to begin delivering this capacity in early 2027. In addition, Meta has pledged to purchase further compute power from upcoming Nebius clusters, potentially bringing the total value of its commitment to $15bn over five years. Nebius intends to sell available capacity to third-party clients in its AI cloud business, with any remaining resources allocated for Meta’s use.

The partnership comes as Nebius reports continued expansion in its AI cloud operations and leaves its 2026 financial guidance unchanged. Nebius founder and CEO Arkady Volozh said: “We are pleased to expand our significant partnership with Meta as part of securing more large, long-term capacity contracts to accelerate the build-out and growth of our core AI cloud business. We will continue to deliver.”

In a separate development, announced in March 2026, Nvidia revealed a $2bn investment in Nebius to support the expansion of AI cloud services aimed at both startups and enterprises. This collaboration is set to deliver next-generation hyperscale cloud offerings for AI workloads. Under the Nvidia partnership, Nebius will deploy more than 5 gigawatts (GW) of AI compute globally by 2030, including several large data centres in the US. The cooperation includes access to early product samples, design resources, system software support and technical guidance for constructing AI-focused facilities. Nebius will integrate multiple Nvidia technologies into its platform, including the Rubin platform, Vera CPUs and BlueField storage systems, aiming to improve GPU fleet management thr

Quantum Computing UK (Tech Monitor)

Intel Xeon 6 used as Host CPUs in NVIDIA DGX Rubin NVL8 Systems

Intel Xeon is used as the host CPU, underscoring its role in orchestrating, scaling and securing modern AI infrastructure.

What’s New: Today at NVIDIA GTC 2026, Intel announced that Intel Xeon 6 is being used as the host processor for NVIDIA DGX Rubin NVL8 systems. This highlights Xeon’s role in providing architectural continuity and scalability for GPU-accelerated AI systems as workloads shift toward massive, real-time inference.

“AI is shifting from large-scale training to real-time, everywhere inference, driven by agentic AI and reasoning systems,” said Jeff McVeigh, corporate vice president and general manager, Data Center Strategic Programs at Intel. “In this new era, the host CPU is mission-critical. It governs orchestration, memory access, model security and throughput across GPU-accelerated systems. Intel Xeon 6 delivers leadership performance, efficiency and compatibility with the extensive x86 software ecosystem that customers rely on to scale inference workloads.”

Why It Matters: As organizations continue to deploy AI systems, inference is increasingly defined not only by GPU throughput but also by CPU-led system performance, with the host CPU shaping overall cluster efficiency and total cost of ownership. The host CPU is also responsible for critical functions such as memory management, task orchestration and workload distribution, while ensuring the security, reliability and operational continuity essential to modern AI infrastructure.

Building on these system-level requirements, Intel Xeon processors are used as the host CPU for DGX Rubin NVL8 systems because of their support for fast memory speeds, balanced performance across a range of workloads, lower long-term total cost of ownership (TCO), and a mature, enterprise-proven software ecosystem. Intel’s robust PCIe and I/O capabilities further strengthen Xeon’s role as a high-bandwidth, low-latency platform across diverse workloads. Efficient performance per

Intel Quantum

Zendesk to acquire Forethought to expand resolution platform

Zendesk has reached a definitive agreement to acquire Forethought, aiming to expand the autonomous AI capabilities of its Resolution Platform. The company anticipates that, during the current year, AI agents will manage more customer service interactions than human agents, signalling a major change in the customer service sector.

Through the proposed acquisition, Zendesk plans to integrate Forethought’s technology to introduce self-learning AI agents capable of generating and executing complex workflows across multiple platforms and channels. Currently, Zendesk’s AI agents handle more than 80% of customer interactions from start to finish for a wide range of clients, with human and AI agents working together to address customer needs. The existing Resolution Learning Loop feature learns directly from each conversation, eliminating the need for manual retraining and enabling ongoing improvements in performance.

By incorporating Forethought’s AI solutions, Zendesk intends to offer advanced workflow capabilities, support for additional service channels including voice automation, and integration with enterprise systems even where APIs are not available. Following completion of the acquisition, customers using Forethought products are expected to see continued service without disruption; Zendesk clients will gain access to enhanced AI features and a more unified service experience, while new users can adopt these solutions independently of the broader Zendesk platform.

Zendesk CEO Tom Eggemeier said: “Forethought’s advanced capabilities perfectly align with our vision for agentic service. Together, we will be scaling self-improving AI that learns from every interaction.

“Th

Quantum Computing UK (Tech Monitor)

Dr. Oz advocates for agentic AI for every member of Medicare

“Kill the clipboard” is a mantra at CMS as the agency advocates for patients to be able to scan a QR code to bring their data to their provider.

Medicare & Medicaid | By Susan Morse, Executive Editor | March 13, 2026, 10:54 AM

Photo: From left, Hal Wolf, HIMSS president and CEO; CMS Administrator Dr. Mehmet Oz; Kimberly Brandt, COO and CMS deputy administrator; and Amy Gleason, administrator and senior advisor, DOGE and CMS. (Susan Morse/HFN/HIMSS)

LAS VEGAS - CMS Administrator Dr. Mehmet Oz is advocating for agentic AI for every member of Medicare, and for patients to use more digital tools to improve health and lower costs. Technology should be used more at the beginning of the care cycle and in the home, said Oz, speaking before a crowd of healthcare professionals at the HIMSS Global Health Conference & Exhibition on Thursday.

Healthcare is inflationary, Oz said. Doctors’ pay is rising at about the rate of inflation, but hospital costs are rising at twice that rate, due in large part to manpower issues, he said. “We have such a wonderful opportunity to use technology to be a deflationary force,” Oz said. “I’m very confident we can get better quality care.”

Oz also asked healthcare professionals in the audience to advocate for tech for patients. “I’m here to recruit you,” he said. “We have a real crisis.” Throwing money at a problem will only go so far, Oz indicated. “We look at the opportunity of technological advances,” Oz said. HIMSS needs to embrace the reality and CMS needs to reach people, Oz said, to save lives and manage a $1.7 trillion business.

Oz was speaking with Hal Wolf, president and CEO of HIMSS, in a session titled “A Revolutionary Vision for American Healthcare Transformation: CMS’s Roadmap for Now and the Future.” Joining Oz were Kimberly Brandt, COO and CMS deputy administrator, and Amy Gleason, administrator and senior advisor, Department of Government Efficiency (DOGE) and CMS.

The federal gove

Healthcare Finance News