3+3 Event Highlights
Hon Hai has dedicated itself to developing R&D capabilities and investing in new industries through the introduction of its "3+3" (industry and technology) strategy.
Hon Hai has prioritized three key industries: electric vehicles, digital health, and robotics. Each has significant growth potential, with a current market scale of USD 1.4 trillion and a compound annual growth rate of over 20%. Hon Hai's own industrial experience and technology advantages will foster their future development and growth.
The Group is also committed to developing artificial intelligence, semiconductors, and next-generation communication technologies, the building blocks of the Group's technology strategy.
Hon Hai showcases its latest innovations and research results at its annual Hon Hai Tech Day (HHTD), sharing the achievements of the "3+3" strategy.
Event Information
Hon Hai Research Institute Demonstrates Superiority of Shallow Quantum Circuits Beyond Prior Understanding
2025/04/29
Breakthrough study published in Nature Communications

29 April 2025, Taipei, Taiwan – Hon Hai Research Institute (HHRI), in a milestone collaborative effort, has demonstrated that parallel quantum computation can exhibit greater computational power than previously recognized, with its research results accepted for publication in the prestigious journal Nature Communications.

Titled "Unconditional advantage of noisy qudit quantum circuits over biased threshold circuits in constant depth," the latest HHRI paper achieves another milestone in quantum computing research.

Figure 1: Classes of circuits and the corresponding problems that could be efficiently solved by them.

This breakthrough study establishes a fundamental advancement in our understanding of quantum circuit capabilities. The research demonstrates that a class of problems, known as ISMRP, can be efficiently computed by shallow quantum circuits, but not by any polynomial-sized classical biased threshold circuits (bPTC0(k)). This proves a previously unverified advantage of shallow quantum circuits.

While many current claims of "quantum advantage" rest on unproven assumptions and remain experimentally challenging to verify, this study presents an unconditional proof of quantum circuit supremacy, without any computational hardness assumptions. Notably, the team proved that even when quantum circuits are subject to noise, shallow qudit quantum circuits built from local logic gates can solve problems that classical polynomial-sized biased threshold circuits fundamentally cannot. The finding highlights the long-term potential and practical applicability of quantum computing.

This breakthrough solidifies Taiwan's growing influence in the field of quantum computing and showcases the deep commitment and accumulated expertise of Hon Hai Research Institute, a key R&D source for Hon Hai Technology Group (Foxconn), the world's largest electronics manufacturing service provider, in this critical area of research.
HHRI will continue to push forward in quantum technology to contribute to global innovation and industrial advancement.

The research was a collaborative effort led by Dr. Ming-Hsiu Hsieh, Director of HHRI's Quantum Computing Research Center, along with institute researcher Leandro Mendes and PhD intern Michael de Oliveira. Collaborating with HHRI was Sathyawageeswar Subramanian, a senior research fellow from the Department of Computer Science and Technology at the University of Cambridge in the United Kingdom.

The article is published in Nature Communications (Volume 16, Article number: 3559, 2025), which ranks fifth globally in Google Scholar's h5-index metrics with a score of 375, underscoring the high academic impact of this achievement. Access the full publication here: https://doi.org/10.1038/s41467-025-58545-4

About Hon Hai Research Institute
The institute, founded in 2020 and part of Hon Hai Technology Group (Foxconn), has five research centers. Each center has an average of 40 high-technology R&D professionals, all of whom are focused on the research and development of new technologies, the strengthening of Foxconn's technology and product innovation pipeline, support for the Group's transformation from "brawn" to "brains", and the enhancement of the competitiveness of Foxconn's "3+3" strategy.
Hon Hai Technology Group (Foxconn) Unpacks Artificial Intelligence Progress At NVIDIA GTC
2025/03/19
Humanoid robotics, GB300 NVL72 infrastructure, and digital twins scaling out AI factories

18 March 2025, Taipei, Taiwan, and San Jose, California – Hon Hai Technology Group ("Foxconn") (TWSE:2317) today unveiled the first comprehensive look at its progress toward humanoid robotics, unrivalled infrastructure for the next-generation NVIDIA GB300 NVL72 platform, and digital twins accelerating the AI factories of the future, at GTC 2025, the premier conference on artificial intelligence.

The world's largest electronics manufacturing service provider brings a tour de force to San Jose this year, with more than double last year's delegation size and participation in six GTC sessions, unpacking expertise ranging from frontier AI to hybrid and nursing robots to the deployment of advanced physical AI and simulation solutions built on NVIDIA Omniverse and the Mega Omniverse Blueprint, sparking innovation in autonomous driving, smart manufacturing, and digital healthcare.

"You know Foxconn as the biggest manufacturer of the most advanced AI servers running on the planet's fastest superchips. We do more than that. At GTC we are showcasing how our Tier-1 excellence extends to Smart City, Smart Manufacturing and Smart EV," said Foxconn Chairman Young Liu, leading a corporate delegation of more than 70 engineers and executives. "This stellar event brings together great partners to exchange insights on where AI is heading. We are delighted to once again support NVIDIA."

On rare display at Foxconn's GTC Booth 323, a model of the next-generation GB300 NVL72 server rack, designed and developed in collaboration with NVIDIA, visually demonstrates the AI infrastructure behind the training and inference of trillion-parameter large language models (LLMs), serving as the computational engine of the AI Factory.
From the manufacturing of the NVIDIA GB200 NVL72 to GB300 NVL72 platforms, Foxconn is committed to ensuring optimal computing performance and is qualified as a PBR (Pilot Build Request) Partner by NVIDIA. The advanced superchip ecosystem is accelerated by the NVIDIA Blackwell architecture and the superior liquid-cooling solutions of Foxconn subsidiary Ingrasys, which utilize digital twin simulation solutions built on NVIDIA Omniverse to optimize server cooling and data center designs for efficient and stable operations.

Foxconn's optimization of smart manufacturing excellence is happening across its large global footprint, complemented by NVIDIA AI solutions, including the NVIDIA AI Blueprint for video search and summarization (VSS) for operations and safety monitoring. Alongside real-time simulations of factory layouts and fleet operations on the exhibit floor showcasing how AI drives real-world scenarios, the GTC session – How to Use NVIDIA Omniverse on Smart Factory Design: The Fii Omniverse Digital Twin Project on NVIDIA GB200 Grace Blackwell Superchip Production Line – will tease out how Foxconn's sustainable lighthouse factories can be replicated and exported to industry partners as solutions to scale out the AI factories of the future.

Foxconn, a Diamond sponsor this year, has also set up a Robotics Zone to exhibit its Hybrid Robot, which supports semiconductor and automation needs with advanced vision recognition and precision mobility. The GTC session – Reinventing Smart Manufacturing: How Foxconn Builds and Deploys an AI Workforce – will detail how the AI Factory can deliver agentic AI and physical AI applications such as Factory GPT and embodied intelligence robots. Foxconn's Nurabot, a nursing collaborative robot that optimizes medical workflows and enhances patient care, is debuting at GTC and is scheduled to be deployed into partner hospitals in Taiwan later this year.
During the on-demand GTC session – Transform Patient Care With Digital Twins and Nursing Collaborative Robots – senior Foxconn executives will detail, for the first time at a major conference, the first digital twin of a busy hospital ward in Taiwan. In another first, the GTC session – From Open Source to Frontier AI: Build, Customize, and Extend Foundation Models – will detail FoxBrain, the first Traditional Chinese Large Language Model (LLM) with reasoning capabilities, which is expected to become an important engine driving the upgrade of Foxconn's three major platforms: Smart Manufacturing, Smart EV, and Smart City.

Meet a cool avatar at the on-demand GTC session – Improving Road Safety with GenAI, Metropolis, and NVIDIA Omniverse – and pick up a unique chocolate token at our GTC Booth 323. Information on the exhibit hall and hours can be found here.

More on Foxconn's GTC sessions:
- How to Use NVIDIA Omniverse on Smart Factory Design: The Fii Omniverse Digital Twin Project on NVIDIA GB200 Grace Blackwell Superchip Production Line (Presented by Foxconn) [S74429]
- Reinventing Smart Manufacturing: How Foxconn Builds and Deploys an AI Workforce [S72841]
- From Open Source to Frontier AI: Build, Customize, and Extend Foundation Models [S74035]
- Improving Road Safety with GenAI, Metropolis, and NVIDIA Omniverse [S74446]
- Transform Patient Care With Digital Twins and Nursing Collaborative Robots [S74078]
- Physical AI for the Next Frontier of Industrial Digitalization [S73232; panel session]
Hon Hai Research Institute Launches Traditional Chinese LLM With Reasoning Capabilities
2025/03/10
10 March 2025, Taipei, Taiwan – Hon Hai Research Institute announced today the launch of the first Traditional Chinese Large Language Model (LLM), setting another milestone in the development of Taiwan's AI technology with a more efficient, lower-cost model training method completed in just four weeks.

The institute, which is backed by Hon Hai Technology Group ("Foxconn") (TWSE:2317), the world's largest electronics manufacturer and leading technological solutions provider, said the LLM – code-named FoxBrain – will be open sourced and shared publicly in the future. It was originally designed for applications in the Group's internal systems, covering functions such as data analysis, decision support, document collaboration, mathematics, reasoning and problem solving, and code generation. FoxBrain not only demonstrates powerful comprehension and reasoning capabilities but is also optimized for Taiwanese users' language style, showing excellent performance in mathematical and logical reasoning tests.

"In recent months, the deepening of reasoning capabilities and the efficient use of GPUs have gradually become the mainstream development in the field of AI. Our FoxBrain model adopted a very efficient training strategy, focusing on optimizing the training process rather than blindly accumulating computing power," said Dr. Yung-Hui Li, Director of the Artificial Intelligence Research Center at Hon Hai Research Institute. "Through carefully designed training methods and resource optimization, we have successfully built a local AI model with powerful reasoning capabilities."

The FoxBrain training process was powered by 120 NVIDIA H100 GPUs, scaled with NVIDIA Quantum-2 InfiniBand networking, and finished in about four weeks. Compared with inference models recently launched in the market, this more efficient, lower-cost model training method sets a new milestone for the development of Taiwan's AI technology.
FoxBrain is based on the Meta Llama 3.1 architecture with 70B parameters. In most categories of the TMMLU+ test dataset, it outperforms Llama-3-Taiwan-70B of the same scale, particularly excelling in mathematics and logical reasoning (for the TMMLU+ benchmark of FoxBrain, please refer to Fig. 1).

The following are the technical specifications and training strategies for FoxBrain:
- Established data augmentation methods and quality assessment for 24 topic categories through proprietary technology, generating 98B tokens of high-quality pre-training data for Traditional Chinese
- Context window length: 128K tokens
- Utilized 120 NVIDIA H100 GPUs for training, with a total computational cost of 2,688 GPU days
- Employed a multi-node parallel training architecture to ensure high performance and stability
- Used a unique Adaptive Reasoning Reflection technique to train the model in autonomous reasoning

Fig. 1: TMMLU+ benchmark results of FoxBrain, Meta-Llama-3.1-70B and Taiwan-Llama-70B

In test results, FoxBrain showed comprehensive improvements in mathematics compared to the base Meta Llama 3.1 model. It achieved significant progress in mathematical tests compared to Taiwan Llama, currently the best Traditional Chinese large model, and surpassed Meta's current models of the same class in mathematical reasoning ability. While there is still a slight gap with DeepSeek's distillation model, its performance is already very close to world-leading standards.

FoxBrain's development – from data collection, cleaning, and augmentation to Continual Pre-Training, Supervised Fine-Tuning, RLAIF, and Adaptive Reasoning Reflection – was accomplished step by step through independent research, ultimately achieving benefits approaching world-class AI models despite limited computational resources. This large language model research demonstrates that Taiwan's technology talent can compete with international counterparts in the AI model field.
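The quoted hardware figures are self-consistent. As a minimal back-of-the-envelope sketch (assuming, for illustration only, that all 120 GPUs run for the entire job), the 2,688 GPU-days translate into a wall-clock training window on the order of the roughly four weeks stated above:

```python
# Back-of-the-envelope check of the FoxBrain training figures quoted above.
# Simplifying assumption: all 120 GPUs are busy for the whole run.

gpu_count = 120      # NVIDIA H100 GPUs used for training
gpu_days = 2688      # total computational cost quoted in the release

wall_clock_days = gpu_days / gpu_count   # 22.4 days
wall_clock_weeks = wall_clock_days / 7   # 3.2 weeks

print(f"{wall_clock_days:.1f} days ≈ {wall_clock_weeks:.1f} weeks")
# → 22.4 days ≈ 3.2 weeks
```

Under this idealized assumption the run takes about 22 days, in line with the "about four weeks" wall-clock figure once normal overheads (checkpointing, restarts, evaluation) are accounted for.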
Although FoxBrain was originally designed for internal Group applications, the Group will continue to collaborate with technology partners to expand FoxBrain's applications, share its open-source information, and promote AI in manufacturing, supply chain management, and intelligent decision-making. During model training, NVIDIA provided support through the Taipei-1 supercomputer and technical consultation, enabling Hon Hai Research Institute to successfully complete the model pre-training with NVIDIA NeMo. FoxBrain will also become an important engine driving the upgrade of Foxconn's three major platforms: Smart Manufacturing, Smart EV, and Smart City.

The results of FoxBrain are scheduled to be shared for the first time at a major conference during the NVIDIA GTC 2025 session "From Open Source to Frontier AI: Build, Customize, and Extend Foundation Models" on March 20.
Past HHTD
HHTD24
HHTD23