
OpenAI hires Intel AI chief Sachin Katti to lead AGI compute infrastructure, driving custom chips and data‑centre strategy.
AI · 12 Nov 2025
OpenAI has hired Sachin Katti, formerly Intel's Chief Technology and Artificial Intelligence Officer, to lead development of its AGI computing infrastructure from the company's engineering teams in San Francisco. His mandate is to scale high-performance systems, reduce dependence on external GPU suppliers, and accelerate custom chip and data-centre designs, building the teams, architecture and deployment pipelines required and working through partner integrations such as Broadcom.
Sachin Katti joined Intel to lead technology and AI strategy, overseeing networking, edge computing and AI system efforts during his tenure. At Intel he managed cross‑discipline teams and roadmaps that balanced silicon, firmware and system deliverables. His move signals a transfer of systems‑level expertise from a leading silicon firm to a research‑first AI lab.
Katti's track record includes building networking stacks, edge-optimized platforms and production AI pipelines. Examples: multi-node networking projects, low-latency edge inference deployments and partnerships with OEMs. He has led initiatives targeting throughput and latency improvements in the single- to multi-millisecond range, which are crucial for large-model serving.
At OpenAI he will be responsible for chip and system architecture, data‑centre integration, and vendor partnerships. Actionable insight: expect an emphasis on end‑to‑end co‑design, where hardware teams are embedded with model engineers to optimize performance and cost per token in production workloads.
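As a rough illustration of the cost-per-token framing, the sketch below combines amortized hardware cost, power draw and sustained throughput; every figure is a hypothetical placeholder, not an OpenAI number.

```python
# Back-of-envelope cost-per-token model (hypothetical numbers, for illustration only).

def cost_per_million_tokens(
    capex_per_server: float,      # purchase price of one inference server (USD)
    amortization_years: float,    # straight-line amortization period
    power_kw: float,              # average server power draw (kW)
    energy_price_kwh: float,      # electricity price (USD per kWh)
    pue: float,                   # data-centre power usage effectiveness
    tokens_per_second: float,     # sustained serving throughput per server
    utilization: float,           # fraction of wall-clock time doing useful work
) -> float:
    hours_per_year = 24 * 365
    hourly_capex = capex_per_server / (amortization_years * hours_per_year)
    hourly_energy = power_kw * pue * energy_price_kwh
    tokens_per_hour = tokens_per_second * 3600 * utilization
    return (hourly_capex + hourly_energy) / tokens_per_hour * 1e6

# Example: a $250k server, 4-year amortization, 10 kW draw at PUE 1.2,
# $0.08/kWh electricity, 20k tokens/s sustained, 60% utilization.
print(f"${cost_per_million_tokens(250_000, 4, 10, 0.08, 1.2, 20_000, 0.6):.2f} per 1M tokens")
```

Even at these placeholder values, the model shows why utilization and sustained throughput dominate the cost side of hardware–model co-design.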
OpenAI’s models demand bespoke hardware to scale cost‑effectively. The hire addresses the strategic need for specialized data‑centre hardware and system engineering to support AGI‑level compute growth. Example: designing racks and interconnects tailored to transformer parallelism reduces overhead and improves utilization.
OpenAI wants to reduce vendor lock‑in by developing custom accelerators. Actionable consideration for industry: expect optimized ISAs and memory subsystems tuned for large sparse models, which can yield 10–30% improvements in throughput versus generalized GPUs for certain workloads.
OpenAI’s announced collaboration with Broadcom targets custom AI chip delivery by 2026. This hire accelerates integration between chip design, switch fabrics and data‑centre systems. Practically, OpenAI can co‑design silicon and interconnects to hit targeted power and latency envelopes.
Priority one is custom accelerators that balance FLOPS, memory bandwidth and sparsity support. Example components: on‑package memory, HBM variants and custom tensor units. Actionable insight: teams will evaluate ops/byte efficiency and aim for tighter model–hardware mapping to reduce inference and training costs.
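To make the ops/byte point concrete, the roofline-style sketch below compares a matmul's arithmetic intensity with a hardware compute-to-bandwidth ratio; the accelerator figures are illustrative assumptions, not a specific product.

```python
# Roofline-style check: is a matmul compute-bound or memory-bound on a given accelerator?
# Hardware numbers are illustrative placeholders, not a specific chip.

def matmul_arithmetic_intensity(m: int, n: int, k: int, bytes_per_elem: int = 2) -> float:
    """FLOPs per byte moved for C[m,n] = A[m,k] @ B[k,n], assuming each operand
    and the output cross the memory interface once (FP16/BF16 by default)."""
    flops = 2 * m * n * k
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)
    return flops / bytes_moved

peak_tflops = 1000            # assumed peak dense tensor throughput (TFLOP/s)
mem_bw_tbps = 4               # assumed HBM bandwidth (TB/s)
ridge_point = peak_tflops / mem_bw_tbps   # FLOPs/byte needed to saturate compute

# Large training matmul vs. small-batch decode matmul.
for name, (m, n, k) in {"training GEMM": (8192, 8192, 8192),
                        "decode GEMM (batch 8)": (8, 8192, 8192)}.items():
    ai = matmul_arithmetic_intensity(m, n, k)
    bound = "compute-bound" if ai >= ridge_point else "memory-bound"
    print(f"{name}: {ai:.1f} FLOPs/byte vs ridge {ridge_point:.0f} -> {bound}")
```

The contrast between the two GEMMs illustrates why inference-heavy workloads push designs toward memory bandwidth and on-package memory rather than raw FLOPS alone.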
Network fabrics and power delivery are critical; AGI racks will require higher bisection bandwidth and liquid cooling in hotspots. Example: tighter RDMA integration and direct GPU‑to‑GPU fabrics to lower software orchestration overhead and improve scaling from single to multi‑rack training runs.
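One way to see why fabric bandwidth governs multi-rack scaling is to estimate the bandwidth-only time for a ring all-reduce of gradients; the model size, worker count and link speeds below are assumptions for illustration.

```python
# Ring all-reduce time estimate for data-parallel gradient sync.
# All figures are illustrative assumptions.

def ring_allreduce_seconds(param_count: float, bytes_per_param: int,
                           workers: int, link_gbytes_per_s: float) -> float:
    """Bandwidth-only estimate: a ring all-reduce moves ~2*(N-1)/N of the
    gradient bytes over each worker's link (latency terms ignored)."""
    grad_bytes = param_count * bytes_per_param
    traffic = 2 * (workers - 1) / workers * grad_bytes
    return traffic / (link_gbytes_per_s * 1e9)

params = 70e9                  # assumed 70B-parameter model, BF16 gradients
for bw in (50, 100, 400):      # GB/s per worker: ~400Gb Ethernet up to faster direct fabrics
    t = ring_allreduce_seconds(params, bytes_per_param=2, workers=512, link_gbytes_per_s=bw)
    print(f"{bw} GB/s link -> ~{t:.2f} s per gradient sync")
```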
Co-design of model-parallelism strategies, orchestration layers and runtime optimizations will be prioritized. Table: technical priorities overview (a minimal sharding sketch follows it).
| Priority | Description | Example |
|---|---|---|
| Accelerator design | Custom cores for tensor ops | Reduced op latency |
| Memory subsystem | High BW, coherent caches | HBM variants |
| Interconnect | Low‑latency fabrics | RDMA over custom switches |
| Cooling & power | Thermal density solutions | Immersion or liquid cooling |
| Runtime software | Scheduler & model parallelism | Adaptive sharding |
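As a minimal illustration of the model-parallelism and adaptive-sharding rows above, the sketch below shards a linear layer's weight matrix column-wise; NumPy arrays stand in for real devices, so only the math is shown.

```python
# Minimal column-parallel linear layer, simulated on one host with NumPy.
# Real systems would place each shard on a separate accelerator and use
# collective ops to gather the outputs; this sketch only shows the math.
import numpy as np

def shard_columns(weight: np.ndarray, num_shards: int) -> list[np.ndarray]:
    """Split a [d_in, d_out] weight matrix into equal column blocks."""
    return np.split(weight, num_shards, axis=1)

def column_parallel_forward(x: np.ndarray, shards: list[np.ndarray]) -> np.ndarray:
    """Each 'device' computes x @ W_i; an all-gather would stitch the results."""
    partial_outputs = [x @ w for w in shards]
    return np.concatenate(partial_outputs, axis=-1)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 1024))          # batch of 4 activations
w = rng.standard_normal((1024, 4096))       # full weight matrix
shards = shard_columns(w, num_shards=4)     # e.g. 4-way tensor parallelism

assert np.allclose(column_parallel_forward(x, shards), x @ w)
print("sharded forward matches the unsharded matmul")
```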
Intel faces a leadership gap in AI systems after Katti's departure; its CEO has reassigned his responsibilities internally. Short-term, Intel may refocus on partnerships and platforms while assessing talent retention. Actionable item for Intel watchers: monitor organizational updates and product roadmap revisions.
Nvidia remains dominant, but OpenAI’s push toward custom silicon could pressure pricing and market share over time. Example scenario: if OpenAI scales in‑house accelerators, third‑party GPU demand could soften for specific large model workloads.
Broadcom becomes a strategic hardware partner for switch and ASIC integration. Table: competitive implications.
| Company | Short‑term effect | Long‑term effect |
|---|---|---|
| Intel | Leadership shuffle | Refocused platform strategy |
| Nvidia | Stable demand | Competitive pressure |
| Broadcom | Increased orders | Key partner for switches |
| OEMs | Design adjustments | New system SKUs |
| Cloud providers | Procurement shifts | Custom hosting offers |
Demand will shift toward higher-density racks, advanced cooling and power distribution. Enterprises should audit power usage effectiveness (PUE) and rack power capacity now. Example: retrofitting existing halls for liquid cooling can take 6–18 months of planning and capex.
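A quick audit sketch along these lines, with placeholder inputs that would come from a real site survey, computes PUE and per-rack power headroom:

```python
# Quick facility audit: PUE and per-rack power headroom.
# Input values are placeholders to be replaced with a real site survey.

total_facility_kw = 2400     # everything the meter sees: IT + cooling + distribution losses
it_load_kw = 1600            # power actually delivered to IT equipment
racks = 80
rack_power_limit_kw = 30     # per-rack budget from PDUs/breakers

pue = total_facility_kw / it_load_kw
avg_kw_per_rack = it_load_kw / racks
headroom_per_rack = rack_power_limit_kw - avg_kw_per_rack

print(f"PUE: {pue:.2f} (total facility power / IT power)")
print(f"Average IT load per rack: {avg_kw_per_rack:.1f} kW")
print(f"Headroom before the {rack_power_limit_kw} kW rack limit: {headroom_per_rack:.1f} kW")
```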
Hyperscalers may partner or offer co‑located custom racks; some will build dedicated pods. Actionable step: cloud providers should define pricing models that reflect differentiated hardware performance and reserved capacity.
Enterprises can expect evolving access models: managed APIs, co‑location or licensed stacks. TCO will depend on utilization; careful workload placement and batching strategies will reduce effective cost per inference.
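To illustrate the utilization and batching levers on effective cost per inference, the sketch below holds a hypothetical hourly server rate fixed and varies both; all figures are assumptions, not vendor pricing.

```python
# Effective cost per inference as utilization and batching change.
# Hourly rate, latencies and batch size are hypothetical.

hourly_server_cost = 12.0      # USD/hour for reserved capacity (assumed)
base_latency_s = 0.25          # latency of a single unbatched request
batch_latency_s = 0.40         # latency of a batched call serving 16 requests
batch_size = 16

for utilization in (0.2, 0.5, 0.8):
    # Requests served per hour, unbatched vs. batched, scaled by utilization.
    unbatched = 3600 / base_latency_s * utilization
    batched = 3600 / batch_latency_s * batch_size * utilization
    print(f"utilization {utilization:.0%}: "
          f"${hourly_server_cost / unbatched:.4f} per request unbatched, "
          f"${hourly_server_cost / batched:.5f} batched")
```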
Risks include fabrication lead times, firmware bugs and integration mismatches. Supply chain constraints for HBM, substrates and advanced nodes can create slip risks. Teams should budget for multi‑quarter validation cycles.
Key milestone: Broadcom‑linked chips targeted by 2026. Shorter milestones include architecture selection, prototype silicon and pilot racks over the next 12–24 months. Watch for public benchmark disclosures and partner announcements.
Deploying AGI‑grade infrastructure raises security, export and governance questions. Actionable recommendation: embed compliance, adversarial testing and access controls from design through deployment.
OpenAI's hire of Sachin Katti positions the organization to accelerate its custom compute and data-centre strategies while prompting industry shifts in hardware sourcing, partnerships and facility design. Next steps include prototype silicon, Broadcom integrations and pilot rack deployments toward a 2026 timeline; stakeholders should watch benchmarks, partner announcements and procurement signals for the clearest indicators of progress.
