https://www.rssboard.org/rss-specification
AnandTech: This channel features the latest computer hardware related articles.
https://www.anandtech.com | en-us | Copyright 2024 AnandTech
https://www.anandtech.com/content/images/rss_logo.png

Western Digital Ships 24TB Red Pro Hard Drive For NASes Anton Shilov

Nowadays, the highest-capacity hard drives are typically aimed at cloud service providers (CSPs) and enterprises, but this does not mean that creative professionals or regular users do not need them. To cater to the demands of these more mainstream customers, Western Digital has started shipments of its Red Pro 24 TB HDDs, which are aimed at high-end NAS use by creative professionals with significant storage requirements.

Western Digital's Red Pro 24 TB hard drives come approximately 20 months after their 22 TB model hit retail in 2022, offering an incremental improvement to WD's highest-capacity NAS and consumer hard drive offering. The drives use conventional magnetic recording (CMR), feature a 7200 RPM spindle speed, are equipped with a 512 MB cache, and use OptiNAND technology to improve reliability as well as optimize performance and power consumption. The HDDs are rated for a media-to-cache transfer rate of up to 287 MB/s, which makes them some of the fastest hard drives around (albeit still a bit slower than CSP and enterprise-oriented HDDs).
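As a back-of-the-envelope illustration (not a figure from Western Digital), that 287 MB/s rating sets a lower bound on how long a full sequential fill of the drive takes; real-world times will be longer, since transfer rates fall toward the inner tracks:

```python
# Rough fill-time estimate for a 24 TB drive at its rated transfer
# rate. Assumes a constant 287 MB/s, which is a best case: actual
# rates drop toward the inner tracks, so a real fill takes longer.

def full_fill_hours(capacity_tb: float, rate_mb_s: float) -> float:
    """Hours to write the entire drive once at a constant rate."""
    capacity_mb = capacity_tb * 1e6  # HDD vendors use decimal units
    return capacity_mb / rate_mb_s / 3600

print(f"{full_fill_hours(24, 287):.1f} hours")  # about 23 hours
```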

Just like other high-end HDDs aimed at network-attached storage, the Red Pro 24 TB hard drives use helium-filled platforms that are very similar to those designed for enterprise drives. Consequently, the Red Pro 24 TB HDDs are equipped with rotational vibration sensors to anticipate and proactively counteract disturbances caused by increased vibration, as well as multi-axis shock sensors to detect subtle shock events and automatically offset them with dynamic fly height technology, ensuring that the heads do not scratch the disks.

What these drives lack compared to WD's Gold and Ultrastar 22 TB and 24 TB drives for enterprises and cloud datacenters is the ArmorCache feature, which provides protection against power loss when the write cache is enabled (WCE mode) and enhances performance when the write cache is disabled (WCD mode).

On the reliability side of matters, Western Digital's Red Pro 24 TB HDDs are designed for 24/7 operation in vibrating environments, such as multi-bay enterprise-grade NAS, and are rated for workloads of up to 550 TB/year as well as up to 600,000 load/unload cycles, which is in line with what Western Digital's WD Gold and Ultrastar hard drives offer.

As for power consumption, the WD Red Pro 24 TB consumes up to 6.4W during read and write operations, up to 3.9W in idle mode, and up to 1.2W in standby/sleep mode.

Western Digital's Red Pro 24 TB (WD240KFGX) HDDs are now shipping to resellers as well as NAS makers, and are slated to be available shortly. Expect these hard drives to be slightly cheaper than the WD Gold 24 TB model.

https://www.anandtech.com/show/21329/western-digital-ships-24tb-hdd Thu, 28 Mar 2024 12:00:00 EDT tag:www.anandtech.com,21329:news
The DeepCool AK620 Digital CPU Cooler Review: Big, Heavy, and Lit E. Fylladitakis

Typical CPU coolers do the job for standard heat management but often fall short when it comes to quiet operation and peak cooling effectiveness. This gap pushes enthusiasts and PC builders towards specialized aftermarket solutions designed for their unique demands. The premium aftermarket cooling niche is fiercely competitive, with brands vying to offer top-notch thermal management solutions.

Today we're shining a light on DeepCool's AK620 Digital cooler, a notable entry in the high-end CPU cooler arena. At first blush, the AK620 Digital stands out from the crowd mostly for its integrated LCD screen. Yet aesthetics aside, underneath the snappy screen is a tower cooler that was first and foremost engineered to exceed the cooling needs of the most powerful mainstream CPUs. And it's a big cooler at that: weighing 1.5 kg and standing 162 mm tall, this is no lightweight heatsink and fan assembly. All of which helps to set it apart in a competitive marketplace.

https://www.anandtech.com/show/21299/the-deepcool-ak620-digital-cpu-cooler-review Thu, 28 Mar 2024 09:00:00 EDT tag:www.anandtech.com,21299:news
HBM Revenue Poised To Cross $10B as SK hynix Predicts First Double-Digit Revenue Share Anton Shilov

Offering some rare insight into the scale of HBM memory sales – and their growth in the face of unprecedented demand from AI accelerator vendors – SK hynix recently disclosed that it expects HBM sales to make up "a double-digit percentage of its DRAM chip sales" this year. If that comes to pass, it would represent a significant jump in sales for the high-bandwidth, high-priced memory.

As first reported by Reuters, SK hynix CEO Kwak Noh-Jung has commented that he expects HBM sales to constitute a double-digit percentage of the company's DRAM chip sales in 2024. This prediction is corroborated by estimates from TrendForce, which believes that, industry-wide, HBM will account for 20.1% of DRAM revenue in 2024, more than doubling HBM's 8.4% revenue share in 2023.

And while SK hynix does not break down its DRAM revenue by memory type on a regular basis, a bit of extrapolation indicates that they're on track to take in billions in HBM revenue for 2024 – having likely already crossed the billion-dollar mark in 2023. Last year, SK hynix's DRAM revenue was $15.941 billion, according to Statista and TrendForce. So SK hynix only needs 12.5% of its 2024 revenue to come from HBM (assuming flat or positive revenue overall) in order to pass $2 billion in HBM sales. And even this is a low-ball estimate.

Overall, SK hynix currently commands about 50% of the HBM market, having largely split the market with Samsung over the last couple of years. Given that share, and with DRAM industry revenue expected to increase to $84.150 billion in 2024, SK hynix could earn as much as $8.45 billion on HBM in 2024 if TrendForce's estimates prove accurate.
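The math behind both estimates is simple enough to sketch. The inputs below are the figures cited in the text (Statista/TrendForce numbers plus the assumed revenue shares), so this is an illustration of the extrapolation rather than independent data:

```python
# Low-ball 2024 HBM revenue floor for SK hynix: 12.5% of 2023 DRAM
# revenue, assuming flat or positive revenue overall.
sk_dram_rev_2023 = 15.941e9          # SK hynix 2023 DRAM revenue, USD
hbm_low_ball = sk_dram_rev_2023 * 0.125
print(f"low-ball 2024 HBM revenue: ${hbm_low_ball/1e9:.2f}B")   # ~$2B

# Upper estimate: half of the industry-wide HBM pie forecast by
# TrendForce (20.1% of $84.150B in DRAM revenue).
industry_dram_2024 = 84.150e9        # TrendForce 2024 DRAM forecast
hbm_share = 0.201                    # HBM share of DRAM revenue
sk_market_share = 0.50               # SK hynix's approximate HBM share
sk_hbm_2024 = industry_dram_2024 * hbm_share * sk_market_share
print(f"SK hynix 2024 HBM estimate: ${sk_hbm_2024/1e9:.2f}B")   # ~$8.46B
```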

It should be noted that with demand for AI servers at record levels, all three leading DRAM makers are poised to increase their HBM production capacity this year. Most notable here is the nearly-absent Micron, which was the first vendor to start shipping HBM3E memory to NVIDIA earlier this year. So SK hynix's near-majority of the HBM market may slip somewhat this year, though with a growing pie they'll have little reason to complain. Ultimately, if sales of HBM reach $16.9 billion as projected, then all of the memory makers will be enjoying significant HBM revenue growth in the coming months.

Sources: Reuters, TrendForce

https://www.anandtech.com/show/21327/hbm-revenue-poised-to-cross-10b-double-digit-revenue-share-at-sk-hynix Thu, 28 Mar 2024 08:00:00 EDT tag:www.anandtech.com,21327:news
GDDR7 Approaches: Samsung Lists GDDR7 Memory Chips on Its Product Catalog Anton Shilov

Now that JEDEC has published the specification for GDDR7 memory, memory manufacturers are beginning to announce their initial products. The first out of the gate for this generation is Samsung, which has quietly added its GDDR7 products to its official product catalog.

For now, Samsung lists two GDDR7 devices on its website: 16 Gbit chips rated for data transfer rates of up to 28 GT/s, and a faster version running at up to 32 GT/s (which is in line with the initial parts that Samsung announced in mid-2023). The chips feature a 512M x32 organization and come in 266-pin FBGA packaging. The chips are already sampling, so Samsung's customers – GPU vendors, AI inference vendors, network product vendors, and the like – should already have GDDR7 chips in their labs.

The GDDR7 specification promises a maximum per-chip capacity of 64 Gbit (8 GB) and data transfer rates of up to 48 GT/s. Meanwhile, the first generation of GDDR7 chips (as announced so far) will feature a rather moderate capacity of 16 Gbit (2 GB) and a data transfer rate of up to 32 GT/s.
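As a quick sanity check on those per-chip numbers, here is a minimal sketch using the 512M x32 organization and the GT/s ratings listed above (per-chip bandwidth is simply pin count times transfer rate):

```python
# Per-chip capacity and bandwidth for GDDR7 devices as listed above.

def chip_capacity_gbit(words_m: int, width_bits: int) -> float:
    """Capacity from a 'words x width' organization, e.g. 512M x32."""
    return words_m * width_bits / 1024          # Mbit -> Gbit

def chip_bandwidth_gbs(rate_gts: float, width_bits: int) -> float:
    """Bandwidth of one chip: transfers/sec times bus width, in GB/s."""
    return rate_gts * width_bits / 8            # bits -> bytes

print(chip_capacity_gbit(512, 32))   # 16 Gbit (2 GB) per chip
print(chip_bandwidth_gbs(32, 32))    # 128 GB/s for a 32 GT/s chip
print(chip_bandwidth_gbs(48, 32))    # 192 GB/s at the spec's 48 GT/s ceiling
```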

Performance-wise, the first generation of GDDR7 should provide a significant improvement in memory bandwidth over GDDR6 and GDDR6X. However capacity/density improvements will not come until memory manufacturers move to their next generation EUV-based process nodes. As a result, the first GDDR7-based graphics cards are unlikely to sport any memory capacity improvements. Though looking a bit farther down the road, Samsung and SK Hynix have previously told Tom's Hardware that they intend to reach mass production of 24 Gbit GDDR7 chips in 2025.

Otherwise, it is noteworthy that SK Hynix also demonstrated its GDDR7 chips at NVIDIA's GTC last week. So Samsung's competition should be close behind in delivering samples, and eventually mass production memory.

Source: Samsung (via @harukaze5719)

https://www.anandtech.com/show/21326/samsung-lists-gddr7-memory-chips-on-its-website Wed, 27 Mar 2024 15:00:00 EDT tag:www.anandtech.com,21326:news
Report: SK Hynix Mulls Building $4 Billion Advanced Packaging Facility in Indiana Anton Shilov

SK hynix is considering whether to build an advanced packaging facility in Indiana, reports the Wall Street Journal. If the company proceeds with the plan, it intends to invest $4 billion in it and construct one of the world's largest advanced packaging facilities. But to accomplish the project, SK hynix expects it will need help from the U.S. government.

Acknowledging the report but stopping short of confirming the company's plans, a company spokeswoman told the WSJ that SK hynix "is reviewing its advanced chip packaging investment in the U.S., but hasn’t made a final decision yet."

Companies like TSMC and Intel spend billions on advanced packaging facilities, but so far, no company has announced a chip packaging plant worth quite as much as SK hynix's $4 billion. The field of advanced packaging – CoWoS, passive silicon interposers, redistribution layers, die-to-die bonding, and other cutting-edge technologies – has seen an explosion in demand in the last half-decade. As bandwidth advances with traditional organic packaging are largely played out, chip designers have needed to turn to more complex (and difficult to assemble) technologies in order to wire up an ever larger number of signals at ever-higher transfer rates. This has turned advanced packaging into a bottleneck for high-end chip and accelerator production, driving a need for additional packaging facilities.

If SK hynix approves the project, the advanced packaging facility is expected to begin operations in 2028 and could create as many as 1,000 jobs. With an estimated cost of $4 billion, the plant is poised to become one of the largest advanced packaging facilities in the world.

Meanwhile, government backing is thought to be essential for investments of this scale, with potential state and federal tax incentives, according to the report. These incentives form part of a broader initiative to bolster the U.S. semiconductor industry and decrease dependence on memory produced in South Korea.

SK hynix is the world's leading producer of HBM memory, and is one of the key HBM suppliers to NVIDIA. Next generations of HBM memory (including HBM4 and HBM4E) will require even closer collaboration between chip designers, chipmakers, and memory makers. Therefore, packaging HBM in America could be a significant benefit for NVIDIA, AMD, and other U.S. chipmakers.

Investing in the Indiana facility would be a strategic move by SK hynix to enhance its advanced chip packaging capabilities in general, while also demonstrating its dedication to the U.S. semiconductor industry.

https://www.anandtech.com/show/21325/sk-hynix-mulls-building-4-billion-advanced-packaging-facility-in-indiana Tue, 26 Mar 2024 19:00:00 EDT tag:www.anandtech.com,21325:news
Intel Announces Expansion to AI PC Dev Program, Aims to Reach More Software & Hardware Devs Gavin Bonshor

Today, Intel announced that it is looking to progress its AI PC Acceleration Program further by offering various new toolkits and devkits designed for software and hardware AI developers under a new AI PC Developer Program sub-initiative. Originally launched last October, the AI PC Acceleration Program was created to connect hardware vendors with software developers, using Intel's vast resources and experience to develop a broader ecosystem as the world pivots toward AI development.

Intel aims to maximize the potential of AI applications and software and broaden the whole AI-focused PC ecosystem, targeting AI capabilities in 100 million Intel-driven AI PCs by 2025. The AI PC Developer Program aims to simplify the adoption of new AI technologies and frameworks on a larger scale. It provides access to various tools, workflows, AI-deployment frameworks, and developer kits, allowing developers to take advantage of the NPU found within Intel's Meteor Lake Core Ultra series of processors.

It also offers centralized resources like toolkits, documentation, and training to allow developers to fully utilize their software and hardware in tandem with the technologies associated with Meteor Lake (and beyond) to enhance AI and machine learning application performance. Some of these toolkits are already broadly used by developers, including Intel's open-source OpenVINO.

Furthermore, this centralized resource platform is designed to streamline the AI development process, making it more efficient and effective for developers to integrate AI capabilities into their applications. It is designed to play a crucial role in Intel’s strategy to not only advance AI technology but also to make it more user-friendly and adaptable to various real-world applications.

Notably, this is both a software and a hardware play. Intel isn't just looking to court more software developers to utilize their AI resources, but they also want to get independent hardware vendors (IHVs) on board. OEMs and system assemblers are largely already covered under Microsoft's requirements for Windows certification, but Intel wants to get the individual parts vendors involved as well. How can AI be used to improve audio performance? Display performance? Storage performance? That's something that Intel wants to find out.

"We have made great strides with our AI PC Acceleration Program by working with the ecosystem. Today, with the addition of the AI PC Developer Program, we are expanding our reach to go beyond large ISVs and engage with small and medium sized players and aspiring developers," said Carla Rodriguez, Vice President and General Manager of Client Software Ecosystem Enabling. "Our goal is to drive a frictionless experience by offering a broad set of tools, including the new AI-ready Developer Kit."

The Intel AI PC Acceleration Program offers 24/7 access to resources and early reference hardware so that both ISVs and software developers can create and optimize workloads before launching retail components. Developers can join the AI PC Acceleration Program at Intel's official webpage or email AIPCIHV@intel.com for further information.

https://www.anandtech.com/show/21324/intel-announces-expansion-to-ai-pc-acceleration-program-creating-the-ai-pc-ecosystem Tue, 26 Mar 2024 18:00:00 EDT tag:www.anandtech.com,21324:news
Report: China to Pivot from AMD & Intel CPUs To Domestic Chips in Government PCs Anton Shilov

China has initiated a policy shift to eliminate American processors from government computers and servers, reports the Financial Times. The decision aims to gradually phase out processors from AMD and Intel in systems used by China's government agencies, which will mean lower sales for U.S.-based chipmakers and higher sales of China's own CPUs.

The new procurement guidelines, introduced quietly at the end of 2023, mandate that government entities prioritize 'safe and reliable' processors and operating systems in their purchases. This directive is part of a concerted effort to bolster domestic technology and parallels a similar push within state-owned enterprises to embrace technology designed in China.

The list of approved processors and operating systems, published by China's Information Technology Security Evaluation Center, exclusively features Chinese companies. There are 18 approved processors that use a mix of architectures, including x86 and ARM, while the operating systems are based on open-source Linux software. Notably, the list includes chips from Huawei and Phytium, both of which are on the U.S. export blacklist.

This shift towards domestic technology is a cornerstone of China's national strategy for technological autonomy in the military, government, and state sectors. The guidelines provide clear and detailed instructions for exclusively using Chinese processors, marking a significant step in China's quest for self-reliance in technology.

State-owned enterprises have been instructed to complete their transition to domestic CPUs by 2027. Meanwhile, Chinese government entities have to submit quarterly progress reports on their IT system overhauls. Although some foreign technology will still be permitted, the emphasis is clearly on adopting local alternatives.

The move away from foreign hardware is expected to have a measurable impact on American tech companies. China is a major market for both AMD (accounting for 15% of sales last year) and Intel (27% of revenue). Additionally, Microsoft, while not disclosing specific figures, has acknowledged that China accounts for a small percentage of its revenues. And while government sales are only a fraction of overall China sales (as compared to the larger commercial PC business), the Chinese government is by no means a small customer.

Analysts questioned by the Financial Times predict that the transition to domestic processors will advance more swiftly for server processors than for client PCs, due to the less complex software ecosystem needing replacement. They estimate that China will need to invest approximately $91 billion from 2023 to 2027 to overhaul the IT infrastructure in government and adjacent industries.

https://www.anandtech.com/show/21323/china-to-ban-usage-of-amd-and-intel-cpus-in-government-pcs Tue, 26 Mar 2024 16:00:00 EDT tag:www.anandtech.com,21323:news
The DeepCool PX850G 850W PSU Review: Less Than Quiet, More Than Capable E. Fylladitakis

DeepCool is one of the few veterans in the PC power & cooling components field still active today. The Chinese company was founded in 1996 and initially produced only coolers and cooling accessories, but quickly diversified into the PC case and power supply unit (PSU) markets. To this day, DeepCool stays almost entirely focused on PC power & cooling products, with input devices and mousepads being their latest diversification attempt.

Today's review turns the spotlight toward DeepCool's PSUs and, more specifically, the PX850G 850W ATX 3.0 PSU, which is currently their most popular power supply. The PX850G is engineered to balance all-around performance with reliability and cost, all while providing ATX 3.0 compliance. It is based on a highly popular high-output platform but, strangely, DeepCool rates the PX850G for operation only up to 40°C.

https://www.anandtech.com/show/21279/the-deepcool-px850g-850w-psu-review Tue, 26 Mar 2024 09:00:00 EDT tag:www.anandtech.com,21279:news
Construction of $106B SK hynix Mega Fab Site Moving Along, But At Slower Pace Anton Shilov

When a major industry slowdown occurs, big companies tend to slow down their mid-term and long-term capacity-related investments. This is exactly what has happened to SK hynix's Yongin Semiconductor Cluster, a major project announced in April 2021 and valued at $106 billion. While development of the site has been largely completed, only 35% of the initial shell building has been constructed, according to the Korean Ministry of Trade, Industry, and Energy.

"Approximately 35% of Fab 1 has been completed so far and site renovation is in smooth progress," a statement by the Korean Ministry of Trade, Industry, and Energy reads. "By 2046, over KRW 120 trillion ($90 billion today, $106 billion in 2021) in investment will be poured to complete Fabs 1 through 4, and construction of Fab 1's production line will commence in March next year. Once completed, the infrastructure will rank as the world's largest three-story fab."

The new semiconductor fabrication cluster by SK hynix announced almost exactly three years ago is primarily meant to be used to make DRAM for PCs, mobile devices, and servers using advanced extreme ultraviolet lithography (EUV) process technologies. The cluster, located near Yongin, South Korea, is intended to consist of four large fabs situated on a 4.15 million m2 site. With a planned capacity of approximately 800,000 wafer starts per month (WSPMs), it is set to be one of the world's largest semiconductor production hubs.

With that said, SK hynix's construction progress has been slower than the company first projected. The first fab in the complex was originally meant to come online in 2025, with construction starting in the fourth quarter of 2021. However, SK hynix began to cut its capital expenditures in the second half of 2022, and the Yongin Semiconductor Cluster project fell victim to those cuts. To be sure, the site continues to be developed, just at a slower pace – which is why only some 35% of the first fab shell has been built at this point.

If completed as planned in 2021, the first phase of SK hynix Yongin operations would have been a major memory production facility costing $25 billion, equipped with EUV tools, and capable of 200,000-WSPM, according to reports from 2021.

Sources: Korean Ministry of Trade, Industry, and Energy; ComputerBase

https://www.anandtech.com/show/21322/sk-hynix-severely-slowdowns-building-of-106-billion-fab-site Sat, 23 Mar 2024 08:00:00 EDT tag:www.anandtech.com,21322:news
Micron Samples 256 GB DDR5-8800 MCR DIMMs: Massive Modules for Massive Servers Anton Shilov

Micron this week announced that it has begun sampling its 256 GB multiplexer combined rank (MCR) DIMMs, the company's highest-capacity memory modules to date. These brand-new DDR5-based MCRDIMMs are aimed at next-generation servers, particularly those powered by Intel's Xeon Scalable 'Granite Rapids' processors, which are set to support 12 or 24 memory slots per socket. These modules can enable datacenter machines with 3 TB or 6 TB of memory, with the combined ranks allowing for effective data rates of DDR5-8800.
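The headline 3 TB and 6 TB configurations follow directly from the module capacity and Granite Rapids' expected slot counts; a trivial sketch:

```python
# System memory capacity from module size and DIMM slot count.
# Granite Rapids is set to support 12 or 24 slots per socket.

module_gb = 256

def system_capacity_tb(slots: int) -> float:
    """Total capacity in TB when every slot holds one 256 GB module."""
    return module_gb * slots / 1024

for slots in (12, 24):
    print(f"{slots} slots: {system_capacity_tb(slots):.0f} TB")
```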

"We also started sampling our 256 GB MCRDIMM module, which further enhances performance and increases DRAM content per server," said Sanjay Mehrotra, chief executive of Micron, in prepared remarks for the company's earnings call this week.

In addition to announcing sampling of these modules, Micron also demonstrated them at NVIDIA's GTC conference, where server vendors and customers alike are abuzz at building new servers for the next generation of AI accelerators. Our colleagues from Tom's Hardware have managed to grab a couple of pictures of Micron's 256 GB DDR5-8800 MCR DIMMs.


Image Credit: Tom's Hardware

Apparently, Micron's 256 GB DDR5-8800 MCRDIMMs come in two variants: a taller module with 80 DRAM chips distributed across both sides, and a standard-height module using 2Hi stacked packages. Both are based on monolithic 32 Gbit DDR5 ICs and are engineered to cater to different server configurations, with the standard-height MCRDIMM addressing 1U servers. The taller version consumes about 20W of power, which is in line with expectations, as a 128 GB DDR5-8000 RDIMM consumes around 10W in DDR5-4800 mode. We don't have power figures for the version that uses 2Hi packages, though expect it to run a bit hotter and be harder to cool.


Image Credit: Tom's Hardware

Multiplexer Combined Rank (MCR) DIMMs are dual-rank memory modules featuring a specialized buffer that allows both ranks to operate simultaneously. This buffer enables the two physical ranks to operate as though they were separate modules working in parallel, allowing for the concurrent retrieval of 128 bytes of data from both ranks per clock cycle (compared to 64 bytes per cycle for regular memory modules), effectively doubling the performance of a single module. Of course, since the module retains the physical interface of standard DDR5 modules (i.e., 72 bits), the buffer communicates with the host at a very high data transfer rate to pass along that fetched data, exceeding the standard DDR5 specifications and reaching 8800 MT/s in this case.
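A short sketch of the bandwidth arithmetic described above: because the host-facing bus runs at 8800 MT/s, an MCR DIMM delivers roughly double the per-module throughput of a standard DDR5-4800 RDIMM (the 64-bit figure below is the data width, excluding the ECC bits of the 72-bit interface):

```python
# Per-module bandwidth over a standard DDR5 data bus: transfers per
# second times bytes per transfer.

def module_bandwidth_gbs(mt_s: int, data_bits: int = 64) -> float:
    """Bandwidth in GB/s for a module at the given transfer rate."""
    return mt_s * data_bits / 8 / 1000   # MT/s -> GB/s

print(module_bandwidth_gbs(8800))   # 70.4 GB/s per MCR DIMM
print(module_bandwidth_gbs(4800))   # 38.4 GB/s for a DDR5-4800 RDIMM
```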

While MCR DIMMs are slightly more complex than regular RDIMMs, they increase the performance and capacity of the memory subsystem without increasing the number of memory modules involved, which makes it easier to build server motherboards. These modules are poised to play a crucial role in enabling the next generation of servers to handle increasingly demanding applications, particularly in the AI field.

Sources: Tom's Hardware, Micron

https://www.anandtech.com/show/21320/micron-samples-256-gb-ddr58800-mcr-dimms-massive-modules-for-massive-servers Fri, 22 Mar 2024 16:00:00 EDT tag:www.anandtech.com,21320:news
Micron Sells Out Entire HBM3E Supply for 2024, Most of 2025 Anton Shilov

Being the first company to ship HBM3E memory has its perks for Micron, as the company has revealed that it has managed to sell out its entire supply of the advanced high-bandwidth memory for 2024, while most of its 2025 production has been allocated as well. Micron's HBM3E memory (or, as Micron alternatively calls it, HBM3 Gen2) was one of the first to be qualified for NVIDIA's updated H200/GH200 accelerators, so it looks like the DRAM maker will be a key supplier to the green company.

"Our HBM is sold out for calendar 2024, and the overwhelming majority of our 2025 supply has already been allocated," said Sanjay Mehrotra, chief executive of Micron, in prepared remarks for the company's earnings call this week. "We continue to expect HBM bit share equivalent to our overall DRAM bit share sometime in calendar 2025."

Micron's first HBM3E product is an 8-Hi 24 GB stack with a 1024-bit interface, 9.2 GT/s data transfer rate, and a total bandwidth of 1.2 TB/s. NVIDIA's H200 accelerator for artificial intelligence and high-performance computing will use six of these cubes, providing a total of 141 GB of accessible high-bandwidth memory.
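Both headline numbers check out arithmetically; in this sketch, the 141 GB accessible figure is NVIDIA's published H200 spec, slightly under the 144 GB raw total of six 24 GB stacks:

```python
# Per-stack bandwidth and per-accelerator capacity for Micron's
# HBM3E as described above: a 1024-bit interface at 9.2 GT/s.

stack_bus_bits = 1024
rate_gts = 9.2
stack_bw_tbs = stack_bus_bits * rate_gts / 8 / 1000  # bits -> bytes -> TB/s
print(f"per-stack bandwidth: {stack_bw_tbs:.2f} TB/s")  # ~1.18, i.e. "1.2 TB/s"

stacks, stack_gb = 6, 24
print(f"raw capacity: {stacks * stack_gb} GB")  # 144 GB raw; H200 exposes 141 GB
```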

"We are on track to generate several hundred million dollars of revenue from HBM in fiscal 2024 and expect HBM revenues to be accretive to our DRAM and overall gross margins starting in the fiscal third quarter," said Mehrotra.

The company has also begun sampling its 12-Hi 36 GB stacks, which offer 50% more capacity. These KGSDs will ramp in 2025 and will be used for next generations of AI products. Meanwhile, it does not look like NVIDIA's B100 and B200 are going to use 36 GB HBM3E stacks, at least initially.

Demand for artificial intelligence servers set records last year, and it looks like it is going to remain high this year as well. Some analysts believe that NVIDIA's A100 and H100 processors (as well as their various derivatives) commanded as much as 80% of the entire AI processor market in 2023. And while this year NVIDIA will face tougher competition from AMD, AWS, D-Matrix, Intel, Tenstorrent, and other companies on the inference front, it looks like NVIDIA's H200 will still be the processor of choice for AI training, especially for big players like Meta and Microsoft, who already run fleets consisting of hundreds of thousands of NVIDIA accelerators. With that in mind, being a primary supplier of HBM3E for NVIDIA's H200 is a big deal for Micron as it enables it to finally capture a sizeable chunk of the HBM market, which is currently dominated by SK Hynix and Samsung, and where Micron controlled only about 10% as of last year.

Meanwhile, since every DRAM device inside an HBM stack has a wide interface, it is physically bigger than regular DDR4 or DDR5 ICs. As a result, the ramp of HBM3E memory will affect bit supply of commodity DRAMs from Micron, the company said.

"The ramp of HBM production will constrain supply growth in non-HBM products," Mehrotra said. "Industrywide, HBM3E consumes approximately three times the wafer supply as DDR5 to produce a given number of bits in the same technology node."

https://www.anandtech.com/show/21319/micron-sells-out-entire-hbm3e-supply-for-2024-most-of-2025 Fri, 22 Mar 2024 11:00:00 EDT tag:www.anandtech.com,21319:news
NVIDIA's GPU IP Drives into MediaTek's Dimensity Auto SoCs Anton Shilov

MediaTek this week introduced a new lineup of Dimensity Auto Cockpit systems-on-chip, covering the entire market spectrum from entry-level to premium. And while automotive chip announcements are admittedly not normally the most interesting of things, this one is an exception to that rule because of whose graphics IP MediaTek is tapping for the chips: NVIDIA's. This means the upcoming Dimensity Auto Cockpit chips will be the first chips released by a third-party (non-NVIDIA) vendor to be based around NVIDIA's GeForce graphics technology.

NVIDIA's first attempt to license its GPU IP to third parties dates back to 2013, when the company proposed to license its Kepler GPU IP and thus rival Arm and Imagination Technologies – an effort that, at the time, fell flat on its face. But over a decade later, with a fresh effort at hand to license out some of NVIDIA's IP, it seems NVIDIA has finally succeeded. Altogether, MediaTek's new Dimensity Auto Cockpit systems-on-chip will rely on NVIDIA's GPU IP, Drive OS, and CUDA, marking a historic development for both companies.

MediaTek's family of next-generation Dimensity Auto Cockpit processors consists of four distinct systems-on-chip, ranging from the CX-1 for range-topping vehicles down through the CY-1 and CM-1 to the CV-1 for entry-level cars. These are highly-integrated SoCs packing Armv9-A-based general-purpose CPU cores as well as NVIDIA's next-generation graphics processing unit IP. NVIDIA's GPU IP can run AI workloads for driver assistance as well as power the infotainment system, and it fully supports graphics technologies such as real-time ray tracing and DLSS 3 image upscaling.

The Dimensity Auto Cockpit processors are monolithic SoCs with built-in multi-camera HDR ISP, according to HardwareLuxx. This ISP supports front-facing, in-cabin, and bird's-eye-view cameras for a variety of safety applications. Additionally, these processors feature an audio DSP that supports various voice assistants.

The announcement from MediaTek does not disclose which generation of NVIDIA's graphics IP they're adopting – only that it's a "next-gen" design. Given the certification requirements involved, automotive SoC announcements tend to be rather conservative, so it remains to be seen just how "next gen" this graphics IP will actually be compared to the current generation Ada Lovelace architecture.

The new MediaTek SoCs will be fully supported by NVIDIA's Drive OS, which is widely used by automakers already. This will allow automakers to unify their software stack and use the same set of software for all of their cars powered by MediaTek's Dimensity. Furthermore, since NVIDIA's Drive OS fully supports CUDA, TensorRT, and Nsight, MediaTek's Dimensity SoCs will be able to take advantage of AI applications developed for the green company's platform.

“Generative AI is transforming the automotive industry in the same way that it has revolutionized the mobile market with more personalized and intuitive computing experiences,” said Jerry Yu, Corporate Senior Vice President and General Manager of MediaTek’s CCM Business Group. “The Dimensity Auto Cockpit portfolio will unleash a new wave of AI-powered entertainment in vehicles, and our unified hardware and software platform makes it easy for automakers to scale AI capabilities across their entire lineup.”

Without a doubt, licensing graphics IP and platform IP to a third party marks a milestone for NVIDIA in general, as well as its automotive efforts in particular. Leveraging DriveOS and CUDA beyond NVIDIA's own hardware platform is a big deal for a business unit that NVIDIA has long considered poised for significant growth, but has faced stiff competition and a slow adoption rate thanks to conservative automakers. Meanwhile, what remains to be seen is how MediaTek's new Dimensity Auto Cockpit processors will stack up against NVIDIA's own previously announced Thor SoC and associated DRIVE Thor platform, which integrates a Blackwell-based GPU delivering 800 TFLOPS of 8-bit floating point AI performance.

]]>
https://www.anandtech.com/show/21318/nvidias-gpu-ip-drives-into-mediateks-dimension-auto-socs Thu, 21 Mar 2024 17:00:00 EDT tag:www.anandtech.com,21318:news
AMD Announces FSR 3.1: Seriously Improved Upscaling Quality Anton Shilov

AMD's FidelityFX Super Resolution 3 technology package introduced a plethora of enhancements to the FSR technology on Radeon RX 6000 and 7000-series graphics cards last September. But perfection has no limits, so this week, the company is rolling out its FSR 3.1 technology, which improves upscaling quality, decouples frame generation from AMD's upscaling, and makes it easier for developers to work with FSR.

Arguably, FSR 3.1's primary enhancement is its improved temporal upscaling image quality: compared to FSR 2.2, the image flickers less at rest and no longer ghosts in motion. This is a significant improvement, as flickering and ghosting artifacts are particularly distracting. Meanwhile, FSR 3.1 has to be implemented by game developers themselves, and the first title slated to support the new technology, sometime later this year, is Ratchet & Clank: Rift Apart.

[Image comparison: Temporal Stability, AMD FSR 2.2 vs. AMD FSR 3.1]

[Image comparison: Ghosting Reduction, AMD FSR 2.2 vs. AMD FSR 3.1]

Another significant development brought by FSR 3.1 is the decoupling of the Frame Generation feature introduced with FSR 3. This capability relies on a form of AMD's Fluid Motion Frames (AFMF) optical flow interpolation: it uses temporal game data, such as motion vectors, to insert an additional frame between existing ones, which can boost performance by up to two times in compatible games. Initially, however, frame generation was tied to FSR 3 upscaling, which was a notable limitation. Starting with FSR 3.1, it will work with other upscaling methods as well, though AMD refrains from saying which methods and on which hardware for now. The company also does not disclose when game developers are expected to implement it.

In addition, AMD is bringing support for FSR3 to Vulkan and Xbox Game Development Kit, enabling game developers on these platforms to use it. It also adds FSR 3.1 to the FidelityFX API, which simplifies debugging and enables forward compatibility with updated versions of FSR. 

Upon its release in September 2023, AMD FSR 3 was initially supported by two titles, Forspoken and Immortals of Aveum, with ten more games poised to join them. Fast forward six months, and the lineup has expanded to an impressive roster of 40 games that either currently support or are set to incorporate FSR 3 shortly. As of March 2024, FSR 3 is supported by games like Avatar: Frontiers of Pandora, Starfield, and The Last of Us Part I, while Cyberpunk 2077, Dying Light 2 Stay Human, Frostpunk 2, and Ratchet & Clank: Rift Apart will add support shortly.

Source: AMD

]]>
https://www.anandtech.com/show/21317/amd-announces-fsr-31-seriously-improved-upscaling-quality Thu, 21 Mar 2024 10:00:00 EDT tag:www.anandtech.com,21317:news
Ultra Ethernet Consortium Grows to 55 Members, Reveals Some Details on Upcoming HPC Backbone Tech Anton Shilov

The Ultra Ethernet Consortium (UEC) has announced this week that the next-generation interconnection consortium has grown to 55 members. And as the group works towards developing the initial version of their ultra-fast Ethernet standard, they have released some of the first technical details on the upcoming standard.

Formed in the summer of 2023, the UEC aims to develop a new interconnect standard for AI and HPC datacenter needs, serving as a de facto (if not de jure) alternative to InfiniBand, which is largely under the control of NVIDIA these days. The UEC began to accept new members back in November, and in just five months' time it has gained 45 new members, highlighting massive interest in the new technology. The consortium now boasts 55 members and 715 industry experts, who are working across eight technical groups.

There is a lot of work at hand for the UEC, as the group has laid out in its latest development blog post: the consortium aims to build a unified Ethernet-based communication stack for high-performance networking that supports artificial intelligence and high-performance computing clusters. The consortium's technical objectives include developing specifications, APIs, and source code for Ultra Ethernet communications, updating existing protocols, and introducing new mechanisms for telemetry, signaling, security, and congestion management. In particular, Ultra Ethernet introduces the UEC Transport (UET) for higher network utilization and lower tail latency to speed up RDMA (Remote Direct Memory Access) operation over Ethernet. Key features include multi-path packet spraying, flexible ordering, and advanced congestion control, ensuring efficient and reliable data transfer.

These enhancements are designed to address the needs of large AI and HPC clusters — with separate profiles for each type of deployment — though everything is done in a surgical manner to enhance the technology, but reuse as much of the existing Ethernet as possible to maintain cost efficiency and interoperability.

The consortium's founding members include AMD, Arista, Broadcom, Cisco, Eviden (an Atos Business), HPE, Intel, Meta, and Microsoft. After the Ultra Ethernet Consortium (UEC) began to accept new members in October, 2023, numerous industry heavyweights have joined the group, including Baidu, Dell, Huawei, IBM, Nokia, Lenovo, Supermicro, and Tencent.

The consortium currently plans to release the initial 1.0 version of the UEC specification publicly sometime in the third quarter of 2024.

"There was always a recognition that UEC was meeting a need in the industry," said J Metz, Chair of the UEC Steering Committee. "There is a strong desire to have an open, accessible, Ethernet-based network specifically designed to accommodate AI and HPC workload requirements. This level of involvement is encouraging; it helps us achieve the goal of broad interoperability and stability."

While it is evident that the Ultra Ethernet Consortium is gaining support across the industry, it is still unclear where other industry behemoths like AWS and Google stand. While the hardware companies involved can design Ultra Ethernet support into their hardware and systems, the technology ultimately exists to serve large datacenter and HPC system operators. So it will be interesting to see what interest they take in (and how quickly they adopt) the nascent Ethernet backbone technology once hardware incorporating it is ready.

]]>
https://www.anandtech.com/show/21315/grows-to-55-members-reveals-details-on-upcoming-networking-tech Thu, 21 Mar 2024 09:00:00 EDT tag:www.anandtech.com,21315:news
Qualcomm Intros Snapdragon 7+ Gen 3: Pushing GenAI Into Premium Smartphones Ryan Smith

Proving the adage “ask, and you shall receive”, Qualcomm is back this week for a second Snapdragon SoC announcement for mobile phones. This time, the company is announcing the Snapdragon 7+ Gen 3, the latest-generation member of their relatively new Snapdragon 7+ lineup of SoCs. Like its predecessor, the Snapdragon 7+ Gen 2, the Gen 3 is aimed at the premium segment of smartphones, offering high-end features with more modest performance and costs – but still a feature set and level of performance ahead of “mid-tier” smartphone SoCs. And, with Monday’s launch of the Snapdragon 8s Gen 3, this is a segment that has been bifurcated into two lines of SKUs over at Qualcomm.

]]>
https://www.anandtech.com/show/21316/qualcomm-intros-snapdragon-7-gen-3 Thu, 21 Mar 2024 06:30:00 EDT tag:www.anandtech.com,21316:news
Intel to Receive $8.5B in CHIPS Act Funding & Further Loans To Build US Fabs Anton Shilov

Intel and the United States Department of Commerce announced on Wednesday that they had inked a preliminary agreement under which Intel will receive $8.5 billion in direct funding under the CHIPS and Science Act. Furthermore, Intel is being made eligible for $11 billion in low-interest loans under the same law, and is being given access to a 25% investment tax credit on up to $100 billion of capital expenditures over the next five years. The funds from the long-awaited announcement will be used to expand existing and build new Intel semiconductor manufacturing plants in Arizona, New Mexico, Ohio, and Oregon, potentially creating up to 30,000 jobs.

"Today is a defining moment for the U.S. and Intel as we work to power the next great chapter of American semiconductor innovation," said Intel CEO Pat Gelsinger. "AI is supercharging the digital revolution and everything digital needs semiconductors. CHIPS Act support will help to ensure that Intel and the U.S. stay at the forefront of the AI era as we build a resilient and sustainable semiconductor supply chain to power our nation's future."

Intel is working on several important projects, including new semiconductor production facilities and advanced packaging facilities. On the fab front, there are three ongoing projects: 

  • Firstly, Intel is expanding its chip production capacities in Arizona — the Silicon Desert campus — by constructing two additional fab modules capable of making chips on Intel 18A and 20A production technologies at a projected cost of around $20 billion. 
  • Secondly, the company is building its all-new Silicon Heartland campus in Licking County, near Columbus, Ohio. This extensive project is anticipated to require a total investment of $100 billion or more when fully developed, with an initial investment of around $20 billion for the first two fabrication modules, which are set to be completed in 2027 – 2028. 
  • Thirdly, Intel is expanding and upgrading its chip production, research, and development capabilities in its Silicon Forest campus near Hillsboro, Oregon. In particular, the company recently began installing a $380 million High-NA EUV tool in its D1X fab in Oregon.

On the advanced packaging front, Intel is about to complete the conversion of two of its fabs at its Silicon Mesa campus in New Mexico into advanced packaging facilities. These facilities will be crucial for building next-generation multi-chiplet processors for client, data center, and AI applications in the coming years, and together they will form the largest advanced packaging operation in the US. With this capacity already in place, New Mexico is set to concentrate vast advanced packaging capabilities to support Intel's ramp of leading-edge fabs in Arizona, Ohio, and Oregon.

To receive both the $8.5 billion in direct funding and the $11 billion in low-interest, long-term loans, Intel must comply with the terms set in a so-called preliminary memorandum of terms (PMT). The PMT specifies that the direct funding and federal loans will only be provided after thorough review and negotiation of detailed agreements. These financial awards also depend on meeting specific milestone goals, which are not public, but are thought to include terms concerning investments, timing, and workforce development. Finally, all of this funding is subject to the availability of remaining CHIPS Act funds.

On top of this direct financial assistance, if Intel meets the U.S. government's requirements, it can also access a 25% tax credit on up to $100 billion of qualified capital expenditures over the next five years. This will make Intel's CapEx – the most expensive part of building and outfitting a chip fab – 'cheaper' for the company and stimulate it to invest in the U.S.
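To put the components of the package in perspective, here is a back-of-the-envelope tally (my own arithmetic, not Intel or Commerce Department guidance) of what each incentive could be worth if fully utilized:

```python
# Rough illustration of the scale of the CHIPS incentives for Intel,
# assuming the tax credit is applied to the full $100B of qualified capex.

direct_funding_b = 8.5    # direct CHIPS Act funding, $B
loans_b = 11.0            # low-interest federal loans, $B
capex_cap_b = 100.0       # qualified capex eligible for the credit, $B
tax_credit_rate = 0.25    # 25% investment tax credit

credit_value_b = capex_cap_b * tax_credit_rate
total_support_b = direct_funding_b + loans_b + credit_value_b

print(f"Max tax credit value: ${credit_value_b:.1f}B")   # $25.0B
print(f"Combined support:     ${total_support_b:.1f}B")  # $44.5B
```

Note that the loans are repayable and the credit only offsets taxes on money Intel actually spends, so the combined figure is an upper bound on the incentive value, not a cash grant.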

"With this agreement, we are helping to incentivize over $100 billion in investments from Intel – marking one of the largest investments ever in U.S. semiconductor manufacturing, which will create over 30,000 good-paying jobs and ignite the next generation of innovation," said U.S. Secretary of Commerce Gina Raimondo. "This announcement is the culmination of years of work by President Biden and bipartisan efforts in Congress to ensure that the leading-edge chips we need to secure our economic and national security are made in the U.S."

]]>
https://www.anandtech.com/show/21314/intel-to-get-85-billion-from-us-govt-to-build-fabs-in-the-us Wed, 20 Mar 2024 16:45:00 EDT tag:www.anandtech.com,21314:news
SK hynix Platinum P51 Gen5 SSD with 238L NAND Spotted at GTC Ganesh T S

SK hynix is set to unveil their first Gen5 consumer NVMe SSD lineup shortly, based on the products on display in their GTC 2024 booth. The Platinum P51 M.2 2280 NVMe SSD will take over flagship duties from the Platinum P41, which has been serving the market for more than a year.

Similar to the Gold P31 and the Platinum P41, the Platinum P51 also uses an in-house SSD controller. The key updates are the move to PCIe Gen5 and the use of SK hynix's 238L TLC NAND. Other details are scarce, and we have reached out for additional information.

SK hynix Platinum P51 Gen5 NVMe SSD Specifications
Capacity: 500 GB | 1 TB | 2 TB
Controller: SK hynix in-house (Alistar)
NAND Flash: SK hynix 238L 3D TLC NAND at ?? MT/s ('4D' with CMOS circuitry under the NAND, per SK hynix marketing)
Form Factor / Interface: M.2-2280, PCIe 5.0 x4, NVMe 2.0
Sequential Read: 13500 MB/s
Sequential Write: 11500 MB/s
Random Read IOPS: TBD
Random Write IOPS: TBD
SLC Caching: Yes
TCG Opal Encryption: TBD
Warranty: TBD
Write Endurance: TBD (all capacities)

Only the peak sequential access numbers were available at the GTC booth, indicating that the drive's firmware is still undergoing tweaks. It is also unclear how these numbers are going to vary based on capacity. Availability and pricing are also not public yet.
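As a quick sanity check on the quoted sequential read figure (my own arithmetic, not SK hynix data), the 13,500 MB/s number fits comfortably within the theoretical ceiling of a PCIe 5.0 x4 link:

```python
# Theoretical bandwidth of a PCIe 5.0 x4 link vs. the P51's quoted
# 13,500 MB/s sequential read.

LANE_RATE_GTS = 32     # PCIe 5.0 raw signaling rate per lane, GT/s
LANES = 4
ENCODING = 128 / 130   # 128b/130b line encoding overhead

link_gbits = LANE_RATE_GTS * LANES * ENCODING  # usable gigabits/s
link_gbytes = link_gbits / 8                   # gigabytes/s

print(f"PCIe 5.0 x4 ceiling: {link_gbytes:.2f} GB/s")  # 15.75 GB/s
print(f"P51 seq. read:       13.50 GB/s "
      f"({13.5 / link_gbytes:.0%} of link ceiling)")   # 86% of link ceiling
```

This ignores packet and protocol overhead, so the real usable ceiling is a bit lower, which suggests the drive is fairly close to saturating the interface.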

This is a significant launch for the Gen5 consumer SSD market, where the number of available options is quite limited. The Phison E26 controller and Micron B58R NAND combination is already in its second generation (with the NAND operating at 2400 MT/s in its newest iteration), but other vertically integrated vendors such as Samsung, Western Digital / Kioxia, and (until now) SK hynix have been focusing more on the Gen4 market, which has much higher adoption.

We will update the piece with additional information once the specifications are officially available.

]]>
https://www.anandtech.com/show/21313/sk-hynix-platinum-p51-gen5-ssd-with-238l-nand-spotted-at-gtc Tue, 19 Mar 2024 21:45:00 EDT tag:www.anandtech.com,21313:news
Noctua Launches 145mm Tall Chromax.black NH-D12L CPU Cooler Matthew Connatser

Today, Noctua announced the launch of its NH-D12L chromax.black CPU cooler, an all-black version of the existing NH-D12L. The cooler sports not only a coat of matte black paint, but also a relatively short height of 145mm, which Noctua says makes the NH-D12L suitable for slimmer cases and 4U server racks.

Having launched in 2022, the NH-D12L is essentially a shorter version of the NH-U12A, which stands 158mm tall. While plenty of cases have room for a cooler that tall, not all do (especially small form factor cases). The NH-D12L exists to offer performance similar to the NH-U12A in cases where 145mm would fit but 158mm wouldn’t. However, the NH-D12L has just a single 120mm NF-A12x25 fan, whereas the NH-U12A has two. Additionally, the NH-D12L has five heatpipes to the NH-U12A’s seven. These two factors mean the NH-D12L can’t quite catch up to the NH-U12A when it comes to cooling capacity.

The chromax.black model is practically identical to the original, but features Noctua’s popular black motif. It should perform the same, and its SecuFirm 2 mounting hardware supports the same sockets: AMD’s AM4 and AM5, and Intel’s LGA 1700 and LGA 1851 for upcoming Arrow Lake CPUs. Despite its compact design, the NH-D12L also has “100% RAM compatibility” for sticks with tall heatspreaders, which sometimes pose clearance issues with air coolers.

The NH-D12L chromax.black also comes with the usual Noctua accessories: a screwdriver, NH-T1 thermal paste, and a four-pin low-noise adapter for the NF-A12x25 fan. Additionally, the 120mm fan is mounted to the cooler via a bracket, meaning no screws are necessary and it can be removed or installed without tools.

At $99/€109, the NH-D12L is positioned fairly high in the market, next to larger high-end air coolers such as Corsair’s A115, as well as 240mm to 360mm AIO liquid coolers. However, the NH-D12L holds a substantial advantage in its size and compatibility, and while many of these high-end air coolers are 160mm tall or more, the NH-D12L is just 145mm. In some cases, even 15mm could make a big difference.

]]>
https://www.anandtech.com/show/21312/noctua-launches-145mm-tall-chromaxblack-nhd12l-cpu-cooler Tue, 19 Mar 2024 13:00:00 EDT tag:www.anandtech.com,21312:news
SK Hynix Starts Mass Production of HBM3E: 9.2 GT/s Anton Shilov

SK Hynix said that it had started volume production of its HBM3E memory and would supply it to a customer in late March. The South Korean company is the second DRAM producer to announce mass production of HBM3E, so the market for ultra-high-performance memory will have some competition, which is good for companies that plan to use HBM3E.

According to specifications, SK Hynix's HBM3E known good stack dies (KGSDs) feature data transfer rates up to 9.2 GT/s, a 1024-bit interface, and a bandwidth of 1.18 TB/s, which is massively higher than the 6.4 GT/s and 819 GB/s offered by HBM3. The company does not say whether it mass produces 8Hi 24GB HBM3E memory modules or 12Hi 36GB HBM3E devices, but it will likely begin its HBM3E ramp from lower-capacity products as they are easier to make.
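The quoted per-stack bandwidth follows directly from the interface width and data rate; a quick check of the arithmetic (mine, not SK Hynix's):

```python
# Per-stack bandwidth for HBM known-good stack dies: a 1024-bit interface
# multiplied by the per-pin data rate, converted from bits to bytes.

def stack_bandwidth_tbs(data_rate_gts: float, bus_width_bits: int = 1024) -> float:
    """Bandwidth in TB/s for one HBM stack."""
    return data_rate_gts * bus_width_bits / 8 / 1000  # Gbit/s -> GB/s -> TB/s

print(f"HBM3E: {stack_bandwidth_tbs(9.2):.2f} TB/s")        # 1.18 TB/s
print(f"HBM3:  {stack_bandwidth_tbs(6.4) * 1000:.0f} GB/s")  # 819 GB/s
```

Both results match the article's figures, confirming that the roughly 44% bandwidth uplift over HBM3 comes entirely from the higher per-pin data rate; the interface width is unchanged.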

We already know that SK Hynix's HBM3E stacks employ the company's advanced Mass Reflow Molded Underfill (MR-MUF) technology, which promises to improve heat dissipation by 10%. This technology involves the use of an enhanced underfill between DRAM layers, which not only improves heat dissipation but also reduces the thickness of HBM stacks. As a result, 12-Hi HBM stacks can be constructed that are the same height as 8-Hi modules. However, this does not necessarily imply that the stacks currently in mass production are 12-Hi HBM3E stacks.

Although the memory maker does not officially confirm this, SK Hynix's 24GB HBM3E stacks will arrive just in time to address NVIDIA's Blackwell accelerator family for artificial intelligence and high-performance computing applications.

"With the success story of the HBM business and the strong partnership with customers that it has built for years, SK Hynix will cement its position as the total AI memory provider," said Sungsoo Ryu, Head of HBM business at SK Hynix. As a result, NVIDIA will have access to HBM3E memory from multiple suppliers, with both Micron and SK Hynix shipping it.

Meanwhile, AMD recently confirmed that it was looking forward to expanding its Instinct MI300-series lineup for AI and HPC applications with higher-performance memory configurations, so SK Hynix's HBM3E memory could also be used for this.

]]>
https://www.anandtech.com/show/21311/sk-hynix-starts-mass-production-of-hbm3e-92-gts Tue, 19 Mar 2024 09:30:00 EDT tag:www.anandtech.com,21311:news
NVIDIA's 'cuLitho' Computational Lithography Adopted By TSMC and Synopsys For Production Use Anton Shilov

Last year, NVIDIA introduced its cuLitho software library, which promises to speed up photomask development by up to 40 times. Today, NVIDIA announced a partnership with TSMC and Synopsys to bring its computational lithography platform into production use, running it on the company's next-generation Blackwell GPUs for AI and HPC applications.

The development of photomasks is a crucial step for every chip ever made, and NVIDIA's cuLitho platform, enhanced with new generative AI algorithms, significantly speeds up this process. NVIDIA says computational lithography consumes tens of billions of hours per year on CPUs. By leveraging GPU-accelerated computational lithography, cuLitho substantially improves over traditional CPU-based methods. For example, 350 NVIDIA H100 systems can now replace 40,000 CPU systems, resulting in faster production times, lower costs, and reduced space and power requirements.

NVIDIA claims its new generative AI algorithms provide an additional 2x speedup on the already accelerated processes enabled through cuLitho. This enhancement is particularly beneficial for the optical proximity correction (OPC) process, allowing the creation of near-perfect inverse masks to account for light diffraction.

TSMC says that integrating cuLitho into its workflow has resulted in a 45x speedup of curvilinear flows and an almost 60x improvement in Manhattan-style flows. Curvilinear flows involve mask shapes represented by curves, while Manhattan mask shapes are restricted to horizontal or vertical orientations.

Synopsys, a leading developer of electronic design automation (EDA), says that its Proteus mask synthesis software running on the NVIDIA cuLitho software library has accelerated computational workloads compared to current CPU-based methods. This acceleration is crucial for enabling angstrom-level scaling and reducing turnaround time in chip manufacturing.

The collaboration between NVIDIA, TSMC, and Synopsys represents a significant advancement in semiconductor manufacturing in general and cuLitho adoption in particular. By leveraging accelerated computing and generative AI, the partners are pushing semiconductor scaling possibilities and opening new innovation opportunities in chip designs.

]]>
https://www.anandtech.com/show/21309/nvidias-culitho-gains-support-from-tsmc-and-synopsys Mon, 18 Mar 2024 18:00:00 EDT tag:www.anandtech.com,21309:news
NVIDIA Blackwell Architecture and B200/B100 Accelerators Announced: Going Bigger With Smaller Data Ryan Smith

Already solidly in the driver’s seat of the generative AI accelerator market at this time, NVIDIA has long made it clear that the company isn’t about to slow down and check out the view. Instead, NVIDIA intends to continue iterating along its multi-generational product roadmap for GPUs and accelerators, to leverage its early advantage and stay ahead of its ever-growing coterie of competitors in the accelerator market. So while NVIDIA’s ridiculously popular H100/H200/GH200 series of accelerators are already the hottest ticket in Silicon Valley, it’s already time to talk about the next generation accelerator architecture to feed NVIDIA’s AI ambitions: Blackwell.

]]>
https://www.anandtech.com/show/21310/nvidia-blackwell-architecture-and-b200b100-accelerators-announced-going-bigger-with-smaller-data Mon, 18 Mar 2024 17:00:00 EDT tag:www.anandtech.com,21310:news
The NVIDIA GTC 2024 Keynote Live Blog (Starts at 1:00pm PT/20:00 UTC) Ryan Smith & Gavin Bonshor

We're here in sunny San Jose, California for the return of an event that's been a long time coming: NVIDIA's in-person GTC. The Spring 2024 event, NVIDIA's marquee event for the year, promises to be a big one for NVIDIA, as the company is due to deliver updates on its all-important datacenter accelerator products – the successor to the GH100 GPU and its Hopper architecture – along with NVIDIA's other professional/enterprise hardware, networking gear, and, of course, a slew of software stack updates.

In the 5 years since NVIDIA was last able to hold a Spring GTC in person, a great deal has changed for the company. They're now the third biggest company in the world, thanks to explosive sales growth (and even further growth expectations) due in large part to the combination of GPT-3/4 and other transformer models, and NVIDIA's transformer-optimized H100 accelerator. As a result, NVIDIA is riding high in Silicon Valley, but to keep doing so they also will need to deliver the next big thing to push the envelope on performance, and keep a number of hungry competitors off their turf.

Headlining today's keynote is, of course, NVIDIA CEO Jensen Huang, whose kick-off address has finally outgrown the San Jose Convention Center. As a result, Huang is filling up the local SAP Center arena instead. Suffice it to say, it's a bigger venue for a bigger audience for a much bigger company.

So come join the AnandTech crew for our live blog coverage of NVIDIA's biggest enterprise keynote in years. The presentation kicks off at 1pm Pacific, 4pm Eastern, 20:00 UTC.

]]>
https://www.anandtech.com/show/21308/the-nvidia-gtc-2024-keynote-live-blog-starts-at-100pm-pt2000-utc Mon, 18 Mar 2024 15:00:00 EDT tag:www.anandtech.com,21308:news
StarTech Unveils 15-in-1 Thunderbolt 4/USB4 Dock with Quad Display Support Anton Shilov

StarTech.com has introduced its latest Thunderbolt 4/USB4 docking station, which has a plethora of ports and supports four display outputs. This makes it suitable for 4Kp60 quad-monitor setups often used for professional applications. The Thunderbolt 4 Quad Display Docking Station can also deliver up to 98W of power to the host, which is enough to feed a high-end laptop, such as Apple's MacBook Pro 16.

StarTech's 15-in-1 docking station (132N-TB4USB4DOCK) has pretty much everything that one has come to expect from a dock engineered explicitly for demanding professionals, such as those involved in photography, content creation, video production, and computer-aided design. The unit comes with one Thunderbolt 4/USB 4 port with a 98W power delivery capability to connect to the host, a 2.5 GbE adapter, six USB Type-A ports (three supporting 10 Gbps, two supporting 5 Gbps, and one being USB 2.0 for up to 7.5W charging), one USB Type-C connector (at 10 Gbps), four display outputs (two DP 1.4, two HDMI 2.1), an SD card reader with UHS-II, a microSD card reader with UHS-II, and a 3.5-mm audio jack.

The dock's main selling feature is its support for up to four displays. Of course, this is a valuable capability, but it comes with a couple of catches. The device can support four 4Kp60 displays when connected to a laptop featuring Intel's 12th to 14th Generation Core processors using a Thunderbolt 4 or USB 4 connector and with DSC enabled. With AMD Ryzen 6000 and Intel's 11th Gen Core-based systems, only three 4Kp60 displays are supported. Meanwhile, with MacBooks, users must make do with two 5Kp60 displays or one 6Kp60 display. The good news is that the Thunderbolt 4 Quad Display Docking Station requires no drivers and works seamlessly with macOS, Windows, and ChromeOS.

The docking station has a 180W power supply, so it can charge a laptop and power all of the remaining ports simultaneously.

Thunderbolt 4 and USB 4 docks with rich capabilities are not cheap, as they have to pack loads of quite expensive controllers, and StarTech's 15-in-1 docking station is no exception: it costs $330.99.

The StarTech.com Thunderbolt 4 Quad Display Docking Station is available for purchase directly from the company and through various IT resellers and distributors such as CDW, Amazon, Ingram Micro, TD SYNNEX, and D&H. 

]]>
https://www.anandtech.com/show/21306/startech-unveils-15in1-thunderbolt-4usb4-dock-with-quad-display-support Mon, 18 Mar 2024 12:30:00 EDT tag:www.anandtech.com,21306:news
Qualcomm Announces Snapdragon 8s Gen 3: A Cheaper Chip For Premium Phones Ryan Smith

With the launch of their flagship Snapdragon 8 SoC firmly behind them now, Qualcomm this morning is turning their collective head towards the premium market with the launch of another new Snapdragon 8 family SoC, the Snapdragon 8s Gen 3. The first of Qualcomm’s ‘s’-subseries of down-market parts to be released under the Snapdragon 8 banner, the Snapdragon 8s Gen 3 (8sG3) is intended to be a bridge part between the last-gen flagship 8 Gen 2 and current-gen flagship 8 Gen 3, offering a not-quite-flagship experience at a lower price point than Qualcomm’s top SoC. The new SoC is set to be available globally, with the first phones announced this month, though as is often the case for Qualcomm’s “premium” market SoCs, it looks like only Chinese handset OEMs will be picking up the chip, at least initially.

Although Qualcomm prefers to draw comparisons to their current-gen flagship Snapdragon 8 Gen 3, the Snapdragon 8s Gen 3 is by and large an enhanced version of the Snapdragon 8 Gen 2. Many of the hardware blocks of the 8G2 have been carried over to the new chip – either in whole or in terms of functionality – a process that is made very easy thanks to the fact that Qualcomm is building the chip on the same TSMC 4nm node as the 8G2 and 8G3. Compared to the 8G2, then, there are two key differentiators for the 8sG3: a newer CPU complex lifted from the 8G3, and official on-device generative AI support.

Qualcomm Snapdragon 8 SoCs

SoC: Snapdragon 8 Gen 3 (SM8650) | Snapdragon 8s Gen 3 (SM8635) | Snapdragon 8 Gen 2 (SM8550)

CPU: 1x Cortex-X4 @ 3.3GHz, 3x Cortex-A720 @ 3.2GHz, 2x Cortex-A720 @ 3.0GHz, 2x Cortex-A520 @ 2.3GHz, 12MB sL3 | 1x Cortex-X4 @ 3.0GHz, 4x Cortex-A720 @ 2.8GHz, 3x Cortex-A520 @ 2.0GHz | 1x Cortex-X3 @ 3.2GHz, 2x Cortex-A715 @ 2.8GHz, 2x Cortex-A710 @ 2.8GHz, 4x Cortex-A510 @ 2.0GHz, 8MB sL3

GPU: Adreno (Hardware RT & Global Illum.) | Adreno (Hardware RT) | Adreno (Hardware RT)

DSP / NPU: Hexagon | Hexagon | Hexagon

Memory Controller: 4x 16-bit CH @ 4800MHz LPDDR5X / 76.8GB/s | 4x 16-bit CH @ 4200MHz LPDDR5X / 67.2GB/s | 4x 16-bit CH @ 4200MHz LPDDR5X / 67.2GB/s

ISP/Camera: Triple 18-bit Spectra ISP; 1x 200MP or 108MP with ZSL, or 64+36MP with ZSL, or 3x 36MP with ZSL; 8K HDR video & 64MP burst capture | Triple 18-bit Spectra ISP; 1x 200MP or 108MP with ZSL, or 64+36MP with ZSL, or 3x 36MP with ZSL; 4K HDR video & 64MP burst capture | Triple 18-bit Spectra ISP; 1x 200MP or 108MP with ZSL, or 64+36MP with ZSL, or 3x 36MP with ZSL; 8K HDR video & 64MP burst capture

Encode/Decode: 8K30 / 4K120 10-bit H.265; H.265, VP9, AV1 decoding; Dolby Vision, HDR10+, HDR10, HLG; 720p960 SlowMo | 4K60 10-bit H.265; H.265, VP9, AV1 decoding; Dolby Vision, HDR10+, HDR10, HLG; 1080p240 SlowMo | 8K30 / 4K120 10-bit H.265; H.265, VP9, AV1 decoding; Dolby Vision, HDR10+, HDR10, HLG; 720p960 SlowMo

Integrated Radio: FastConnect 7800, Wi-Fi 7 + BT 5.4, 2x2 MIMO | FastConnect 7800, Wi-Fi 7 + BT 5.4, 2x2 MIMO | FastConnect 7800, Wi-Fi 7 + BT 5.3, 2x2 MIMO

Integrated Modem: X75 integrated, 3GPP Rel 18 (5G NR Sub-6 + mmWave), DL = 10000 Mbps, UL = 3500 Mbps | X70 integrated, 3GPP Rel 17 (5G NR Sub-6 + mmWave), DL = 5000 Mbps, UL = 3500 Mbps | X70 integrated, 3GPP Rel 17 (5G NR Sub-6 + mmWave), DL = 10000 Mbps, UL = 3500 Mbps

Mfc. Process: TSMC 4nm | TSMC 4nm | TSMC 4nm

Starting with the CPU complex, Qualcomm is implementing Arm’s latest generation of Armv9 CPU cores here, meaning a mix of the Cortex-X4, Cortex-A720, and Cortex-A520. Relative to the flagship 8G3, the 8sG3 gives up one of its performance cores for another efficiency core, shifting the design from a 1/5/2 configuration to a 1/4/3 configuration – the same as the 8G2. The 8sG3 also loses some frequency headroom in the process, with the X4 prime core dropping from 3.3GHz to 3.0GHz, and the other CPU cores following similarly along.

Still, the 8sG3 should outperform the 8G2 in CPU tasks, which is the primary reason for replacing the CPU complex at all. Qualcomm is basically looking to offer an 8G2 with better CPU performance and energy efficiency, and using Arm’s latest CPU cores will be how they deliver on that.

Outside of the CPU complex, however, most of the rest of the functional blocks are either lifted directly from the 8G2 or belong to the same generation of IP. This means the 8sG3’s integrated GPU offers hardware ray tracing, for example, but not the global illumination support that was introduced with the flagship 8G3. The memory controller is likewise identical to the 8G2’s, with the SoC supporting a maximum of 24GB of LPDDR5X-8400.
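The bandwidth figures in the spec table above follow from the channel configuration; a quick check of the arithmetic (mine, not Qualcomm's), keeping in mind that the table lists the LPDDR5X clock (e.g. 4200 MHz) while the effective transfer rate is double that:

```python
# Memory bandwidth from channel count, channel width, and transfer rate.
# 4200 MHz clock -> 8400 MT/s (8sG3/8G2); 4800 MHz -> 9600 MT/s (8G3).

def lpddr_bandwidth_gbs(mt_per_s: int, channels: int = 4, ch_width_bits: int = 16) -> float:
    bus_bytes = channels * ch_width_bits / 8  # 4 x 16-bit = 8 bytes wide
    return mt_per_s * bus_bytes / 1000        # MT/s * bytes -> GB/s

print(lpddr_bandwidth_gbs(9600))  # 76.8 (Snapdragon 8 Gen 3)
print(lpddr_bandwidth_gbs(8400))  # 67.2 (8s Gen 3 / 8 Gen 2)
```

Both results match the table, so the 8sG3's roughly 12% bandwidth deficit versus the 8G3 comes purely from the lower memory data rate, not a narrower bus.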

The video recording and decoding capabilities of the 8sG3 are a distinct downgrade from the other Snapdragon 8 SoCs, however. Qualcomm has retained their trio of 18-bit Spectra ISPs – so the SoC can support up to 3 cameras – but all 8K support has been excised entirely. Instead, the 8sG3 can only record video at up to 4K, and even then only at 60fps, half the framerate of the 8G3/8G2. Slow-mo video capture has been altered as well: Qualcomm lists 1080p240 for this mode rather than 720p960. The higher resolution will no doubt be appreciated, but less so if it means it’s not possible to record above 240fps.

The lack of 8K video support also applies to the SoC’s video decode block, which can only decode videos up to 4K in resolution. Qualcomm has otherwise kept all of the underlying features of the video decode block at parity, however, so the 8sG3 gets support for AV1 decoding, along with Dolby Vision HDR.

Meanwhile, the DSP/NPU situation on the 8sG3 is a mixed bag. Officially, this SoC supports generative AI models (up to 10B parameters in size), something the 8G2 and its NPU were not capable of, and is otherwise only available on the 8G3. However, according to Qualcomm this is not the same generation of NPU IP as on the 8G3, and among other things it lacks support for speculative decoding (and I don’t see any mention of the newer NPU’s micro-tile inferencing improvements). So by all appearances, this is just the 8G2 NPU. Still, Qualcomm has at least rolled out some software/firmware updates to improve its functionality, giving it additional AI functionality right as exuberance for that is through the roof.

Finally, the comms side of the 8sG3 is essentially a slower version of the 8G2. Qualcomm is once again using their Snapdragon X70 integrated modem here, a 5G Release 17-generation design that offers 2x2 MIMO on mmWave, and 4x4 MIMO on sub-6GHz. Max upload speeds are unchanged at 3.5Gbps; however, max download speeds for the 8sG3 top out at 5Gbps, half that of the 8G2 (and 8G3). Paired with the X70 modem is Qualcomm’s FastConnect 7800 system, which offers Wi-Fi 7 support with 2x2 MIMO, as well as Bluetooth 5.4. The dual BT antenna feature from the other Snapdragon 8 chips has also made it over for this part.

Overall, the Snapdragon 8s Gen 3 is intended to occupy a very specific niche within Qualcomm’s SoC lineup, offering a cheaper alternative to their flagship SoC without giving up too many features. The marketing messaging behind the chip is made somewhat complicated by the fact that last year at this time Qualcomm launched the Snapdragon 7+ Gen 2 for the premium market, which at least partially overlaps what they’re trying to do with the 8sG3. Nonetheless, Qualcomm insists there’s a market for chips between the Snapdragon 7 series and the flagship Snapdragon 8 SoC, and so here we are.

Absent another 7+ chip this year, it’s hard to see the 8sG3 as anything other than the 7+’s successor. Still, where the 7+ was a souped-up 7, the 8sG3 is clearly a down-market 8, so it has some significant hardware advantages, particularly when it comes to memory bandwidth. It may just be that Qualcomm aimed a bit too low for the premium market with the specs for the 7+, so this is an attempt to aim a bit higher.

In any case, expect to see the Snapdragon 8s Gen 3 picked up by many of the usual Chinese handset OEMs, including Honor, iQOO, realme, Redmi and Xiaomi. The first phones are expected to be announced this month.

]]>
https://www.anandtech.com/show/21307/qualcomm-announces-snapdragon-8s-gen-3 Mon, 18 Mar 2024 02:30:00 EDT tag:www.anandtech.com,21307:news
BIOSTAR Debuts Barebones A620MS mATX Motherboard For Ryzen 7000 Processors Matthew Connatser

BIOSTAR has launched its AM5-based A620MS motherboard today, bringing a new low-end option for PC users on a budget. Though BIOSTAR has not disclosed what MSRP the A620MS motherboard will carry, the specifications of the board make it clear that it targets the lowest-end segment of the market, albeit one that uses the regular A620 chipset rather than the even less expensive A620A chipset.

The A620MS sports some features typical for mATX A620 boards (which make up the vast majority of current models): two DDR5 DIMM slots that support up to two 48GB sticks, an M.2 PCIe 4.0 slot for SSDs, four SATA III ports, and a PCIe Gen4 x16 slot. The motherboard also has four debug LEDs for diagnosing CPU, RAM, GPU, and booting errors.

Meanwhile the rear I/O features a one-gigabit Ethernet port, four USB 3.2 ports, analog audio jacks, two USB 2.0 ports, an HDMI 1.4 port, and a DisplayPort 1.2 output. There are some more fully-featured A620 motherboards available with more ports operating at a higher specification, but this rear I/O is more or less par for the course when it comes to A620.

However, there are other things about BIOSTAR’s A620MS that imply it will be quite low-end for an A620 motherboard. It has just eight total voltage regulator modules (VRMs), which appear to be in a 6+2 or 6+1+1 phase configuration. This isn’t as low-end as BIOSTAR could have gone (ASRock offers a 4+1+1 stage board), but it is still very sparing in VRM stages compared to most other A620 motherboards. These VRMs are also not covered by a heatsink, which is also typical for boards in this segment, as they're normally paired with equally cheap 65W(ish) chips.

BIOSTAR doesn’t list any official CPU restrictions in either its press release or its specification sheet; instead, the company simply lists the motherboard as compatible with Ryzen 7000 and future Ryzen 8000 processors.

While the market for AM5 motherboards includes plenty of B650(E) and X670(E) models, there’s only a handful of A620 boards in total. On Newegg, there are 14 different motherboards available, and many only differ slightly with respect to things like form factor. The cheapest of these cost $75 to $100, and while BIOSTAR didn’t reveal what price we should expect of its A620MS board, given its specifications, we expect it will land in that same $75 to $100 region.

]]>
https://www.anandtech.com/show/21305/biostar-debuts-barebones-a620ms-matx-motherboard-for-ryzen-7000-processors Fri, 15 Mar 2024 16:30:00 EDT tag:www.anandtech.com,21305:news
First DNA Data Storage Specification Released: First Step Towards Commercialization Anton Shilov

The DNA Data Storage Alliance introduced its inaugural specifications for DNA-based data storage this week. This specification outlines a method for encoding essential information within a DNA data archive, crucial for developing and commercializing an interoperable storage ecosystem.

DNA data storage uses short strings of deoxyribonucleic acid (DNA) called oligonucleotides (oligos) mixed together without a specific physical ordering scheme. This storage media lacks a dedicated controller and an organizational means to understand the proximity of one media subcomponent to another. DNA storage differs significantly from traditional media like tape, HDD, and SSD, which have fixed structures and controllers that can read and write data from the structured media. DNA's lack of physical structure requires a unique approach to initiate data retrieval, which brings its own peculiarities when it comes to standardization.

To address this, the SNIA DNA Archive Rosetta Stone (DARS) working group, part of the DNA Data Storage Alliance, has developed two specifications, Sector Zero and Sector One, to facilitate the process of starting a DNA archive. 

Sector Zero serves as the starting point, providing minimal details necessary for the archive reader to identify the entity responsible for synthesizing the DNA (e.g., Dell, Microsoft, Twist Bioscience) and the CODEC used for encoding Sector One (e.g., Super Codec, Hyper Codec, Jimbob's Codec). Sector Zero consists of 70 bases: the first 35 bases identify the vendor, and the second 35 bases identify the codec. The information in Sector Zero enables access and decoding of data stored in Sector One. The amount of data stored in Sector Zero is small and fits into a single oligonucleotide.

Sector One expands on this by including a description of the contents, a file table, and parameters required for transferring data to a sequencer. This specification ensures that the main body of the archive is accessible and readable, paving the way for data retrieval. Sector One contains exactly 150 bases and will span multiple oligonucleotides. 
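The two-field Sector Zero layout lends itself to a simple illustration. The sketch below is a hypothetical reader for a Sector Zero oligo, assuming only what the specification states publicly (70 bases, split 35/35 between the vendor and codec identifiers); the function name, error handling, and placeholder data are our own, not part of the DARS spec.

```python
# Hypothetical sketch of splitting a Sector Zero oligo per the DARS spec:
# 70 bases total; the first 35 identify the synthesis vendor, the second
# 35 identify the codec used to encode Sector One.

SECTOR_ZERO_LEN = 70
FIELD_LEN = 35

def parse_sector_zero(oligo: str) -> dict:
    """Split a 70-base Sector Zero oligo into its vendor and codec fields."""
    if len(oligo) != SECTOR_ZERO_LEN:
        raise ValueError(f"Sector Zero must be exactly {SECTOR_ZERO_LEN} bases")
    if set(oligo) - set("ACGT"):
        raise ValueError("oligo may only contain the bases A, C, G, T")
    return {
        "vendor_field": oligo[:FIELD_LEN],   # bases 1-35: vendor identifier
        "codec_field": oligo[FIELD_LEN:],    # bases 36-70: codec identifier
    }

oligo = "ACGT" * 17 + "AC"  # 70 bases of placeholder data
fields = parse_sector_zero(oligo)
print(len(fields["vendor_field"]), len(fields["codec_field"]))  # 35 35
```

In practice an archive reader would look the two fields up in a registry of vendors and codecs before attempting to decode Sector One.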

"A key goal of the DNA Data Storage Alliance is to set and publish specifications and standards that allow an interoperable DNA data storage ecosystem to grow," said Dave Landsman, of the DNA Data Storage Alliance Board of Directors. "With the publishing of the Alliance's first specifications, we take an important step in achieving that goal. Sector Zero and Sector One are now publicly available, allowing companies working in the space to adopt and implement."

The DNA Data Storage Alliance is led by Catalog Technologies, Inc., Quantum Corporation, Twist Bioscience Corporation, and Western Digital (though we are unsure whether Western Digital's NAND or HDD division is responsible for developing the specification). Meanwhile, numerous industry giants, including Microsoft, support the DNA Data Storage Alliance.

Source: SNIA

]]>
https://www.anandtech.com/show/21304/first-dna-data-storage-specification-released-first-step-towards-commercialization Fri, 15 Mar 2024 12:00:00 EDT tag:www.anandtech.com,21304:news
Asus Launches Low-Profile GeForce RTX 3050 6GB: A Tiny Graphics Card for All PCs Anton Shilov

Asus this week has become the latest PC video card manufacturer to announce a sub-75W video card based on NVIDIA's recently-released low-power GeForce RTX 3050 6GB design. And going one step further for small form factor PC owners, Asus has used NVIDIA's low-power GPU configuration to produce a half-height video card that can fit into low-profile systems.

As Asus puts it, the GeForce RTX 3050 LP BRK 6GB GDDR6 is 'big productivity in a small package,' and for a low-profile dual-slot graphics board, it indeed is. The unit has three display outputs, including a DVI-D, HDMI 2.1, and DisplayPort 1.4a with HDCP 2.3 support, which makes the graphics card a viable option both for a dual-display desktop and a home theater PC (Nvidia's GA107 graphics processor supports decoding all popular codecs, including AV1). Furthermore, a DVI-D output enables the card to drive outdated displays, which even over half a decade after DVI-D was retired, still hang around as spare parts. Meanwhile, because the card only consumes around 70W, it does not require any auxiliary PCIe power connectors, which are at times not available in cheap systems from big PC makers.

Underlying this card is the aforementioned GeForce RTX 3050 6 GB, which uses the GA107 GPU with 2304 CUDA cores, paired with 6GB of GDDR6 memory connected to a narrower 96-bit memory bus (down from 128 bits for the full 8GB version). With a lower boost clock of 1470 MHz (1500 MHz in OC mode), the RTX 3050 6GB has reduced computing performance, delivering 6.77 FP32 TFLOPS versus 9.1 FP32 TFLOPS of the full-fledged RTX 3050.
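Those FP32 figures follow directly from the shader count and boost clock, since each CUDA core can retire one fused multiply-add (two FLOPs) per clock. A quick sanity check sketch; note that the 2560-core count for the full 8GB card is not stated above and is assumed here.

```python
# Back-of-the-envelope FP32 throughput: cores * 2 FLOPs (one FMA) * clock.
def fp32_tflops(cuda_cores: int, boost_ghz: float) -> float:
    return cuda_cores * 2 * boost_ghz / 1000  # GFLOPS -> TFLOPS

print(round(fp32_tflops(2304, 1.47), 2))   # RTX 3050 6GB: 6.77 TFLOPS
print(round(fp32_tflops(2560, 1.777), 1))  # full RTX 3050 (assumed specs): 9.1 TFLOPS
```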

As a result, the low-profile GeForce RTX 3050 6 GB is very much an entry-level card, though the low power requirements for such a card are also what make it special. This should be plenty for low-end gaming – beating out integrated GPUs – though suffice it to say, it's not going to compete with high-end, power-hungry cards either.

With its diminutive size, the Asus GeForce RTX 3050 LP BRK 6 GB GDDR6 looks to be a nice candidate for upgrading cheap systems from OEMs as well as fixing outdated PCs. What remains to be seen is how price competitive it is going to be. The graphics board already has one low-profile rival from MSI — which costs $185 — so Asus is not the only vendor competing here.

]]>
https://www.anandtech.com/show/21303/asus-launches-lowprofile-geforce-rtx-3050-6gb-a-tiny-graphics-card-for-all-pcs Fri, 15 Mar 2024 09:00:00 EDT tag:www.anandtech.com,21303:news
Asus Adds Support for 64GB Memory Modules to Intel 600/700 Motherboards Anton Shilov

Asus on Thursday said it has released new versions of UEFI BIOS for DDR5-supporting Intel 600/700-series motherboards that enable support for 64 GB DIMMs. As a result, Asus's latest platforms for Intel's 12th, 13th and 14th Generation Core processors with four DIMM slots can now work with up to 256 GB of DDR5 memory, and motherboards with two DIMM slots can now support up to 128 GB of memory.

To gain support for 256 GB of DDR5 memory using 64 GB unbuffered DIMMs, one needs to download the latest version of UEFI BIOS for one of the Intel 600/700-series motherboards listed at the Asus website.

The list of Asus motherboards with an LGA1700 socket supporting 256 GB of DDR5 memory includes 75 boards based on a variety of Intel's 600 and 700-series chipsets, including Intel Z790, H770, B760, Z690, W680, and Q670. Though taking stock of Asus's larger motherboard offerings, this is still a bit shy of covering all of Asus's LGA1700 motherboards, which is nearly 200 models in total. So 64 GB DIMM support has only come to a fraction of their boards, at least thus far.

Otherwise, it is noteworthy that cutting-edge high-capacity DIMMs, such as 32 GB, 48GB, and 64 GB, are typically not available with the same blistering XMP clockspeeds as some of their lower-capacity counterparts, so equipping an Intel system with 256 GB of memory will come at a cost of peak memory bandwidth, on top of the typical DDR5 2 DIMM Per Channel (2DPC) frequency penalty. In fact, the fastest 48 GB modules currently offered by Corsair and G.Skill (which could be used to build systems with 192 GB of memory) top out at 6600 MT/s and 6800 MT/s, respectively. Meanwhile, for now, there are no Intel XMP 3.0-compatible 64 GB DDR5 modules from these two renowned makers.
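To put those clockspeed trade-offs in bandwidth terms, peak DDR5 throughput is just transfers per second times bus width. A back-of-the-envelope sketch; the derated speed used in the second line is illustrative, not a measured 2DPC figure.

```python
# Peak DDR5 bandwidth: each DIMM has a 64-bit (8-byte) data bus, and desktop
# LGA1700 platforms run two memory channels.
def ddr5_peak_gbs(mts: int, channels: int = 2, bus_bytes: int = 8) -> float:
    return mts * bus_bytes * channels / 1000  # MB/s -> GB/s

print(ddr5_peak_gbs(6800))  # 108.8 GB/s at a 6800 MT/s XMP profile
print(ddr5_peak_gbs(5600))  # 89.6 GB/s at an illustrative derated speed
```

The gap between the two lines is the kind of peak-bandwidth cost a fully-populated, high-capacity configuration can incur.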

Ultimately, the prime market for high-capacity UDIMMs at this time is going to be content creators, data scientists, and other users with workstation-lite workloads that need a quarter-terabyte of RAM, and can justify the cost for the leading-edge DIMMs. Otherwise 16 GB and 32 GB DIMMs are likely to remain the sweet spot for the LGA1700 platform for the rest of its lifecycle.

Finally, it should be noted that Asus is also announcing (or rather, reiterating) support for 64 GB DIMMs on their AM5 motherboards. That said, this support is already baked into that platform and BIOSes, and unlike the Intel boards, a BIOS update is not needed.

]]>
https://www.anandtech.com/show/21302/asus-confirms-support-for-256gb-of-memory-by-intel-600700-motherboards Thu, 14 Mar 2024 16:30:00 EDT tag:www.anandtech.com,21302:news
Western Digital Launches PC SN5000S SSD: Low-Cost Meets High Performance Anton Shilov

Western Digital has introduced its new series of SSDs aimed at mainstream PCs, which combine high performance and low cost. The Western Digital PC SN5000S family of DRAM-less drives uses the company's 3D QLC NAND memory and an in-house-developed platform, so the SSDs promise to be relatively inexpensive. Meanwhile, their sequential read performance reaches 6,000 MB/s.

Western Digital's PC SN5000S drives are based on the company's latest in-house controller, which supports a PCIe 4.0 x4 host interface and BICS6 3D QLC NAND memory. The controller fully supports Western Digital's nCache 4.0 HybridSLC technology with endurance monitoring to ensure decent performance, RSA-3K and SHA-384 encryption, and TCG Opal 2.02 and Pyrite security capabilities.

On the capacity side, Western Digital's PC SN5000S drives will be available in 512 GB, 1 TB, and 2 TB configurations. As for performance, the 2TB PC SN5000S is rated for up to 6,000 MB/s sequential read speed, up to 5,600 MB/s sequential write speed, up to 750,000 random read IOPS, and up to 900,000 random write IOPS. The SSDs will be available in M.2-2230 and M.2-2280 form factors.

Western Digital SN5000S SSD Specifications
Capacity: 512 GB / 1 TB / 2 TB
Controller: Western Digital proprietary controller
NAND Flash: Western Digital / Kioxia BiCS 6 176L 3D QLC NAND
Form-Factor, Interface: Single-Sided M.2-2280, PCIe 4.0 x4, NVMe 2.0
Sequential Read: 6000 MB/s (all capacities)
Sequential Write: 4200 / 5400 / 5600 MB/s (512 GB / 1 TB / 2 TB)
Random Read IOPS: 500K / 750K
Random Write IOPS: 850K / 900K
Peak Power: 6.1 / 6.5 / 6.9 W (512 GB / 1 TB / 2 TB)
SLC Caching: Yes
Security Capabilities: TCG Opal 2.02 and Pyrite
Warranty: 5 years
Write Endurance: 150 / 300 / 600 TBW (512 GB / 1 TB / 2 TB)

When it comes to endurance, Western Digital rates the 2TB PC SN5000S at 600 terabytes written, the 1TB version at 300 TBW, and the 512GB model at 150 TBW, which is significantly lower compared to entry-level SSDs with similar capacities (yet higher compared to WD Green-branded drives).
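One common way to contextualize TBW ratings is drive writes per day (DWPD) over the warranty period. A quick sketch using the ratings above shows that all three capacities work out to roughly the same figure:

```python
# DWPD = total rated writes / (capacity * days in the warranty period).
def dwpd(tbw: float, capacity_tb: float, warranty_years: float = 5) -> float:
    return tbw / (capacity_tb * warranty_years * 365)

for cap_tb, tbw in [(0.512, 150), (1, 300), (2, 600)]:
    print(cap_tb, round(dwpd(tbw, cap_tb), 2))  # each works out to ~0.16 DWPD
```

Around 0.16 DWPD is modest but serviceable for client workloads, which is consistent with these drives being positioned as mainstream OEM parts.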

While the performance of Western Digital's PC SN5000S hardly impresses our avid readers, who tend to look at the highest-end SSDs, the 1TB and 2TB versions offer considerably higher performance than most entry-level drives on the market today. What disappoints is the relatively low endurance of Western Digital's new SSDs compared to entry-level drives from other makers.

Western Digital primarily markets its PC SN5000S solid-state drives for OEMs, where they succeed the company's SN740-series. For PC makers, the drives are fast enough, and perhaps more importantly, they support advanced encryption technologies as well as TCG Opal 2.02 and Pyrite security capabilities, which is crucial for desktops and laptops sold to various U.S. government agencies.

Source: Western Digital

]]>
https://www.anandtech.com/show/21301/western-digital-launches-pc-sn5000s-ssd-lowcost-meets-high-performance Thu, 14 Mar 2024 13:00:00 EDT tag:www.anandtech.com,21301:news
Intel Announces Core i9-14900KS: Raptor Lake-R Hits Up To 6.2 GHz Gavin Bonshor For the last several generations of desktop processors from Intel, the company has released a higher clocked, special-edition SKU under the KS moniker, which the company positions as their no-holds-barred performance part for that generation. For the 14th Generation Core family, Intel is keeping that tradition alive and well with the announcement of the Core i9-14900KS, which has been eagerly anticipated for months and finally unveiled for launch today. The Intel Core i9-14900KS is a special edition processor with P-Core turbo clock speeds of up to 6.2 GHz, which makes it the fastest desktop processor in the world... at least in terms of advertised frequencies it can achieve.

With their latest KS processor, Intel is looking to further push the envelope on what can be achieved with the company's now venerable Raptor Lake 8+16 silicon. With a further 200 MHz increase in clockspeeds at the top end, Intel is looking to deliver unrivaled desktop performance for enthusiasts. At the same time, as this is the 4th iteration of the "flagship" configuration of the RPL 8+16 die, Intel is looking to squeeze out one more speed boost from the Alder/Raptor family in order to go out on a high note before the entire architecture starts to ride off into the sunset later this year. To get there, Intel will need quite a bit of electricity, and $689 of your savings.

]]>
https://www.anandtech.com/show/21298/intel-announces-core-i9-14900ks-raptor-lake-r-hits-up-to-6-2-ghz Thu, 14 Mar 2024 11:00:00 EDT tag:www.anandtech.com,21298:news