Hardware Corner – LLM Hardware News

RTX 5070 Ti Super Rumored With 24GB VRAM – Plus New Details on RTX 5080 Super and 5070 Super for Local LLMs

by Allan Witt | Jun 30, 2025 | LLM Hardware News

Fresh hardware rumors have emerged surrounding NVIDIA’s Blackwell architecture, expanding upon the potential SUPER refresh we discussed in our previous analysis, “Local LLM 24GB and 18GB GPU Options Emerge.” The latest information not only...
PELADN Enters the Strix Halo Arena with YO1 Mini-PC for Local LLM

by Allan Witt | Jun 21, 2025 | LLM Hardware News

The landscape for high-density, on-premise AI hardware is rapidly evolving, driven almost single-handedly by the arrival of AMD’s Ryzen AI 300 “Strix Halo” series. For the enthusiast dedicated to running large language models locally, these APUs represent...
FEVM FA-EX9 Mini PC Benchmarked: 128GB Strix Halo for Local LLM Inference

by Allan Witt | May 28, 2025 | LLM Hardware News

The arrival of AMD’s Ryzen AI MAX+ 395 “Strix Halo” APU has generated considerable interest among local LLM enthusiasts, promising a potent combination of CPU and integrated graphics performance with substantial memory capacity. One of the first...
128GB RAM, Ryzen AI MAX+, $1699 — Bosman Undercuts All Other Local LLM Mini-PCs

by Allan Witt | May 25, 2025 | LLM Hardware News

The landscape for accessible, high-memory hardware tailored for local Large Language Model (LLM) inference is witnessing an intriguing development. A lesser-known manufacturer, Bosman, has unveiled its M5 AI Mini-PC, promising AMD’s potent Ryzen AI MAX+ 395...
Zotac Joins the Local LLM Race with Strix Halo Mini-PC — What You Need to Know

by Allan Witt | May 15, 2025 | LLM Hardware News

The small form factor (SFF) PC landscape for local large language model (LLM) inference is set to gain another contender, as Zotac has signaled its intent to launch the Magnus EA series, reportedly featuring AMD’s Ryzen AI MAX+ 395 “Strix Halo” APU,...
I Analyzed NVIDIA’s RTX PRO 5000 Specs – Here’s What Stands Out for Local LLM Work

by Allan Witt | May 11, 2025 | LLM Hardware News

NVIDIA has officially announced the RTX PRO 5000 48GB, the latest addition to its professional GPU lineup based on the new Blackwell architecture. Arriving on the heels of its more formidable sibling, the RTX PRO 6000 Blackwell, the RTX PRO 5000 carves out a distinct...
Forget Used RTX 3090s – Intel’s 24GB Arc Pro B60 Could Be the Better Buy for Local LLM

by Allan Witt | May 8, 2025 | LLM Hardware News

Intel is poised to expand its professional graphics lineup with the Arc Pro Battlemage series, confirmed for a reveal at Computex. Among the anticipated offerings, one model, the Arc Pro B60, is set to feature a significant 24GB of VRAM. This development is...
LLM Mini PC War Is On: Beelink Enters the Race with $1,800 Strix Halo-Powered GTR9 Pro AI

by Allan Witt | May 7, 2025 | LLM Hardware News

The landscape for compact, high-memory systems capable of local Large Language Model (LLM) inference is steadily expanding, with Beelink now officially announcing its GTR9 Pro AI Mini. This unit joins a growing roster of Mini-PCs built around AMD’s Ryzen AI MAX+...
Local LLM on a Budget? CUDA Deprecation Spells Trouble for DIY AI Rigs Using P40, V100, 1080 Ti

by Allan Witt | May 7, 2025 | LLM Hardware News

Nvidia has officially signaled a significant transition in its CUDA ecosystem, announcing in the CUDA 12.9 Toolkit release notes that the next major toolkit version will cease support for Maxwell, Pascal, and Volta GPU architectures. While these venerable...
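
For readers wondering whether their own cards are affected, here is a minimal sketch (assuming PyTorch with CUDA support is installed) that flags Maxwell (compute 5.x), Pascal (6.x), and Volta (7.0) devices; Turing (7.5) and newer pass through unaffected:

```python
# Minimal sketch (assumes PyTorch with CUDA support): flag GPUs whose
# architectures lose support after the CUDA 12.x series, per the CUDA 12.9
# release notes - Maxwell (5.x), Pascal (6.x), and Volta (exactly 7.0).
import torch

LEGACY = {5: "Maxwell", 6: "Pascal"}  # Volta is compute capability 7.0

for i in range(torch.cuda.device_count()):
    major, minor = torch.cuda.get_device_capability(i)
    name = torch.cuda.get_device_name(i)
    if major in LEGACY or (major, minor) == (7, 0):
        arch = LEGACY.get(major, "Volta")
        print(f"{name} (compute {major}.{minor}, {arch}): dropped by the next major CUDA toolkit")
    else:
        print(f"{name} (compute {major}.{minor}): still supported")
```
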
Running 671B Models Locally? This $20K Chinese LLM Box Does It with 1 GPU, 1TB of RAM, and 20 Tokens/Sec

by Allan Witt | May 2, 2025 | LLM Hardware News

The landscape of local large language model (LLM) inference is often defined by the limitations of GPU VRAM. Enthusiasts meticulously plan multi-GPU setups, hunt for deals on used high-VRAM cards, and carefully select quantization levels to squeeze models onto their...
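
The headline 20 tokens/sec is plausible for a bandwidth-bound mixture-of-experts model, where only the active parameters must be read per generated token. A back-of-envelope sketch, assuming (not stated in the excerpt) roughly 37B active parameters per token and about 4.5 bits per weight after quantization:

```python
# Back-of-envelope decode speed for a bandwidth-bound MoE model:
# tokens/sec ~= memory bandwidth / bytes read per token.
# Assumptions (not from the article): ~37B active parameters per token,
# ~4.5 bits per weight after quantization.
active_params = 37e9
bits_per_weight = 4.5
bytes_per_token = active_params * bits_per_weight / 8  # ~20.8 GB per token

for bw_gbs in (200, 400, 600):  # candidate system memory bandwidths, GB/s
    print(f"{bw_gbs} GB/s -> ~{bw_gbs * 1e9 / bytes_per_token:.1f} tok/s")
```

Under those assumptions, roughly 400 GB/s of sustained system memory bandwidth is enough to land near the quoted 20 tokens/sec.
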
Local LLM 24GB and 18GB GPU Options Emerge: RTX 5070 SUPER and 5080 SUPER Hint at VRAM Gains for Inference

by Allan Witt | Apr 28, 2025 | LLM Hardware News

The latest whispers from the hardware grapevine suggest NVIDIA might be preparing SUPER variants for its RTX 50 series, specifically an RTX 5080 SUPER and an RTX 5070 SUPER. While mid-generation refreshes are standard practice, these rumored SKUs are particularly...
Local LLM Inference Just Got Faster: RTX 5070 Ti With Hynix GDDR7 VRAM Overclocked to 1088 GB/s Bandwidth

by Allan Witt | Apr 27, 2025 | LLM Hardware News

The landscape for local LLM inference hardware has just become more interesting with recent developments in NVIDIA’s memory supply chain. SK Hynix has joined Samsung as a GDDR7 memory supplier for the GeForce RTX 50 series, with initial implementations appearing...
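
The 1088 GB/s figure falls straight out of bus width times per-pin data rate. A quick check, using the RTX 5070 Ti's 256-bit bus with its stock 28 Gbps GDDR7 and the overclocked 34 Gbps rate reported here:

```python
# GDDR bandwidth = (bus width / 8 bits per byte) * effective data rate per pin.
bus_bits = 256      # RTX 5070 Ti memory bus
gbps_stock = 28     # stock GDDR7 effective rate
gbps_oc = 34        # overclocked rate reported here
for rate in (gbps_stock, gbps_oc):
    print(f"{rate} Gbps -> {bus_bits / 8 * rate:.0f} GB/s")
# 28 Gbps -> 896 GB/s (stock); 34 Gbps -> 1088 GB/s (overclocked)
```
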
New Chinese Mini-PC with AI MAX+ 395 (Strix Halo) and 128GB Memory Targets Local LLM Inference

by Allan Witt | Apr 21, 2025 | LLM Hardware News

Chinese manufacturer FEVM has announced the FA-EX9, a compact 2-liter Mini-PC powered by AMD’s Ryzen AI MAX+ 395 “Strix Halo” processor, potentially...
Smarter Local LLMs, Lower VRAM Costs – All Without Sacrificing Quality, Thanks to Google’s New QAT Optimization

by Allan Witt | Apr 19, 2025 | LLM Hardware News

Google has released Quantization-Aware Training (QAT) versions of its Gemma 3 large language models, dramatically reducing the memory required to run these powerful AI systems on consumer hardware while preserving...
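
To see why this matters for consumer hardware, a rough weights-only estimate (the 27B parameter count, Gemma 3's largest size, is used for illustration; KV cache and activations add further overhead):

```python
# Rough weights-only memory estimate: GB ~= parameters (billions) * bits / 8.
# KV cache and activations add overhead on top of these figures.
def weight_gb(params_billion: float, bits: float) -> float:
    return params_billion * bits / 8

for bits, label in [(16, "BF16"), (8, "int8"), (4, "QAT int4")]:
    print(f"Gemma 3 27B @ {label}: ~{weight_gb(27, bits):.1f} GB of weights")
```

At 4 bits the 27B model's weights drop from roughly 54 GB to about 13.5 GB, which is what brings it within reach of a single 16GB or 24GB consumer GPU.
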
Arc GPUs Paired with Open-Source AI Playground Offer Flexible Local AI Setup

by Allan Witt | Apr 17, 2025 | LLM Hardware News

In a significant move for the local LLM inference community, Intel has announced that it’s open sourcing AI Playground, its versatile platform for generative AI that was previously exclusive to Intel hardware. This development comes at a critical time as AMD...
RTX 5060 Ti for Local LLMs: It’s Finally Here – But Is It Available, and Is the Price Still Right?

by Allan Witt | Apr 16, 2025 | LLM Hardware News

The much-anticipated NVIDIA RTX 5060 Ti has finally hit retail shelves, with the 16GB model now available from major retailers like Newegg and Best Buy. Initial pricing has settled between $470 and $570 for most standard models, representing a modest 10-23% premium over...
Dual RTX 5060 Ti: The Ultimate Budget Solution for 32GB VRAM LLM Inference at $858

by Allan Witt | Apr 15, 2025 | LLM Hardware News

NVIDIA has officially unveiled the RTX 5060 Ti with 16GB of GDDR7 memory at $429, positioning it as a compelling option for local LLM enthusiasts. At this price point, the card not only offers excellent standalone value but opens up an even more enticing possibility:...
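
As a sketch of how such a dual-card setup might be driven, here is a hypothetical llama-cpp-python invocation; the model path and the even 50/50 split are illustrative assumptions, not from the article:

```python
# Hypothetical sketch using llama-cpp-python (assumes a CUDA build and a
# GGUF file at ./model-q4.gguf - both the path and the even split are
# illustrative, not from the article).
from llama_cpp import Llama

llm = Llama(
    model_path="./model-q4.gguf",  # hypothetical quantized model
    n_gpu_layers=-1,               # offload every layer to the GPUs
    tensor_split=[0.5, 0.5],       # balance weights across the two 16GB cards
)
out = llm("Q: Why buy two mid-range GPUs?\nA:", max_tokens=32)
print(out["choices"][0]["text"])
```
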
55% More Bandwidth! RTX 5060 Ti Set to Demolish 4060 Ti for Local LLM Performance

by Allan Witt | Apr 15, 2025 | LLM Hardware News

In just two days, NVIDIA is set to launch its RTX 5060 Ti, and recently leaked specs suggest this card could become the go-to option for budget-conscious LLM enthusiasts looking to run impressive models locally. With the rising prices and dwindling availability of...
Llama 4 Scout & Maverick Benchmarks on Mac: How Fast Is Apple’s M3 Ultra with These LLMs?

by Allan Witt | Apr 7, 2025 | LLM Hardware News

The landscape of local large language model (LLM) inference is evolving at a breakneck pace. For enthusiasts building dedicated systems, maximizing performance-per-dollar while navigating the ever-present VRAM ceiling is a constant challenge. Following closely on the...
Running Local LLMs? This 32GB Card Might Be Better Than Your RTX 5090—If You Can Handle the Trade-Offs

by Allan Witt | Apr 7, 2025 | LLM Hardware News

Tenstorrent, the AI and RISC-V compute company helmed by industry veteran Jim Keller, has officially opened pre-orders for its new lineup of Blackhole and Wormhole PCIe add-in cards. Aimed squarely at developers and, potentially, the burgeoning local Large...