Tuesday, December 19, 2023

Sapphire Rapids for AI: a mini-review

Intel often touts the performance of its 4th-generation Xeon Scalable "Sapphire Rapids" for generative AI applications, but there are surprisingly few meaningful benchmarks, even from Intel itself. The official Intel slides are as follows:

Oof. Three ResNets, two DLRM's, and BERT-large. Come on guys, this is 2023 and no one is buying hardware to run ResNet50. Let's try benchmarking some real workloads instead.

Benchmarked Hardware

Xeon Platinum 8461V ($4,491) on Supermicro X13SEI, default power limits, 256 GB JEDEC DDR5-4800

Stable Diffusion 2.1

Everyone's favorite image generator. SD-2.1 runs on a $200 GPU, so perhaps it doesn't make sense to test performance on a $4,500 Xeon, but, as Intel says, every server needs a CPU, so the Xeon is, in some sense, "free". We use an OpenVINO build of SD-2.1, which contains Intel-specific optimizations but, critically, is not quantized, distilled, or otherwise compressed - it should require a comparable FLOP count to the vanilla SD models. Generation is at 512x512 for 20 steps.
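If you want to reproduce this, the setup is roughly as follows - a minimal sketch assuming the optimum-intel OpenVINO export path (the prompt and output path are illustrative):

    from time import perf_counter
    from optimum.intel import OVStableDiffusionPipeline

    # export=True converts the PyTorch weights to OpenVINO IR on first
    # load - no quantization or distillation involved.
    pipe = OVStableDiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-1", export=True
    )

    start = perf_counter()
    image = pipe("a red fox in the snow, photo",
                 height=512, width=512, num_inference_steps=20).images[0]
    # Approximate it/s; includes text encoder and VAE overhead.
    print(f"~{20 / (perf_counter() - start):.2f} it/s")
    image.save("out.png")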


Not too shabby. 6.27 it/s puts us somewhere in the ballpark of an RTX 3060. On the other hand, you could run your inference on an A4000 (which is licensed for datacenter use) and get higher performance for just $1,100, so the Xeon isn't exactly winning on price here.

llama.cpp inference


Inferencing your LLM on a CPU was a bad idea until last week. Microsoft's investments into OpenAI make it almost impossible to compete, price-wise, with gpt-3.5-turbo: since Microsoft is a cloud provider, OpenAI pays deeply discounted rates instead of the 3-10x markup you would have to pay for cloud infrastructure. You can't escape by building your own datacenter, either: without the high occupancy of a cloud datacenter, you end up paying overhead for idle servers. This left 7B-sized models as the only meaningful ones to self-host, but 7B models are so small that you can run them on a $200 GPU, obviating the need for a huge CPU. (Obviously, there are security-related reasons to self-host, but by and large the bulk of LLM applications are not security-sensitive.)

Fortunately for Intel, a medium-sized MoE model with good performance, Mixtral-8x7B, appeared last week. MoE's are unique in that they have the memory footprint of a large model, but the compute (really, bandwidth) requirements of a small model. That sounds like a perfect fit for CPUs, which have tons of memory but limited bandwidth.
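To put numbers on that intuition, here's a back-of-the-envelope sketch - the parameter counts are Mixtral's published figures, while the Q4 byte cost and peak bandwidth are my assumptions:

    total_params  = 46.7e9   # all experts are resident in memory
    active_params = 12.9e9   # ~2 of 8 experts are touched per token
    bytes_per_w   = 0.56     # ~4.5 bits/weight incl. scales (assumed)

    weights_gb   = total_params  * bytes_per_w / 1e9  # ~26 GB: trivial for 256 GB
    gb_per_token = active_params * bytes_per_w / 1e9  # ~7 GB streamed per token

    peak_bw = 8 * 38.4  # 8-channel DDR5-4800, ~307 GB/s theoretical
    print(f"{weights_gb:.0f} GB of weights, ~{peak_bw / gb_per_token:.0f} tok/s ceiling")
    # A dense model of the same size would stream all ~26 GB per token,
    # capping out around 12 tok/s even at full theoretical bandwidth.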

It turns out that llama.cpp, an open-source hobbyist implementation of LLM inference on CPUs, is the fastest CPU implementation. At batch size 1 we get about 18 tokens per second in Q4, which is a perfectly usable result (prompt eval time is poor, but I think that's an MoE limitation in llama.cpp which should be fixed shortly?).
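For the curious, the batch-1 test looks roughly like this through the llama-cpp-python bindings (the GGUF file name is hypothetical):

    from llama_cpp import Llama

    llm = Llama(
        model_path="mixtral-8x7b-instruct.Q4_K_M.gguf",
        n_threads=48,   # one thread per physical core on the 8461V
        n_ctx=2048,
    )
    out = llm("Q: Why do MoE models suit CPUs? A:", max_tokens=128)
    print(out["choices"][0]["text"])
    # llama.cpp reports prompt-eval and eval timings (tokens/s) after the call.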

Falcon-7B LLM fine-tuning with TRL


This is probably the most interesting benchmark. Full fine-tuning of a 7B-parameter LLM needs over 128 GB of memory, requiring either unobtainium 4x A100 or 4x A6000 cloud instances or putting $15K into specialized on-premises hardware that will sit severely underutilized (given that you are unlikely to be fine-tuning all the time). The big Xeon is able to achieve over 200 tokens per second on this benchmark (with numactl -C 0-47 I was able to reach about 240 tokens per second on this particular dataset at batch size 16).

It's worth noting that other 7B LLM's fine-tune more slowly. I don't think this is because Falcon is architecturally different; rather, Falcon's modeling code was somewhat off-brand, using a bespoke implementation of FlashAttention.

200 tokens per second is pretty decent, allowing 3 epochs of fine-tuning on a 10M-token dataset in about a day and a half (30M tokens at 200 tokens/s works out to roughly 42 hours). For reference, openassistant-guanaco, a high-quality subset of the guanaco dataset, is about 5M tokens, and a page of text is about 450 tokens.
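The run itself is unremarkable - a rough sketch using TRL's SFTTrainer (argument names have moved around between TRL versions, so treat this as a starting point, not gospel):

    from datasets import load_dataset
    from transformers import TrainingArguments
    from trl import SFTTrainer

    dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

    trainer = SFTTrainer(
        model="tiiuae/falcon-7b",
        train_dataset=dataset,
        dataset_text_field="text",
        max_seq_length=512,
        args=TrainingArguments(
            output_dir="falcon-7b-sft",
            per_device_train_batch_size=16,  # batch size from the test above
            num_train_epochs=3,
            bf16=True,      # hits the AMX units on Sapphire Rapids
            use_cpu=True,   # called no_cuda on older transformers versions
        ),
    )
    trainer.train()  # run the whole script under numactl -C 0-47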

Conclusions: usable vs useful?

First, without a doubt, the three results presented above are usable. A single-socket Sapphire Rapids machine is able to generate images, run inference on a state-of-the-art LLM, and fine-tune a 7B-parameter LLM on millions of tokens, all at speeds which are unlikely to have you tearing your hair out.

On the other hand, is it useful?

In the datacenter, we could see a case for image generation, which runs briskly on the Xeon and has a light memory footprint. The problem is, the image generation workload uses all 48 cores for three seconds, which is a poor fit for oversubscribed virtualized environments. On a workstation, the thought of having 48 cores but no GPU is ludicrous, and even AMD GPUs are going to tie the Xeon in Stable Diffusion.

The LLM inferencing use case is a bit different, being primarily bound by bandwidth and memory capacity. The dynamics here are dictated entirely by market conditions, not raw performance or technological supremacy. For example, Sapphire Rapids CPUs are available as spot instances from hyperscalers; GPUs are not. Nvidia chooses to charge $100 per GB of VRAM on its datacenter parts, but conversely, Intel seems to think its cores are worth $100 each, even in bulk. Software support for the Xeon is also poor - while llama.cpp is fast, it is primarily a single-user library and has minimal (no?) support for batched serving.

The training benchmark is the most interesting of all, because it is an example of a workload that will not run at all on most GPU instances - and in fact, there are technology limitations behind why GPUs have trouble reaching hundreds of gigabytes of memory. Once again, the Xeon is held back by Intel's high pricing: a 1S Xeon system costs about $6,000, compared to about $15,000 for a 3x RTX A6000 machine that is significantly faster.

Finally, let's take a look at Intel's most important claim: "the CPU is free because you aren't allowed to not have one". Disregarding hyperscalers, which work on a different cost model, a company upgrading to Sapphire Rapids in 2023 is likely still on Skylake. Going from a 20-core Skylake part to a 32-core Sapphire Rapids part represents a 2x boost in general application performance and a ~6x boost in AI performance, except for bandwidth-limited LLM inference, where the gains are closer to 2x. Getting 2x more work done with your datacenter, plus a competitive option to run AI-based analytics or serve AI applications on top of that, is a pretty compelling reason to buy a new server, and for a lot of IT departments that's all you need to make the sale.

Tuesday, April 18, 2023

Assorted video decoding tidbits

Some form of hardware acceleration is all but mandatory for playing 4K video, especially high-bitrate H.265. Nowadays H.265 decoding is commonplace (you need to go all the way back to 2015 to find a CPU or GPU that can't decode HEVC Main10), and presumably the same will hold for AV1 as the world transitions to it.

Hardware accelerated decode in web browsers on Linux

It works! Well, sort of. On my test machine with an i3-12100F and an ancient Polaris 12 (AMD) GPU running the open-source drivers, 1080p H.264 content is properly decoded by UVD, but 4K60 VP9 (the famous '4K Costa Rica' demo clip on YouTube) is not. CPU usage seems a bit high in either case: about half a core for the former and 1-2 cores for the latter.
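Outside the browser, a quick way to sanity-check whether the driver stack can hardware-decode a given clip at all is to use ffmpeg as a decode-only harness - a sketch (the clip path is hypothetical):

    import subprocess, time

    def decode_time(extra_args):
        start = time.time()
        subprocess.run(["ffmpeg", "-v", "error", *extra_args,
                        "-i", "costa-rica-4k60.webm", "-f", "null", "-"],
                       check=True)
        return time.time() - start

    sw = decode_time([])  # pure software decode baseline
    hw = decode_time(["-hwaccel", "vaapi",
                      "-hwaccel_device", "/dev/dri/renderD128"])
    print(f"software: {sw:.1f}s, vaapi: {hw:.1f}s")
    # If the vaapi run silently falls back to software, the two times will
    # match; watch intel_gpu_top or radeontop to confirm the video engine
    # is actually busy.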

Decode on integrated graphics, display connected to discrete graphics

An esoteric use case. I ran into the H.265 version of this (!): I had an M4000 I wanted to use for SolidWorks, and the M4000 does not support HEVC decode, but the iGPU on the i5-12600 it was paired with does.

Unfortunately, this doesn't seem to work. iGPU usage remained zero and the M4000 ran in some sort of weird hybrid decoding mode. Performance, however, was acceptable.

Decode on integrated graphics, multiple GPUs and displays

Can we fix the above by plugging the monitor into the iGPU? The behavior is strange:


Both GPUs are now at 30% usage, but the CPU usage has gone through the roof. Very bad indeed, but possibly fixable with enough effort.

Remarkably, even in this bugged state the Costa Rica clip runs at 60 fps.

VLC + H.265, but the decoding is supposed to be done on the integrated graphics

Unfortunately, the iGPU chooses not to participate, but the hybrid decoding seems to work fine, playing back 4K24 high-bitrate video with about 12% CPU usage. It's worth noting, however, that this is 12% of a 4.8 GHz hex-core Alder Lake, which is like... an entire laptop CPU from not that long ago, or two whole 3 GHz Skylake cores.

Heavy decode on integrated graphics

Surprisingly good. On my Kaby Lake laptop, the CPU is able to keep up with about 25% usage while remaining throttled to ~1.6 GHz on battery - the fixed-function hardware really does the heavy lifting here and keeps the power consumption down.

Hardware accelerated decode, but there are many cores

The test system was an Epyc 7702 with an RTX 3060, by all means close to the state of the art. I didn't expect problems here, and didn't find any; the 3060 ran at heavy usage on the Costa Rica clip and the CPU was basically idle.

It's unclear what the actual CPU usage was; Task Manager lacks the granularity to deal with so many cores since even 1% is almost an entire core.

Tuesday, February 21, 2023

"Phones are getting better": buying a camera in 2023

It's a tough time to be a camera manufacturer. The mighty ISOCELL HP2 now rules the mobile space, sporting 200 million (!) 0.6um pixels binnable as 12.5M 2.4um pixels. Sub-electron read noise, backside illumination, very deep wells, and sophisticated readout schemes allow virtually unlimited dynamic range while not compromising light sensitivity. Practically speaking, the out-of-the-box performance of a state-of-the-art mobile camera greatly exceeds that of any ILC (interchangeable-lens camera) in challenging-but-well-lit conditions: the phone has access to live gyro data for stack alignment, more processing than any ILC could dream of, and is backed by hundreds of millions of dollars of software R&D. It also has access to color science that, no doubt, has been statistically developed to be perceived as "good looking" across a wide demographic of viewers - I'm a firm believer that the best photos are the ones that make other people happy.

We can do some math to see just how screwed ILC's are. The Galaxy S23 Ultra ships with a 23mm f1.7 equivalent lens and a sensor measuring 9.83mm x 7.37mm, for a total sensor area of 72mm2. A full frame sensor measures 864mm2. Total light gathered scales with sensor area and inversely with the square of the f-number, so equivalent f-numbers scale with the square root of the area ratio, and we have the following equivalency:
  • ISOCELL HP2 (1/1.3"): f1.7
  • 4/3": f2.9
  • APS-C: f3.9
  • Full frame: f5.9
At the wide end, ILC's are looking pretty dead: 24mm f5.6 is a reasonable aperture and focal length to shoot at on full frame, and the same performance can be achieved with a phone. There's some argument that the FF sensor has higher native DR, but the phone has what is more or less a hardware implementation of multi-shot HDR, which makes up for the difference. Plus the phone is, you know, a phone, and fits in your pocket.

Astute readers will note that 23mm is awfully wide, and it's true - the effective sensor area of the phone decreases if you want a tighter focal length. Taking a look at a 2x crop (46mm equivalent), the sensor area of the phone drops to a rather shabby 18mm2, so the equivalency is now:
  • 4/3": f5.9
  • APS-C: f7.8
  • Full frame: f11.7
The ILC suddenly looks much more compelling - if you forced me to shoot at 45mm f11 all day I'd abandon photography and take up basket weaving.
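For the skeptical, here's a quick sketch that reproduces both equivalency lists from the sensor areas above:

    # Total light scales with sensor_area / f_number^2, so the
    # full-frame-equivalent f-number scales with sqrt(area ratio).
    PHONE_AREA, FF_AREA = 72.0, 864.0
    CROP = {'4/3"': 2.0, "APS-C": 1.5, "full frame": 1.0}

    ff_equiv = 1.7 * (FF_AREA / PHONE_AREA) ** 0.5  # f5.9 on full frame
    for fmt, crop in CROP.items():
        print(f"{fmt}: f{ff_equiv / crop:.1f}")
    # -> f2.9, f3.9, f5.9, matching the first list; swap in the 18mm2
    # cropped area and you get ~f11.8 on full frame for the 2x crop case.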

This makes shopping a whole lot easier: phones obsolescing the moderate-wide end means that zooms which include 50mm are pretty useless. Suddenly, 50mm primes look real interesting again, especially since we are now spoiled for choice in the 50mm space. 4/3" cameras, which looked pretty dead for a while, also suddenly look viable: subjects you would shoot with a 50mm prime are often DOF-limited, which means the larger sensors can't take advantage of faster apertures.

Things get trickier at longer focal lengths, because you are less likely to be DOF-limited. Wait, what?! Don't telephotos have a shallower depth of field? Well, it turns out that for moderate focus distances, the DOF of a lens is proportional to the f-number and inversely proportional to the square of the magnification. More likely than not, telephoto subjects are large, and since the sensor area is constant in a given camera, the DOF actually increases if you stand far enough away to fit the subject on the sensor.
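A minimal sketch of that relation, using the standard thin-lens approximation (the magnifications are just illustrative):

    # Total DOF is roughly 2*N*c*(m+1)/m^2, with N the f-number, c the
    # circle of confusion, and m the magnification.
    def dof_mm(n, m, coc_mm=0.03):  # 0.03mm is the usual FF convention
        return 2 * n * coc_mm * (m + 1) / m**2

    print(dof_mm(1.8, 0.05))  # torso-ish framing at f1.8: ~45mm of DOF
    print(dof_mm(1.8, 0.01))  # distant, full-body-sized subject: ~1.1m of DOF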

Telephotos provide a compelling argument to buy a full-frame body. Regardless of the sensor format telephotos are going to stay a constant size because they are dominated by their large front elements, and in the types of lighting you might want to use a telephoto lens you are often struggling for light. That premium 35-100/2.8 for 4/3" looks real nice until you remember it is the optical equivalent of a 70-200/5.6 on full frame, a lens so sad that they don't even make one.

Finally, there's image stabilization. I would argue that IS is mandatory for a good experience, especially for new users: for stationary subjects IS gives you something like three stops of improvement (an 8x exposure advantage, roughly the same as a 10x gap in sensor area), allowing stabilized cameras and lenses to beat un-stabilized cameras with sensors ten times the size. The importance of that cannot be emphasized enough: factoring in IS, that 18mm2 of cropped phone sensor can gather as much light as a 180mm2 (nearly 4/3-sized) sensor with an f1.7 lens on it. This unfortunately throws a wrench in many otherwise-sound budget combos: short primes didn't ship with IS until quite recently, and many budget bodies are un-stabilized.

With all that said, here are some buying suggestions:

The $500 "I'm poor" combo

The situation is dire. Long ago, I would have recommended an old 17-50mm f2.8 stabilized zoom from a third party and a used entry-level DSLR. Unfortunately, you'd be insane to tell a new user to buy an entry-level DSLR in 2023 (people expect features like "working live view" and "4K video") and the third-party zooms don't work with most mirrorless cameras. What we really want is a stabilized 40-50mm f2-2.8 equivalent (that's 40-50mm equivalent focal length, f2-2.8 real aperture) on a body that supports 4K video and PDAF, but inexplicably, that combination does not exist, even in the micro-4/3 world (which has had IBIS for a long time).

Consolation prize: any of the 24MP Nikon DSLR's, plus a Sigma 17-50/2.8 OS, used, but I wouldn't recommend it.

The $1000 combo

This used to be a downright reasonable price point, but inflation and feature creep have somewhat diluted it. Fortunately, the long life cycles of Sony cameras help you here: a6500's are regularly available, used, for $600-700, leaving you with $300 for a lens. By some crazy miracle of third-party lenses you can fit two autofocus lenses into $300 - the Rokinon 35/2.8 is cheap and small, and the other can be 'to taste'.

The drawback here is that, E-mount being a newer system, long lenses for Sony tend to be rather inaccessible - but the same can be said of any other mirrorless-only system, and the starter offerings in the Canon/Nikon ecosystems are very poor in comparison. It's also worth noting that there's nothing good available new at this price point.

Recommendation: the most beat-up a6500 you can find, a used Rokinon 35/2.8, and one other lens, or save up and buy a second lens which costs more than $150 :)

The photographer's special: a D800, 50mm 1.8G, and 70-200 VR1. You lose a lot of features (stabilization, eye AF, 4K video, touchscreen) but optically the D800 is as good as they get, and the two lenses will let you take pictures none of your friends can. Highly recommended if you've spent some time behind a camera before - otherwise, the transition from phone to optical viewfinder may be a bit jarring.

The dubious alternative: an a7R ii and Rokinon 45/1.8. The a7R ii checks every box - stabilization, full frame, BSI, 4K video - but still manages to be a poor user experience thanks to its ill-thought-out controls and menus. If you're fine with that, you get unsurpassable (as in, the sensor is limited by the laws of physics) optical performance for $1000.

The $1500 combo

Things start getting a little weird here. The a6500 is a really good camera, and it's nice and compact too. The E-mount ecosystem matured quickly, with a ton of off-brand companies making decent prime lenses at ludicrously good prices. I would argue that if you're content shooting short-to-moderate focal lengths, you are better off staying in the E-mount ecosystem - you can buy an a7iii and a nice starter prime for $1500, then (quickly) build out the system from there.

If you want to shoot long lenses, Sony no longer looks so sweet. The 70-200 options from the big three are comparable: the Sony GM Mark I is a native lens at about $1500, the same price as the adaptable-without-penalties Nikon FL, but optically inferior to it. The EF-mount Mark III is the same price but worse than the FL (and better than the GM); the EF-mount Mark II is $500 less and probably superior to the Nikon VR2. The Nikon VR1 is incredibly cheap for a modern pro lens, but the corners are dubious - a disaster for some people and a non-issue for others.

Above 200mm, Sony is out - the 200-600 is a very good budget 600mm option but pretty pathetic at 300 and 400mm. It's also very expensive: if you can accept an extending barrel, the third-party 150-600 options are $700 cheaper. Between Canon and Nikon, Nikon wins on the budget end (the Z6 was a very usable camera, the original EOS R was not), but Canon just announced some new releases, so we should expect prices to move down across the stack.

Recommendation: a7iii plus your favorite primes (or a7R ii and your favorite primes if you don't shoot video at all)

Recommendation: Z6 Mark I, FTZ, 40mm f2, and 70-200 VR1. This comes out to $1800, and the VR1 recommendation is going to make a lot of people angry, but you'll be getting beautiful images for years to come (or until you drop the extra $1000 on the FL).

"I have money, help me spend it"

I'm a Nikon shooter, so I'll just provide a Nikon kit. This assumes you have plenty of cash, but you aren't interested in wasting it.

Body: Z6 Mark I. The Mark II is not worth the extra cash (unless you really need the second slot). Also, an FTZ to go with it.
Short lens: 40mm f2. There are so many options here and they're all good, so really, pick your poison.
Telephoto: Unfortunately, save up for the FL. It's not that much better than the VR2 optically, but it fixes the VR2's focus breathing, which was problematic at portrait focal lengths. The FL also performs exceptionally with teleconverters, so pick up a TC14 and TC20 III and get a 280mm f4 and 400mm f5.6 for "free".
Long lens: A sane man would recommend the 200-500 f5.6. I would also put a vote in for the 300/2.8 AF-S, which is only about $700 used and works well with teleconverters to get to 600mm. A madman would go all in and buy a 600/4 but seriously, don't do it - shooting with that lens is a serious commitment.
Ultrawide: Get fucked. You absolutely want a native ultrawide, since the short flange distance of the Z bodies makes them much better, but Nikon refuses to provide a 16-35/2.8 which doesn't suck. I guess the 17-28/2.8 will have to do? The runner-up would be the 14-30/4, but f4 isn't f2.8.

Sony will give you the same thing, trading a better ultrawide for a worse long lens. Canon is... honestly superior, I think, but I am not familiar enough with Canon to make a $5000 recommendation.