Tuesday, April 18, 2023

Assorted video decoding tidbits

Some form of hardware acceleration is all-but-mandatory for playing 4K video, especially high bitrate H.265. Nowadays H.265 decoding is commonplace (you need to go all the way back to 2015 to find a CPU or GPU that can't decode HEVC Main10), and presumably the same will hold for AV1 as the world transitions to it.

Hardware accelerated decode in web browsers on Linux

It works! Well, sort of. On my test machine with an i3-12100F and an ancient Polaris 12 (AMD) GPU running the open-source drivers, 1080p H.264 content is properly decoded by UVD, but 4K60 VP9 (the famous '4K Costa Rica' demo clip on YouTube) is not. CPU usage seems a bit high in either case: about half a core in the former and 1-2 cores in the latter.

Decode on integrated graphics, display connected to discrete graphics

An esoteric use case. I ran into the H.265 version (!); I had an M4000 I wanted to use for Solidworks, and the M4000 does not support HEVC decode, but the iGPU on the i5-12600 it was paired with does.

Unfortunately, this doesn't seem to work. iGPU usage remained zero and the M4000 ran in some sort of weird hybrid decoding mode. Performance, however, was acceptable.

Decode on integrated graphics, multiple GPUs and displays

Can we fix the above by plugging the monitor into the iGPU? The behavior is strange:


Both GPUs are now at 30% usage, but the CPU usage has gone through the roof. Very bad indeed, but possibly fixable with enough effort.

Remarkably, even in this bugged state the Costa Rica clip runs at 60 fps.

VLC + H.265, but the decoding is supposed to be done on the integrated graphics

Unfortunately, the iGPU chooses not to participate, but the hybrid decoding seems to work fine, playing back 4K24 high bitrate video with about 12% CPU usage. It's worth noting, however, that this is 12% of a 4.8GHz hex-core Alder Lake, which is like... an entire laptop CPU from not that long ago, or two whole 3GHz Skylake cores.

Heavy decode on integrated graphics

Surprisingly good. On my Kaby Lake laptop, the CPU is able to keep up with about 25% usage while remaining throttled to ~1.6 GHz on battery - the fixed-function hardware really does the heavy lifting here and keeps the power consumption down.

Hardware accelerated decode, but there are many cores

The test system was an Epyc 7702 with an RTX 3060, by all means close to the state of the art. I didn't expect problems here, and didn't find any; the 3060 ran at heavy usage on the Costa Rica clip and the CPU was basically idle.

It's unclear what the actual CPU usage was; Task Manager lacks the granularity to deal with so many cores since even 1% is almost an entire core.

Tuesday, February 21, 2023

"Phones are getting better": buying a camera in 2023

It's a tough time to be a camera manufacturer. The mighty ISOCELL HP2 now rules the mobile space, sporting 200 million (!) 0.6um pixels binnable as 12.5M 2.4um pixels. Sub-electron read noise, backside illumination, very deep wells, and sophisticated readout schemes allow virtually unlimited dynamic range without compromising light sensitivity. Practically speaking, the out-of-the-box performance of a state-of-the-art mobile camera greatly exceeds that of any ILC in challenging-but-well-lit conditions: the phone has access to live gyro data for stack alignment, more processing than any ILC could dream of, and is backed by hundreds of millions of dollars of software R&D. It also has access to color science that, no doubt, has been statistically developed to be perceived as "good looking" across a wide demographic of viewers - I'm a firm believer that the best photos are the ones that make other people happy.

We can do some math to see just how screwed ILCs are. The Galaxy S23 Ultra ships with a 23mm f1.7 equivalent lens and a sensor measuring 9.83mm x 7.37mm, for a total sensor area of 72mm2. A full frame sensor measures 864mm2. Total light gathered goes as the sensor area divided by the square of the f-number, so we have the following equivalency:
  • ISOCELL HP2 (1/1.3"): f1.7
  • 4/3": f2.9
  • APS-C: f3.9
  • Full frame: f5.9
At the wide end, ILCs are looking pretty dead: 24mm f5.6 is a reasonable aperture and focal length to shoot at on full frame, and the same performance can be achieved with a phone. There's some argument that the FF sensor has higher native DR, but the phone has what is more or less a hardware implementation of multi-shot HDR, which makes up for the difference. Plus the phone is, you know, a phone, and fits in your pocket.

Astute readers will note that 23mm is awfully wide, and it's true - the effective sensor area of the phone decreases if you want a tighter focal length. Taking a look at a 2x crop (46mm equivalent), the sensor area of the phone drops to a rather shabby 18mm2, so the equivalency is now:
  • 4/3": f5.9
  • APS-C: f7.8
  • Full frame: f11.7
The ILC suddenly looks much more compelling - if you forced me to shoot at 45mm f11 all day I'd abandon photography and take up basket weaving.
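
If you want to play with the numbers yourself, the equivalence arithmetic is a couple of lines of Python - a minimal sketch using the sensor dimensions quoted above (the crop factor is just the square root of the area ratio):

    from math import sqrt

    FF_AREA = 864.0  # full frame, 36mm x 24mm, in mm^2

    def equivalent_fnumber(f_number, sensor_area_mm2):
        """Scale an f-number by the crop factor (sqrt of the area ratio)."""
        return f_number * sqrt(FF_AREA / sensor_area_mm2)

    # S23 Ultra main sensor: 9.83mm x 7.37mm ~= 72mm^2 behind an f1.7 lens
    print(round(equivalent_fnumber(1.7, 72.4), 1))  # ~5.9 full-frame equivalent
    # 2x crop: usable sensor area drops by 4x, to ~18mm^2
    print(round(equivalent_fnumber(1.7, 18.1), 1))  # ~11.7 full-frame equivalent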

This makes shopping a whole lot easier: phones obsolescing the moderate-wide end means that zooms which include 50mm are pretty useless. Suddenly, 50mm primes look real interesting again, especially since we are now spoiled for choice in the 50mm space. 4/3" cameras, which looked pretty dead for a while, also suddenly look viable: subjects you would shoot with a 50mm prime are often DOF-limited, which means the larger sensors can't take advantage of faster apertures.

Things get trickier at longer focal lengths, because you are less likely to be DOF-limited. Wait, what?! Don't telephotos have a shallower depth of field? Well, it turns out that for moderate focus distances, the DOF of a lens is proportional to the f-number and inversely proportional to the square of the magnification. More likely than not, telephoto subjects are large, and since the sensor area is constant in a given camera, the DOF actually increases if you stand far enough away to fit the subject on the sensor.
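
To make that concrete, here's a rough sketch using the standard moderate-distance approximation DOF ~ 2*N*c/m^2 (the magnifications and circle of confusion below are hypothetical, just to show the trend):

    # Rough DOF model for moderate focus distances: DOF ~ 2*N*c/m^2
    # (N = f-number, c = circle of confusion, m = magnification).
    def dof_mm(f_number, coc_mm, magnification):
        return 2 * f_number * coc_mm / magnification ** 2

    c = 0.03  # mm, the conventional full-frame circle of confusion
    # Head-and-shoulders portrait on full frame, roughly 0.1x magnification:
    print(round(dof_mm(2.8, c, 0.10)))   # ~17mm of DOF at f2.8
    # Step back for a full-body subject (~0.033x) and DOF grows ~9x:
    print(round(dof_mm(2.8, c, 0.033)))  # ~154mm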

Telephotos provide a compelling argument to buy a full-frame body. Regardless of the sensor format, telephotos are going to stay a constant size because they are dominated by their large front elements, and in the kinds of lighting where you might want to use a telephoto lens you are often struggling for light. That premium 35-100/2.8 for 4/3" looks real nice until you remember it is the optical equivalent of a 70-200/5.6 on full frame, a lens so sad that they don't even make one.

Finally, there's image stabilization. I would argue that IS is mandatory for a good experience, especially for new users: for stationary subjects, IS gives you something like three stops of improvement, allowing stabilized cameras and lenses to beat unstabilized cameras with sensors ten times the size. The importance of that cannot be emphasized enough: factoring in IS, that 18mm2 of cropped phone sensor can gather as much light as a 180mm2 (nearly 4/3-sized) sensor behind an f1.7 lens. This unfortunately throws a wrench in many otherwise-sound budget combos: short primes didn't ship with IS until quite recently, and many budget bodies are unstabilized.
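
The back-of-the-envelope version, with the stop count as an assumption rather than a measurement:

    # Each stop of stabilization doubles the effective light gathered,
    # so n stops multiply the effective sensor area by 2^n.
    def effective_area(area_mm2, stops):
        return area_mm2 * 2 ** stops

    print(effective_area(18, 3))           # 144mm^2 from three stops
    print(round(effective_area(18, 3.3)))  # ~177mm^2, approaching 4/3" (~225mm^2)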

With all that said, here are some buying suggestions:

The $500 "I'm poor" combo

The situation is dire. Long ago, I would have recommended an old 17-50mm f2.8 stabilized zoom from a third party and a used entry-level DSLR. Unfortunately, you'd be insane to tell a new user to buy an entry-level DSLR in 2023 (people expect features like "working live view" and "4K video") and the third-party zooms don't work with most mirrorless cameras. What we really want is a stabilized 40-50mm f2-2.8 equivalent (that's 40-50mm equivalent focal length, f2-2.8 real aperture) on a body that supports 4K video and PDAF, but inexplicably, that combination does not exist, even in the micro-4/3 world (which has had IBIS for a long time).

Consolation prize: any of the 24MP Nikon DSLRs, plus a Sigma 17-50/2.8 OS, used - but I wouldn't recommend it.

The $1000 combo

This used to be a downright reasonable price point, but inflation and feature creep have somewhat diluted it. Fortunately, the long life cycles of Sony cameras help you here: a6500's are regularly available, used, for $600-700, leaving you with $300 for a lens. By some crazy miracle of third-party lenses you can fit two autofocus lenses into $300 - the Rokinon 35/2.8 is cheap and small, and the other can be 'to taste'.

The drawback is that, E-mount being a newer system, long lenses for Sony tend to be rather inaccessible - but the same can be said of any other mirrorless-only system, and the starter offerings for the Canon/Nikon ecosystems are very poor in comparison. It's also worth noting that there's nothing good new at this price point.

Recommendation: the most beat-up a6500 you can find, a used Rokinon 35/2.8, and one other lens, or save up and buy a second lens which costs more than $150 :)

The photographer's special: a D800, 50mm 1.8G, and 70-200 VR1. You lose a lot of features (stabilization, eye AF, 4K video, touchscreen) but optically the D800 is as good as they get, and the two lenses will let you take pictures none of your friends can. Highly recommended if you've spent some time behind a camera before - otherwise, the transition from phone to optical viewfinder may be a bit jarring.

The dubious alternative: an a7R ii and Rokinon 45/1.8. The a7R ii checks every box - stabilization, full frame, BSI, 4K video - but still manages to be a poor user experience thanks to its ill-thought-out controls and menus. If you're fine with that, you get unsurpassable (as in, the sensor is limited by the laws of physics) optical performance for $1000.

The $1500 combo

Things start getting a little weird here. The a6500 is a really good camera, and it's nice and compact too. The E-mount ecosystem matured quickly, with a ton of off-brand companies making decent prime lenses at ludicrously good prices. I would argue that if you're content shooting short-to-moderate focal lengths, you are better off staying in the E-mount ecosystem - you can buy an a7iii and a nice starter prime for $1500, then (quickly) build out the system from there.

If you want to shoot long lenses, Sony no longer looks so sweet. The 70-200 options from the big three are comparable: the Sony GM Mark I is a native lens at about $1500, the same price as the adaptable-without-penalties Nikon FL but optically inferior to it. The EF-mount Mark III is the same price but worse than the FL (and better than the GM); the EF-mount Mark II is $500 less and probably superior to the Nikon VR2. The Nikon VR1 is incredibly cheap for a modern pro lens, but the corners are dubious - a disaster for some people and a non-issue for others.

Above 200mm, Sony is out - the 200-600 is a very good budget 600mm option but pretty pathetic at 300 and 400mm. It's also very expensive: if you can accept the extending barrel, the 150-600 options from third parties are $700 cheaper. Between Canon and Nikon, Nikon wins on the budget end (the Z6 was a very usable camera, the original EOS R was not), but Canon just announced some new releases, so we should expect prices to move down across the stack.

Recommendation: a7iii plus your favorite primes (or a7R ii and your favorite primes if you don't shoot video at all)

Recommendation: Z6 Mark I, FTZ, 40mm f2, and 70-200 VR1. This comes out to $1800, and the VR1 recommendation is going to make a lot of people angry, but you'll be getting beautiful images for years to come (or until you drop the extra $1000 on the FL).

"I have money, help me spend it"

I'm a Nikon shooter so I'll just provide a Nikon kit. This assumes you have plenty of cash, but aren't interested in wasting it.

Body: Z6 Mark I. The Mark II is not worth the extra cash (unless you really need the second slot). Also, an FTZ to go with it.
Short lens: 40mm f2. There are so many options here and they're all good, so really, pick your poison.
Telephoto: Unfortunately, save up for the FL. It's not that much better than the VR2 optically, but it fixes the VR2's focus breathing, which was problematic at portrait focal lengths. The FL also performs exceptionally with teleconverters, so pick up a TC14 and TC20 III and get your 280mm f4 and 400mm f5.6 for "free".
Long lens: A sane man would recommend the 200-500 f5.6. I would also put a vote in for the 300/2.8 AF-S, which is only about $700 used and works well with teleconverters to get to 600mm. A madman would go all in and buy a 600/4 but seriously, don't do it - shooting with that lens is a serious commitment.
Ultrawide: Get fucked. You absolutely want a native ultrawide since the short flange distance on the Z bodies makes them much better, but Nikon refuses to provide a 16-35/2.8 which doesn't suck. I guess the 17-28/2.8 will have to do? The runner up would be the 14-30/4, but f4 isn't f2.8.

Sony will give you the same thing, trading a better ultrawide for a worse long lens. Canon is...honestly superior, I think, but I am not familiar enough with Canon to make a $5000 recommendation.

Friday, September 16, 2022

The best little telescope that'll never get built

I really love the IMX183 from Sony. For a very good price, you get 20 million BSI pixels with photon-counting levels of read noise and negligible dark current - truly a miracle of economies of scale. The sensor is also large enough to have good light-gathering capabilities, yet small enough to work with compact optics.

The problem is taking advantage of all 20 million of those pixels. 20 MP isn't that much - a 5400-pixel horizontal resolution leaves you with precious little room to crop to 4K, so actually resolving 20MP is important. You can't really buy a telescope that resolves 2.4um pixels - in the center, anything diffraction-limited will work, but if you want a fast, wide system, getting good performance in the corners isn't happening. Obviously, the answer is to design your own telescope.

Now, I have zero interest in making telescopes - grinding your own lenses is ass and generally a poor use of time. If I wanted to spend time doing repetitive tasks I'd pick up embroidery or something. As such, the system is designed to be reasonably manufacturable by an overseas vendor.


The telescope is a Houghton-type design with integrated field flattener, a focal length of 150mm, an entrance pupil of 76mm (3"), and a 6-degree field of view covering a 16mm sensor. The optical performance is pristine:



Normally, I would not vouch for a form like this - the telescope is much longer than its focal length, the image plane is inside the tube, and two full-aperture refractive elements are needed. However, in this case, we are building a small, tightly-integrated system: polishing a 3" lens is easy, the difference between a 6" and 10" OTA is negligible from a practical standpoint, and the IMX183 can easily be fit into the tube.
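
As a quick sanity check on the numbers, a few lines of Python (the pixel pitch is the IMX183's; the rest comes from the prescription above):

    from math import atan, degrees

    focal_length = 150.0  # mm
    aperture = 76.0       # mm entrance pupil
    sensor_diag = 16.0    # mm
    pixel_um = 2.4        # IMX183 pixel pitch

    print(round(focal_length / aperture, 1))                            # f/2.0
    print(round(2 * degrees(atan(sensor_diag / 2 / focal_length)), 1))  # ~6.1 deg field
    print(round(206265 * pixel_um / 1000 / focal_length, 1))            # ~3.3 arcsec/pixel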

The glasses aren't great - the automatic glass substitution tool came up with BSM28 and BSM9, which are moderately costly, infrequently-melted glasses. The chemical properties are poor, basically akin to those of ED glass, but 77mm clear filters are cheap and readily available. The elements don't have any serious manufacturing red flags, though the curvatures are a bit steep compared to your usual refractor doublet.


Sunday, March 20, 2022

Image a DLP down through a microscope objective and look at it with a camera through the same objective

 


For some reason it's taken me a while to actually do this, but it's easy to expose high-resolution film or resist with a DLP. All you need is a camera, a beam splitter, a DLP, and some microscope parts.


The DLP is an ordinary DLP projector with the lens removed - I used a TI dev kit because the firmware offers nice features like "disable the LEDs", which can be used to, for example, replace the blue LED with a violet one for exposing resist. Unless you have a really exotic objective, it is probably best to use a g-line (~436nm) LED rather than a 405nm LED - the g-line parts are hard to find, but objectives will be quite dispersive in the far violet unless they are specifically designed to be apochromatic from violet to red. Your best bet is probably to leave the green LED in place for focusing and replace the blue LED with a violet one. A regular pico-projector will work just fine, but is less convenient to work with. You do save about $800 at current street prices though.

The light from the DLP reflects off a beam splitter and passes through the tube lens, which is just a 200mm a(po)chromat. A 200mm doublet would probably work fine here, but you can get better corner performance from a triplet or a quadruplet - the DLP is not very large, though, so it might not matter. The tube lens collimates the light and sends it down the microscope objective, which focuses it on the target.
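
The demagnification works like a regular infinity-corrected microscope run in reverse: the DMD pixel shrinks by the ratio of the tube lens focal length to the objective focal length. A sketch with assumed numbers (the 5.4um DMD pitch and 20x objective are hypothetical - substitute your own parts):

    # Projected pixel = DMD pitch * f_objective / f_tube.
    # A 20x objective rated for a 200mm tube lens has a 10mm focal length.
    def projected_pitch_nm(dmd_pitch_um, objective_mag, f_tube_mm=200.0,
                           rated_tube_mm=200.0):
        f_objective = rated_tube_mm / objective_mag
        return dmd_pitch_um * 1000.0 * f_objective / f_tube_mm

    print(projected_pitch_nm(5.4, 20))  # 270nm per DMD pixel at the target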

The camera looks down the other beam splitter path and sees the light scattered off the target. Unfortunately, this technique only works with optically smooth targets - otherwise, the camera sees the surface texture of the target and not the imaged patterns displayed on the DLP.


Parfocalizing the camera and the DLP is easier than it seems - the object side of the tube lens is something like f/50, so the depth of field is very large. Roughly speaking, there is only a small range of distances where the objective comes into focus at all. Either by reading the datasheet or by trial and error it is possible to roughly set the backfocus; then the camera and target position are adjusted for best focus. Once a starting position is found, the backfocus can be adjusted to minimize spherical aberration (microscope objectives gain some spherical aberration if the conjugates aren't the right ones).
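
The usual +/- 2*lambda*N^2 rule of thumb backs this up - at f/50, even visible light gives millimeters of focus latitude on the tube lens side:

    # Diffraction-limited depth of focus: roughly +/- 2 * lambda * N^2.
    wavelength_um = 0.55  # green light
    N = 50                # approximate object-side working f-number
    print(2 * wavelength_um * N ** 2 / 1000)  # ~2.75mm each way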

The pixel scale in the capture is 560nm/pixel, so the Windows clock is only a few microns long. Performance is as expected but it is always entertaining to use the Windows desktop on a 1mm wide screen :)

Sunday, January 30, 2022

The State of the CPU Market, early 2022

It's been a year of shakeups in the high-end CPU market, what with catastrophic supply chain shortages, the rise of AMD, and Pat Gelsinger's spearheading of Intel's return to competency. Now that the dust has mostly settled, it's interesting to look at what's hot, and what's not.

The 5950X is still king...

Alder Lake i9 really gives AMD a run for the money, but the 5950X is still the king of workstation CPUs, especially now that you can buy it. It has aggressive boost clocks which allow it to beat every Skylake and Cascade Lake Xeon Platinum (including the 28-core flagships), a consistent internal design which scales well in every application, a manageable 142W power limit, and bonus server features like ECC support. ADL is good, but the 250W PL2 really kills it for workstation use (a good 12900K build requires serious effort to get right), and because of its heterogeneous internal layout it fails to scale on some operating systems and in some applications.

Scaling is really important in this era of 16, 32, and 64-core processors; many applications completely fail to scale past 16 cores, and even those that do exhibit much less than linear returns. As a result, those 16 highly clocked cores punch above their weight when it comes to real-world results - a 4.x GHz Zen 3 core isn't actually twice as fast as a 2.8 GHz Skylake core, but 16 4.x GHz Zen 3 cores can still outperform 28 Skylake cores because the 12 extra cores are doing less work.
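
Amdahl's law makes this easy to see. A toy model (the 95% parallel fraction is hypothetical, and clock times speedup is a crude throughput proxy that ignores IPC differences):

    # Amdahl's law: speedup(n) = 1 / ((1 - p) + p/n) for parallel fraction p.
    def speedup(n_cores, p):
        return 1 / ((1 - p) + p / n_cores)

    p = 0.95  # assume 95% of the work parallelizes
    for cores, ghz in [(16, 4.5), (28, 2.8)]:
        print(cores, "cores:", round(ghz * speedup(cores, p), 1))
    # 16 cores: ~41.1, 28 cores: ~33.4 - the 16 fast cores win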

...Except when it's not

The elephant in the room, of course, is single-threaded performance. Alder Lake outruns Zen 3 by about 20%, which is an enormous leap in x86 performance. For all intents and purposes, ADL is an 8-core CPU with 8 bonus cores that you can't really rely on. If you commit to the 8-core life (which encompasses a lot of applications), Alder Lake suddenly looks a lot more enticing, because 8 Golden Cove cores are 20% faster than 8 Zen 3 cores.

Of course, part of the reason why ADL can do this is that Intel 10 ESF (rebranded "Intel 7") is a high-performance node designed to scale to aggressive clocks at high voltages, while TSMC N7 is a SoC node designed for lower clocks and voltages. The price you pay is that those 8 Golden Cove cores draw twice as much power as 8 Zen 3 cores to perform 20% more work, which isn't very good engineering.

In the end, Zen 3 and Alder Lake are mostly complementary products. If your workflow is interactive content creation, gaming, or design work, ADL is right for you. If you're building a machine mostly to handle long renders and simulations, the 5950X is the best processor under $2000 for the job.

What about HEDT?

HEDT is a curious thing. In the beginning, desktop and server parts were cut from the same silicon. Starting with Nehalem, Intel experimented with bifurcating their designs into laptop (Lynnfield) and server (Gulftown) variants with rather drastically differing designs - Lynnfield was a SoC with an on-die PCIe root complex offering 16 lanes; Gulftown was a traditional design with an off-package PCIe controller offering 48 lanes. The bifurcation makes sense - laptops rarely need more than 16 PCIe lanes, whereas servers need dozens, or even hundreds, of lanes for accelerators, storage, and networking.

The bifurcation really took off starting with Sandy Bridge; Intel aggressively marketed the 2600K, which was cut from the mobile silicon. Sandy Bridge-E, the server-based variant, filled a niche, but the platform was expensive and, to top it off, unlocked processors based on the full 8-core SB-EP die were never released.

Since then, HEDT has come and gone - it hit a real low during the Broadwell-EP era, but experienced a resurgence with Skylake-X, which competed against the dubious-and-not-really-recommendable Threadripper 1 and 2. Unfortunately, Ice Lake-SP gives off Broadwell-EP vibes - namely, the process it is built on does not have the frequency headroom required to make a compelling desktop platform. This leaves AMD relatively unchallenged in the high end space:
  • The 24-core 3960X is currently a dubious choice over the 5950X - supply is poor, power consumption is high, and performance is not that much better. If you need balanced performance with good PCIe it's not a bad choice, but there are cheaper (Skylake-X, used Skylake-SP) and faster (the other Threadrippers) offerings in the category.
  • The 32-core 3970X is a good processor for most applications. Thanks to the blessing (or curse) of multicore scaling, it comes within striking distance of the 3990X in most applications at half the price, while offering the full suite of Threadripper features.
  • The 64-core behemoth 3990X is...not a very good choice, mostly due to extreme pricing ("$1 per X") and really bad scaling. Fortunately, it wields a very competent implementation of turbo, so it is never slower than the 3970X.
  • Threadripper Pro ("Epyc-W") is everything you've ever wanted, but is expensive and platform options are limited.
There are also a few interesting choices in the server space, with the usual caveats (long POST times, no audio, no RGB):
  • Dual Rome or Milan 64-core processors offer unmatched multithreaded performance, but not much can take advantage of 256 threads.
  • Dual 32-core Epycs are an interesting choice, offering performance comparable to a 3990X but with four times the aggregate memory bandwidth for all your sparse matrix needs.
  • Dual low-end Ice Lake (e.g. 5320, 6330) offers AVX-512 support and high memory bandwidth at a price and performance comparable to those of a 3990X, but may be more available. Unfortunately, 2P ICL motherboards are rather expensive.
As far as used options go, Haswell-EP and older are finally ready to retire, unless you really need RDIMM support. A pair of 14-core Haswell processors performs worse than a 5950X at twice the power, with all the caveats of 2P platforms attached. Otherwise:
  • Dual Skylake-SP is an OK choice, simply because Skylake Xeons are entering liquidation and Epyc Rome is not. Technologically, Skylake has no redeeming features over Rome, but the fact that you can pick up a pair of 24-core Platinums for slightly more than $1000 is interesting. It's worth noting only the 2P configuration is interesting; 1P Xeon is generally slower than a 5950X.
  • Epyc Naples is bad. Don't do it. Threadripper 1 falls in the same category; the only time you'd consider either of these is if you found a motherboard in the trash or something.
Summary

"My application scales indefinitely with core count"

No, it doesn't. But for this class of trivially-parallelizable application (rendering, map/reduce, dense matrix), the 5950X is a safe bet. The most extreme cases can benefit from one of the high core count platforms (Threadripper, Epyc, Xeon Scalable), but be careful to benchmark the applications first - the 5950X wields a considerable clock speed advantage over the enterprise platforms, which often swings things in its favor.

"I only need 8 cores"

The 12700K is probably your friend here; it's strictly faster than the 5800X. This category encompasses most content creation and all of CAD (minus simulation).

"Give me PCIe or give me death!"

This encompasses all of machine learning, plus anything which streams data in and out of nonvolatile storage. The 3960X is perfect for you, but in case it's out of stock (which it probably is), the winner is...the 10980XE, which is fast enough to feed your accelerators and generally available. Of course, die-hard accelerator enthusiasts are going to look to more exotic platforms, and there, the platform dictates the choice of CPU.

"I'm out of RAM"

If your application requires more than 256GB of memory, Epyc-W is the CPU for you. Unfortunately, it is rather expensive, so the second-place prize, and the bang-for-the-buck prize, goes to a pair of used 24-core Xeon Scalable processors, which gets you pretty darn close to Epyc-W for $1200.

Tuesday, September 21, 2021

A Small Astrograph with a Large Payload

 


Building a large telescope is hard; designing a small telescope is hard. What exactly do I mean by that? Well, there are parts of the telescope that don't scale well with size, for example, the instrument payload, the filters, or the focusing actuators. More often than not, a design which works well on a 1m-class instrument fails to scale down to a 300mm-class instrument because the payload is incompatible with the mechanics, or is so large that it fills the clear aperture of the instrument.

A small telescope should also be...small. A good example of this is the remarkable unpopularity of equatorially-mounted Newtonians; a parabolic mirror with a 3-element corrector offers fast focal ratios and good performance, but an f/4 Newtonian is four times longer than it is wide, which gets unwieldy even for a 300mm diameter instrument.

The Argument for Cassegrain Focus

Prime focus instruments are popular as survey instruments in professional observatories. However, they fail to meet the needs of small instruments because of:

  • Excessive central obscuration. A 5-position, 2" filter wheel is about 200mm in diameter. In order to maintain a reasonable central obstruction, a 400mm clear aperture instrument is required, which is only marginally "small". Any larger-diameter instrumentation requires a 0.6m+ class instrument, which is outside the scope of many installations.
  • Unreasonable length. The fastest commercially available paraboloids are about f/3. Anything faster is special-order and very expensive. An f/3 prime focus system is actually longer than 3 times its diameter because of the equipment required to support the instrument payload.
  • Challenging focusing. For a very large system, actuating the instrument is the correct method for focusing because even the secondary mirror will be several tons. For a small system, reliably actuating 10+ kg of payload with no tilt or slip in a cost-effective fashion is rather unpleasant.
  • Too fast. A short prime focus system is necessarily very fast, complicating filter selection. A very fast system also performs poorly when combined with scientific sensors with large pixels.
The commercially available prime focus instruments (Celestron RASA/Starizona Hyperstar, Hubble HNA, Sharpstar HNT) are designed for use with small, moderately-cooled CMOS cameras, possibly with a filter wheel in the case of the Newtonian configurations. The RASA is wholly unsuited for narrowband imaging because a filter wheel would cover almost the entire aperture.

A Cassegrain system solves these issues by (1) allowing for moving-secondary focusing, (2) roughly decoupling focal ratio from tube length, and (3) moving the focal plane outside of the light path.

The 50% Central Obstruction

A 50% CO sounds bad, but by area the light loss is 25%, or less than half a stop. A 300mm nominal instrument with a 50% CO has the light gathering capacity of a 260mm system, which is pretty reasonable. The 50% CO also makes sizing the system an interesting exercise, because at some point the payload will be smaller than the secondary and prime focus makes sense again.
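
The arithmetic, for the skeptical:

    from math import sqrt, log2

    D, co = 300.0, 0.50        # aperture (mm), linear central obstruction
    area_loss = co ** 2        # 25% of the light, by area
    print(round(D * sqrt(1 - area_loss)))  # ~260mm effective aperture
    print(round(-log2(1 - area_loss), 2))  # ~0.42 stops lost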

The Design

The Busack Medial Cassegrain is a really nice telescope that this design draws inspiration from, but it requires two full-aperture elements, each with two polished sides, which makes it ill-suited to mass production. Instead, we build the system as a Schmidt corrector, an f/2 spherical mirror, and a 4E/3G integrated corrector. There's really nothing to it - by allowing the CO to grow and using the corrector to deal with the increasing aberrations, an f/4 SCT is entirely within the realm of possibility. There's a ton of freedom in the basic design; the present example makes the following tradeoffs:
  • f/4 overall system allowing for the use of an f/2 primary (which we know is cheaply manufacturable based on existing SCTs). f/4 also allows for the use of commodity narrowband filters.
  • 400mm overall tube length (not counting back focus) is a good balance between mechanical length and aberrations. 50mm between the corrector and secondary allows ample space for an internally-mounted focus actuator.
  • 160mm back focus allows for generous amounts of instrumentation including filters, tip-tilt correction, and even deformable mirrors.
  • Integrated Schmidt corrector allows for good performance with no optical compromises.
  • Corrector lenses are under 90mm in diameter and made from BK7 and SF11 glass, all easily fabricated using modern computer-controlled polishing.
The total length of the system could also be shortened, and the corrector diameters reduced, by increasing the primary-secondary separation and reducing the back focus, depending on instrument needs. Overall performance is quite good, achieving 4um spot sizes in the center and high MTF across the field.





Actually Building It?!

Obviously, you are not going to make a 300mm Schmidt corrector and a four-element, 90mm correction assembly at home. This design is probably buildable via standard optical supply chains (the hardest part would be getting someone who is neither Celestron nor Meade to build Schmidt correctors). The correction assembly should also be further improved - there are a huge number of choices for its configuration and the 'correct' one is probably the one that is most manufacturing-friendly.

Shoot me an e-mail in case you are crazy and want to do something with the prescription for this design!

Friday, July 9, 2021

GCA 6100C Wafer Stepper Part 2: the stages

The modern wafer scanner is a truck-sized contraption full of magnets, springs, and slabs of granite, capable of accelerating at several g's while maintaining single-digit-nanometer positioning accuracy. The motion systems contained within painstakingly optimize for dynamic performance by using active vibration damping, voice coils, linear motors, and air bearings, all to increase the value of the machine for its owner (who spent a good fraction of a billion dollars on it).

As it turns out, an 80's stepper is none of these things. Scanners are immensely complex because they are dynamic systems - as the wafer moves in one direction, the reticle moves in the other direction, perfectly synchronized but four times faster. In contrast, steppers are allowed time to settle between steps, which allows for much more leeway in the motion system design. Throughput requirements were also lower; compare the 35 6" wph of an old stepper to the 230 12" wph of a modern scanner.

Old stepper stages are an instructive exercise in the design of a basic precision motion system; in fact, Dr. Trumper used to give this exact stage out as a controls exercise in 2.171. The GCA stages are also particularly interesting from a hardware perspective - they are carefully designed to achieve 40nm positioning accuracy using fairly commodity parts. The only precision parts seem to be the slides for the coarse stage, and even those are ground, not scraped.

The stage architecture

System overview

GCA steppers use a stacked stage architecture. Coarse positioning is done by two conventional mechanical-bearing stages stacked on top of each other. Fine positioning is done by a single two-axis flexure stage. Rotational positioning, which only happens during alignment, is done using a simple open-loop, limited-travel stage mounted on the fine stage. Focusing, which changes the Z spacing between the lens and the wafer, is done by moving the optical column up and down with a linkage mechanism.

The position feedback system




The fine position feedback on GCA steppers is implemented with a two-axis HP 5501A heterodyne interferometer. Briefly, a stabilized HeNe laser is Zeeman-split by a powerful magnet to create two adjacent lines, separated by a few MHz, with different polarizations. One of these lines is separated out with a polarizing beam splitter and reflected off a moving mirror; this line is Doppler-shifted by the velocity of the moving mirror and beat against the stationary component to generate a signal. This signal is compared against a stationary REF signal to derive velocity and position measurements. Heterodyne interferometers are the preferred choice for metrology due to their insensitivity to ambient effects and power fluctuations.
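
To put some numbers on it, a minimal sketch (the 10mm/s stage velocity is an assumption for illustration; the 5501A's actual split frequency and electronics set the real limits):

    # Plane-mirror (double-pass) interferometer: the measurement beat shifts
    # by delta_f = 4 * v / lambda; a single-pass system gives 2 * v / lambda.
    wavelength_m = 632.8e-9  # stabilized HeNe

    def doppler_shift_hz(v_m_s, passes=2):
        return 2 * passes * v_m_s / wavelength_m

    print(round(doppler_shift_hz(0.010)))  # ~63kHz beat shift at 10mm/s
    # Basic fringe-count resolution before electronic interpolation:
    print(round(632.8 / 4, 1))             # ~158.2nm per count, double-pass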

The 5501A is the de facto choice for interferometric metrology; its successor, the 5517, is still available from Keysight. A description of the system as found in the GCA steppers follows:

The laser points towards the rear of the stepper; a 10707A beam bender and a 10701A 50% beam splitter generate the two axes of excitation. The X and Y stages have identical measurement assemblies; the Y assembly is located to the rear of the stepper (behind the column) and the X assembly is located inside the laser housing. Both assemblies use a plane-mirror interferometer which differentially measures the wafer position against the optical column; the stationary mirror is a corner cube mounted to the column and the moving mirror is a 6" long dielectric quartz block mirror mounted to the wafer stage. The flats are precision shimmed to ensure orthogonality (since it is the orthogonality of the flats which determines the closed-loop orthogonality of the motion).

There are two additional position sensors in the system. The first is a sensor to measure the position of the fine stage relative to the coarse stage. Literature indicates that this is an LVDT, but on the 6100C it appears to be implemented as two photodiodes outputting a sin/cos type signal. The second is a brushed tachometer on each of the coarse stage drive motors, which is used for loop closure by the stock controller.
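
A sin/cos pair decodes like any analog quadrature encoder - here's a minimal sketch (the 100um pitch is made up, since the real sensor's scale is unknown to me, and coarse cycle counting is omitted):

    from math import atan2, pi

    def position_within_pitch_um(sin_v, cos_v, pitch_um):
        """Recover position within one pitch of a sin/cos signal pair."""
        phase = atan2(sin_v, cos_v)           # -pi..pi over one cycle
        return (phase / (2 * pi)) * pitch_um  # fraction of a pitch

    print(position_within_pitch_um(0.5, 0.5, 100.0))  # 12.5um (1/8 of a cycle)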

The coarse stage

The purpose of the coarse stage is to position the fine stage to within 0.001" of its final position. The stage is built as a pair of stacked plain-bearing stages; these are driven by brushed DC motors with brushed tachometers for velocity feedback. Each motor goes through a right-angle gearbox comprising a bevel gear and several spur gear stages before being coupled, through a flexible coupling, to a long drive shaft which turns a pinion positioned near the center of each stage. This pinion drives a brass rack mounted to the stage, generating the final motion.

The fine stage


The fine stage is constructed as a parallel two-axis flexure stage with a few hundred microns of travel on each axis. The flexures are constructed from discrete parts; the stage is made from cast iron and the flexures themselves are constructed from blue spring steel. Actuation is by moving-coil voice coil motors with samarium-cobalt magnets, and position is read directly from the interferometer system.

The theta stage


The theta stage is a limited travel stage based on a tangent arm design. A (very small) Faulhaber Minimotor is coupled into a high reduction gearbox, which drives a worm gear that turns a segment of a worm wheel. The worm wheel pushes on a linkage which rotates the wafer stage about a pivot point.

Rotation control is entirely open-loop - the wafer is rotated once during the alignment process based on the fiducials observed through the alignment microscopes. A slow open-loop system is acceptable given that the speed of rotational alignment does not significantly affect wafer throughput.

The Z mechanism

The focusing mechanism is a limited-travel (according to literature, about 600um) flexure mechanism. The entire optical column is suspended on two large spring steel plates; a stiff spring counterbalances the weight of the column. A voice coil motor (identical to the fine stage VCMs) actuates a linkage mechanism which moves the column up and down.

Adjusting the mechanism is a bit subtle. The white rod sticking out is actually a tensioning mechanism for the counterbalance; it is possible to aggressively tension the spring to stiffen the assembly for transport. The cap at the end of the rod can be removed to reveal a nut and a piece of threaded rod with a flathead slot in it. You want to hold the rod in place with a screwdriver and crank on the nut with a wrench until the column just barely 'floats' in place.

Incidentally, this mechanism also reveals a fairly severe weakness of the focusing system - it is extremely undamped. Any disturbance on the column causes the whole assembly to ring like a bell, with the only source of damping being the resistance of the VCM. I think (though there is some information to the contrary) that 6000-series GCA steppers focused once per wafer, relying on wafer leveling to keep the image in the resist in focus between fields. If focusing had to be highly dynamic, this could be a problem.