Wednesday, March 6, 2019

Freefly Systems ARC200 teardown

Most motor controllers are bad. They range from not really doing motor control at all (any hobby controller, eBike controllers), to being electrically questionable (small VESCs), to having flaky firmware (SimonK and BLHeli, which additionally don't do real motor control) or confusing firmware (Sevcon), to being mechanically questionable (most 'servo drives').

On the surface, the ARC200 doesn't really distinguish itself; nominally, a '200A 48V inverter with FOC' isn't much different from any of the large VESC variants out there. However, none of the big VESCs are great (questionable layout, too much electrolytic capacitance, terrible connector choices, too expensive), and the ARC200 sits at a comfortable price point above the bad Chinese controllers but well below the VESCs and industrial servo drives. In addition, I am good friends with several engineers at Freefly, and had high hopes that their involvement in the product would make it not bad.

What Makes a Motor Controller?

When most people evaluate a motor controller, they immediately jump to the power stage. However, the power stage is but a small part of the system, and with power MOSFETs getting cheaper and better, device selection and layout are becoming less and less critical.

All DC-operated inverters share the following equally important building blocks:

Microcontroller and firmware: You can't really screw up the implementation of a microprocessor, but boy is it possible to screw up an implementation of FOC. The core algorithms are probably right (you quickly notice swapped variables or extra factors of -1), but managing the rest of the state machine is much harder. Startup conditions, integral windup, throttle bounds, and interrupt priority are a few of the many ways to go wrong. The firmware is probably the hardest to test, as some edge operating conditions are difficult to trigger on the bench.

On the other hand, writing good motor control firmware is no different from writing any other kind of software, but most hardware engineers (and many software engineers!) don't have formal training in writing robust code.

Low voltage power supply (LVPS): The LVPS is a DC-DC converter that takes the DC link voltage and generates the 12-15V and 3.3-5V rails to power the gate drive and logic, respectively. The LVPS is a somewhat tricky part to design; typically, it is built using an off-the-shelf SMPS controller IC. Commercial SMPS controllers are "black boxes" that usually expect relatively clean DC input and a slowly varying load. Inverter applications, in contrast, are very noisy, since the DC link is full of transients from the power stage. Cheaper or very low-voltage controllers will usually run the logic off an LDO, possibly with a resistor in series. This suffers from a similar problem; in fact, an unfiltered LDO is guaranteed to pass input transients to the output, as no linear regulator is capable of handling sub-microsecond input changes.

LVPS failures will usually take out the entire inverter, since the failing logic and gate drive rails will put the entire control stage in an invalid state for several milliseconds, leading to desaturation or shoot-through of the power stage. In addition, failure of the LVPS to regulate (due to an input transient) will often damage the control stage by passing the transient to the output.

Isolation and gate drive: The gate drivers turn the power MOSFETs on and off. There are various levels of sophistication; the dumbest gate drivers are just a pair of complementary BJTs, the smartest ones integrate capacitive isolation, desat detection, and fault signaling. Closely related is how the gate drives are powered. Most low-voltage controllers power all six off a single 12 or 15V rail, and use bootstrap capacitors and diodes to generate the high-side voltages. This adds the obvious benefit of simplicity, but makes layout a little tricky (the 12V rail has to fan out to all six gate drives) and has the rather large disadvantage of connecting the logic and power grounds. Circulating ground currents can then potentially upset the microcontroller and LVPS.

In contrast, high voltage inverters always fully isolate the gate drives, for safety reasons. This has the neat benefit of completely separating the control and power grounds (indeed, most 300V+ controllers require a separate low voltage power supply as input), but adds a ton of complexity.

I/O: Control inputs are dangerous, because they often run over a long wire. Small controllers rarely isolate their inputs, which means analog and serial control cables contain a wire directly connected to the ground of the microcontroller. Needless to say, attaching a large antenna to logic ground and having it pick up every switching transient in the system often leads to poor performance.

Industrial servo drives almost universally have optically isolated inputs. In this case, the control signal drives the LED in an optoisolator, removing the need to bring logic ground out of the controller. Annoyingly, most hobby controllers marketed as 'opto' don't actually have optoisolated inputs; 'opto' in this case only means that the controller does not provide a 5V accessory power supply.

Power stage:  And now, we finally get to the power stage. The choice of power device is practically a non-issue in this day and age, but layout still requires some consideration, especially in small controllers (where cost and compactness considerations sometimes lead to electrical compromises). In particular, very high-current, low-voltage controllers have trouble finding space for enough copper to carry the full phase current, and sometimes run into package current limitations as well.

Surprisingly enough, high-voltage inverters are much easier to lay out. The smaller ones (400V, up to ~50A) are covered by fully integrated 'smart power modules', and the larger ones (up to ~300A) are covered by sixpack IGBT modules (which contain 3 half bridges sharing a DC link, but no gate drive).

Selecting capacitors is also somewhat of a black art. Electrolytic capacitors are mostly resistors at high frequencies, and a poor implementation of an electrolytic DC link capacitor can be worse than nothing (the energy stored helps blow up the devices in case of a failure). On the other hand, insufficient DC link capacitance leads to high ripple current in the DC link cables (which could be long, and therefore potential sources of EMI) and high voltage transients (switching spikes aside, the average DC link voltage + half the peak to peak ripple needs to remain under the voltage rating of the devices). For large high-voltage inverters, the cost and weight of the capacitors is often equal to that of the switching devices.

Electromechanical components: Connector selection is a matter of taste and application. The automotive and aerospace industries have stringent isolation, water-resistance, and even coloration requirements. An inverter for an OEM robotics application might value weight and compactness over waterproofing, and an industrial controller would use common connectors found in automation. There are clear wrong choices (pin headers come to mind), but never a single "best" connector. The same goes for housings and thermal management.

The Actual Teardown

Phew, that was a lot of intro. This teardown has an associated Flickr album containing high-res images, because nothing sucks more than not being able to see the part number on an IC. For convenience, some of the images will be reproduced here, but are limited to Blogger's 1600x1200 resolution.

Overview, Microcontroller, and LVPS

The ARC200 is constructed as a two-board stack; the top board contains the LVPS, microcontroller, and additional logic, and the bottom board carries the power devices, gate drive, and capacitors.

The microprocessor is an STM32F746NGH6, which is a serious part that costs almost $12 in quantity. For better or for worse, this is the largest microcontroller I have seen in a motor control application (it beats out the F446RE in my own designs). In addition, the microcontroller is connected to a 512 Mb SPI flash, so there is plenty of room for expansion and future features if so desired. The main logic rail is generated by an LM5116, which is a 100V-capable buck controller IC.

Visible towards the right (near the I/O connectors) are a MCP2562 CAN transceiver, a LMV612 dual op-amp, and a TLP2361 dual optocoupler. The CAN transceiver's purpose is obvious (though it is worth noting CAN is not included in the available external interfaces), the LMV612 probably serves to buffer analog throttle and the TLP2361 probably buffers various forms of digital throttle. Also visible are some small DIP switches, which probably shouldn't be flipped.

Additional Logic

The backside of the logic board reveals several additional components. A number of LMV612's likely provide additional analog functionality. To the left, a TPS542941 buck regulator generates the 5 and 3.3 V rails, augmented by a healthy number of ceramic capacitors. To the right, the capacitors and MOSFETs (low side, high side) for the LVPS are visible - the LM5116 does not integrate switches. A diode likely provides reverse polarity protection. On the upper right, there is an ST M24128 EEPROM - a somewhat strange choice given the amount of flash available on the F7, but perhaps it saves flash wear?

On the bottom left, a tiny Rigado BMD-350 provides BLE connectivity. This is a trick to avoid having to FCC certify the 2.4GHz part of the controller, as well as making layout easier (implementing RF SoC's can be tricky).

Also visible all around the board are the short board stacking headers that connect the logic deck to the bottom board.

Gate Drives

The gate drive uses a standard bootstrapped design with some twists. Because level-shifted gate drive IC's top out at 4A (and even those are somewhat fragile) and fully isolated drives are very expensive, the gate drive stage uses FAN3122 9A discrete drivers, bootstrap diodes, and Silabs SI8620BB digital isolators. In addition to providing high-side control, the isolators serve to somewhat protect the microcontroller from transients in the power stage. A column of tiny linear regulators powers the isolators.

A number of diodes provide various functionalities including bootstrapping, turn-on/turn-off time separation, and gate protection.

Also visible here are the DC link capacitors, six 180uF 63V parts. The lack of ceramic capacitors is disappointing, but realistically, at the RMS currents the ARC200 targets, 100V/200uF of ceramics would have been required, an expensive proposition at any time and especially now during the ceramic capacitor shortage.

Power Stage

The power stage uses four TPH2R608NH devices per switch. These are very economically priced parts, only 67 cents in full reel quantity for a 75V 2 mohm FET with reasonable gate charge. In fact, they are cheap enough that to the right, three more of them are used to provide anti-spark functionality. Current sensing is done with three (!?) huge 250uohm shunts. Appearances are deceiving; the SOIC device next to each shunt is not a shunt amplifier, but rather another SI8620BB. Instead, current sense amplification is likely implemented using several of the many LMV612's on the logic deck.

Mechanical Design

The thermal path to the case is provided by several thick thermal pads. Heatsinking of the power stage is done entirely through the top of the packages; this is often worse than through vias in the board, but allows for more design flexibility (and the Toshiba FETs in particular have pretty thin top epoxy). The springs in the top image are used to keep the logic board pressed down against the power board, which could have implications for reliability in high-g applications. Silicone sealing is visible along the entire seam of the enclosure (the nature of this sealing means once an ARC200 is disassembled, it will be difficult to waterproof it again).

Additional Commentary

This is a pretty good controller. The logic stage is intense, featuring three buck converters, SPI flash memory, and possibly the largest microcontroller I have ever encountered in a motor control application. Freefly's implementation of sensorless FOC is class-leading, and the computing power of the F7 leaves room for potential new algorithms (such as HFI for salient motors). I generally don't believe in sensorless control (magnetic encoders are cheap and easy), but that kind of feature is perhaps relevant at this level. The LVPS is nothing particularly exciting, but the LM5116 is a good chip and the switching devices used in the buck converter are beefy to the point of being excessive.

The gate drive stage seems solid but perhaps a bit unusual, as it combines bootstrapped discrete drivers with external RF-based isolators. I would like to do a further analysis of the circuit at some point, as some aspects are not immediately clear (for example, there appear to be six dual-channel isolators for six drivers, and VDD2 on the isolators appears to come from the bootstrapped supply).

The power stage is very solid - the switches use four 2 mohm devices each for a total on-resistance of 500 uohms per switch. This is much less than the resistance of the motor, and at this level, switching losses are a huge part of the total losses, meaning adding more FETs doesn't necessarily improve performance by much.

For the price, I would have liked to see more isolation - it seems possible to build a 5-channel isolated supply for around $10 in 1ku. A fully isolated design is not inherently superior at 48V, but it greatly eases integration if there are multiple inverters in the system. That being said, testing the isolated supply poses its own challenges; at that point, the LVPS approaches the rest of the inverter in complexity. I would also have liked to see an all-ceramic DC link capacitor, but I don't know how that would have affected the retail cost of the controller - a ceramic capacitor suitable for 140Arms operation might very well be an incredibly expensive object.

Thursday, January 24, 2019

'1800W 48V' brushless motor teardown

A friend recently acquired a '1800W 48V' brushless scooter motor and I decided to have a look inside, ostensibly for the purpose of doing thermal testing.

The rotor measures 63mm (diameter) by 73mm (stack height). This gives an air-gap-area*radius metric of 289 cm^3, which is not too shabby; for comparison, the Sonata HSG, which is a 60Nm motor, is 415 cm^3. The laminations are about .56mm thick, which is not great for high speed performance and is probably a huge reason why these motors are not very efficient.

The stator is surprisingly well-made. The fill factor is OK, and the concentrated windings mean there aren't a ton of end-turn copper losses. The stator also has a pretty high iron-to-copper ratio, which is good for peak torque and not good for efficiency. The large volume of iron in the stator is probably another contributor to the high losses - at peak efficiency, copper losses are less than half of the total losses.

The motor has hall sensors, installed using the standard in-the-slots technique:

The housing is mediocre. The end caps are cast aluminum, and the stator is pressed into a piece of steel tube, which adds quite a bit of weight. The motor is also much longer than it needs to be - out of the 177mm of total length, only 72mm contribute to torque production.

Motor specifications:

Type: Surface PM machine
Pole Pairs: 3
Resistance (line-to-line): 73 mOhms
Inductance (line-to-line): .415 mH
Flux linkage [derived]: 0.036 Vs

Back EMF:

Full of harmonics, but reasonably sinusoidal.

Thermal testing:

Thermal testing was done by passing DC current through a pair of phases while watching the temperature of the end turns with a Flir A65 thermal camera. We initially set a temperature cutoff of 110C, but backed down to 95C after noticing some degradation in either the enamel or the epoxy in the stator at around 100C (this is pretty terrible; good wire can operate at 200C!).

Performance with no additional cooling proved to be rather poor; at 28A (which is the RMS current at 40 peak phase amps), the stator hit 95C and started overheating.

Performance with active cooling (a Sunon PMB1297PYBX-AY 12V blower) proved to be much better; we were able to achieve 33Arms (46 peak amps) with a stator temperature that stabilized at around 90C.

Note that this is a best-case operating scenario; at stall, there are no iron losses. Further testing at speed is planned for a later date.


This is "a lot of motor" - it can produce huge amounts of peak torque. Unfortunately, terrible efficiency, non-existent high-speed performance, and a dubiously low temperature cutoff all severely limit its applications. Even for its advertised application (small electric scooters) it is a poor choice, as 70% peak efficiency wastes around 20% more of the battery's energy than a 90% efficient machine would.
These motors are very close to being good - better wire and thinner laminations, both of which wouldn't drastically increase costs, would go a long way to making them more useful. Maybe in the future, we will see an updated version with these improvements, but for now, I would steer away from these motors.

Wednesday, January 9, 2019

Feiyu Tech A1000 Gimbal Teardown

The Feiyu Tech A1000 is a midsize handheld gimbal for compact cameras and small mirrorless cameras. I recently acquired one and took a look inside, with the ultimate goal of operating the gimbal without the handle, which contains the batteries and some electronics.

The gimbal consists of a "main unit", which is attached to a handle by the means of a threaded collar. The handle contains the controls for the gimbal, as well as the batteries; it is not possible by default to turn the gimbal on without the handle.

The first task was to disassemble the handle. My hunch was that the handle supplied 7.4V (2S Li-Ion) to the inverters in the main unit, and sent pan and tilt commands via serial to a microcontroller in the main unit that ran the stabilization loop and talked to the inverters and IMU via I2C.

The Feiyu gimbals are remarkably easy to take apart - everything was held together with screws with not a plastic snap in sight. Removing the four Phillips screws from the top of the handle released the connector board:

The contacts for the spring-pins on the main unit are just pads on a matte black (!) PCB; the top board is just connectors with no active components.

Removing the four socket cap screws on the side of the handle reveals the bulk of the circuitry:

The module is an NRF51822 carrier module...with some sort of bonus wire on it to act as an antenna. Completely not OK - the reason manufacturers use carriers is to avoid having to undergo additional FCC certification, and adding the extra antenna defeats this. The chip below it (next to the USB port) is a Silabs USB to UART bridge. This is a notable difference from the smaller Feiyu gimbals, which put the UART bridge inside the USB adapter and run serial over the physical USB connector.

The backside isn't too exciting - a buck converter provides power for the electronics on the board (and possibly logic power for the inverters as well). The connectors are all neatly labeled, a nice touch.

Moving on to the inverters (we look inside one motor, but the other three are nearly identical):

The microprocessor is an STM32F303, a popular choice for gimbals. Two shunts are present - no cost-cutting one-shunt techniques were used here.

The power stage is an MPS6536 integrated brushless driver IC. The position sensor is not on the board; presumably, it is on the other side of the motor.

The connector board on the main unit reveals something surprising: unpopulated pads for a NRF51822 module are present.

Presumably at some point during development, a handle-less version was in fact planned, but was aborted before it reached full production.

Some further analysis:

The handle does have a microcontroller in it - it is possible but unlikely that the NRF51822 (which contains a Cortex-M0) is used for the stabilization loop. However, the only data lines running up into the main unit carry 115200 baud serial on them; standard async serial is not easily daisy-chained, and very few IMU's speak UART. Most likely, one of the inverter microcontrollers also does stabilization (this is the case on the Feiyu Tech wearable gimbals, which have no handle).

Thursday, December 13, 2018

Tiny Camera on a Big Lens

With the release of the Nikon Z6 and Z7, Nikon shooters at long last have a way to add stabilization to unstabilized lenses. This presents some nifty opportunities - a decade's worth of fast AF-S primes are now all stabilized, and some very desirable zooms such as the 14-24 and the Sigma 24-35 ART also gain stabilization.

Much more interestingly, the original AF-S supertelephotos all gain stabilization. The VR versions command a $2000 premium over their unstabilized counterparts, so clearly there are substantial (one Z6 per lens!) savings to be had here. The situation is not as magical as it first seems though - small angular motions transform into huge shifts at the sensor, so in-body stabilization is not as effective for long lenses as lens-based stabilization.

I don't have a Z-series camera, but I do own an A7ii (which has a very similar sensor resolution and stabilization system) and an AF-S 500mm f4, and had been contemplating a Z6, so I was interested in testing the effectiveness of IBIS when used with really long lenses.

Testing stabilization is a little tricky, because there is inherently a human factor involved (some people are really good at keeping cameras stable, some less so). For these tests, I settled on a compromise which I felt would be representative of my shooting situations:

  • Lens and camera mounted on a gimbal head on a tripod - I think a setup like this will always have some kind of support underneath it, be it tripod or monopod; other than maybe the 500FL no one is going to be handholding a big supertelephoto prime for very long.
  • Gimbal head locks loosened - if I'm shooting with a long lens, I'm probably also following something that moves. Realistically, the scenario in this test would only show up for slow-moving wildlife or portraiture; any real "action" will require 1/500 or faster anyway to stop subject motion.
  • Camera triggered by pressing shutter button - in the same line of reasoning as above, I wanted to be able to keep my hand on the grip at all times.


Blogger is not really set up for hosting huge images, so the test results are externally hosted here. 100.png, 200.png, 400.png, and 800.png are, respectively, 1/100, 1/200, 1/400, and 1/800 shutter speeds without stabilization; is100.png, is200.png, is400.png, and is800.png are the same speeds with stabilization.


IBIS is effective, even for very long focal lengths. At 1/100 for example, the worst frame (out of 4) with IS off looked like this:

Completely unusable, by most standards. In contrast, the worst frame with IS on looked like this:

Still a bit soft, but this would be usable for smaller output sizes, especially with some careful postprocessing.

Stabilization also helps, but much less visibly, at 1/200:



However, stabilization is not magic. While the 1/100 shots with IS on are usable, they are still not quite as sharp as a 1/800 image:

The 1/200 shots get pretty close, but are still a bit blurrier (the difference would likely not be perceptible with a softer lens).


What did we learn? Well, it seems for at least one shooting scenario (lens supported but not completely locked down), IBIS does make a difference, allowing for at least 2 stops of stabilization. Anecdotally at least, this puts it on par with lens-based stabilization. It's a little hard to tell - lens-based stabilization is supposed to be good for 3+ stops, but there's precious little subject matter which needs a big telephoto prime and moves slowly enough to be shot at 1/30.

We also learned that fast shutter speeds are necessary to extract maximum performance from a telephoto prime. While sensor-based stabilization allows for usable shots at slow shutter speeds, reliably achieving the maximum optical performance of the lens still requires 1/(focal length) or shorter exposures.

The other question is how much more stable the viewfinder image is with IBIS. There are some scenarios where it is possible to shoot handheld, at least for a little while, and having IS is quite useful for framing purposes. Unfortunately, this is much harder to test, and I expect the answer to be quite negative, given how much the viewfinder image moves.

Sunday, July 29, 2018

Field Weakening, Part 2

Recall in a previous post we had found an analytic solution to the field weakening problem. Unfortunately, the model is useless in practice; high currents (which are needed to cancel large amounts of PM flux) result in much lower inductances (which serves to decrease the amount of flux being canceled), resulting in numbers which are implausible and wrong.

However, while back EMF depends on the inductances, flux linkage, currents, and speed, torque is independent of speed - the same \((I_d, I_q)\) will always produce the same torque, no matter what speed the motor is at. Furthermore, we already know the relationship between torque and the axis currents from stall testing, and we can use this data as a black box to look up torque outputs from \(I_d\) and \(I_q\) inputs.

We are going to make an additional huge assumption: at high speeds, the current is low. This is not necessarily true, but for motors designed to be aggressively field weakened, the achievable current is likely low due to the high inductances. This assumption means we can use the voltage equations to compute the back EMF for most of the field weakened operating regime. Of course, there will be a transition around base speed where this assumption doesn't hold, but we can "fix that in post".

Armed with this, we can write a simple C++ program (source, executable, sample input) to search the entire space of \(I_d\) and \(I_q\) values. The program is not particularly good or fast, but the brute-force approach makes it very robust and trivially extensible to a saturated motor (just override the Vs2() function in the MotorModel class with a lookup table based one). In contrast, Newton's-method based approaches seem to fail if the voltage surface is too complex.

The program generates some very reasonable output; for example, the following plot of power and torque versus speed for the HSG at 160V:

The flat part of the torque-speed curve extends up to what would traditionally be called "base speed" [1]. A surface PM machine spends most of its time operating in this regime, as operating over base speed results in reduced power output and efficiency. In contrast, an IPM is a constant-power device past base speed; this has several implications for system design:

Hybrid vehicles: Field weakening is very important for hybrid vehicles.  Consumer hybrids have electric subsystems optimized for city driving. In order to optimize efficiency in this scenario, it is beneficial to have a high reduction between the motor and the wheels, to reduce the motor current required to accelerate the car. This typically means putting base speed somewhere around 40 mph, which means at highway speeds, the motor is operating well beyond base speed. Being able to produce power at these speeds is important for consistent performance.

There is also a class of emerging high-performance hybrids. Typically, these use a combination of one or two motors, a medium sized (around 5 kWh) battery pack, and a very high power forced-induction internal combustion engine. The electric subsystem is used to compensate for the narrow power band of the ICE by adding additional low-speed torque. It also usually provides power to all four wheels, improving handling and launch performance. Finally, it improves the regulatory status of such cars by at least nominally increasing the fuel economy. Once again, we find it beneficial to place base speed at a relatively low speed in order to maximize the launch torque delivered to the wheels (and reduce the weight of motor required to deliver that torque to the wheels); consequently, field weakening is needed to prevent the top speed of the car from being voltage-limited.

Pure electric vehicles: It is widely known that most EV's have a single-speed gearbox. This is entirely due to the power-speed profile of an IPM [2]; as the motor can reach peak power at very low speeds, a variable-speed transmission is not necessary to maximize power output across the entire operating range of the vehicle.

In fact, we can simulate the broad power band of an IPM with a surface PM machine and a continuously-variable transmission. It is usually not desirable to do so [3]; multi-speed transmissions incur additional complexity, weight, cost, and losses, usually negating the improved torque density of the surface PM motor. The only cars that use surface PM motors (Honda, Hyundai) are hybrids which are strongly derived from existing gas-only cars and already have manual transmissions.

Combat robots [4]: Spinner weapons are very similar to cars - both are inertial loads that have highly variable speed profiles. Interior PM machines have obvious mechanical benefits, as the rotors are much more robust. In addition, having a virtually unlimited top speed makes match-ups more consistent. Having moderate weapon speeds is usually beneficial, as it improves energy transfer and tooth engagement. However, in the vertical-on-vertical matchup (which is becoming much more common), the robot with the higher blade speed hits first. In this case, being able achieve very high speeds can greatly improve chances of victory.

And of course, higher-speed weapons hit harder if they do engage, so having the option to spin up to very high energies can be beneficial in certain situations.


[1] Technically, base speed also depends on stator current, so the correct terminology would be 'the base speed of the motor is 2000 rpm at 180A'.

[2] Induction machines (Tesla) and synchronous reluctance motors (no one yet) have similar characteristics, and trade off torque density for reduced cost.

[3] There are some designs which use a 2-speed transmission to further improve efficiency below base speed.

[4] No one has done this yet, but someone should!

Saturday, July 28, 2018

IPM's: an overview

The brushless motors we typically see on the mass market are "surface PM" machines. In this configuration, the permanent magnets (PM's) are glued to the surface of a steel rotor. Torque is generated by rotating the magnetic field in the stator electronically, which in effect continuously "pulls" the PM's on the rotor towards the coils on the stator.

In contrast, all automotive PM motors are "interior PM" machines. This means the magnets are buried inside a steel rotor. While this seems counter-intuitive at first (doing this moves the magnets further from the stator and makes the rotor heavier), putting the magnets inside a chunk of steel gives the motor several features which are highly beneficial for traction applications.

Greatly increased inductance: The surface PM motor has low inductance. This is because the PM's have a much lower permeability than steel, effectively putting a huge air gap in the flux path. In contrast, the interior PM machine places the rotor steel very close to the stator teeth; the magnetic air gap is only the size of the physical air gap, and this greatly increases the inductance, often by a factor of 10 over a similarly-sized surface PM machine.

Having high inductance is important, because for traction applications, the switching frequency is primarily determined by the allowable current ripple (excessive current ripple increases the resistive losses in the copper and conduction and switching losses in the inverter). Being able to reduce the switching frequency can drastically reduce inverter losses. Conversely, for some types of very low inductance and resistance motors (Emrax, Yasa), system efficiency is much lower than what the motor specifications alone would indicate, as Si IGBT inverters have a hard time efficiently driving these types of motor.

Position varying inductance: Because the buried magnet pockets make the rotor magnetically asymmetric, the inductance seen by the stator varies with rotor position. Why does this matter? Recall that inductance stores energy, and torque is the angle derivative of the co-energy of a system (or, roughly speaking, the system will try to settle to its lowest energy state). This means that by properly manipulating the stator currents, we can use this varying inductance to generate torque: the so-called reluctance torque. Reluctance torque is beneficial because it behaves very differently from the torque generated by the attraction of the magnets to the stator (the PM torque); it grows with both d and q-axis current, and doesn't necessarily generate additional back EMF.

We typically assume that the inductances vary sinusoidally; the typical model therefore has two inductances, \(L_d < L_q\), the "d-axis" and "q-axis" inductances.
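The standard dq-frame torque equation makes the two torque components explicit: \(T = \frac{3}{2} p \left( \lambda_{pm} i_q + (L_d - L_q) i_d i_q \right)\), where the first term is the PM torque and the second is the reluctance torque. A minimal sketch, with pole count, flux linkage, and inductances assumed purely for illustration:

```python
# Sketch: standard dq-frame torque equation for a PM synchronous machine.
#   T = (3/2) * p * (lambda_pm * i_q + (L_d - L_q) * i_d * i_q)
# For a surface PM motor L_d == L_q, so the reluctance term vanishes.
# For an IPM, L_d < L_q, so negative i_d produces *positive* extra torque.

def dq_torque(p, lam, L_d, L_q, i_d, i_q):
    """Electromagnetic torque (N*m); p = pole pairs, lam = PM flux linkage (Wb)."""
    return 1.5 * p * (lam * i_q + (L_d - L_q) * i_d * i_q)

# Assumed illustrative IPM parameters:
p, lam = 4, 0.05            # 4 pole pairs, 50 mWb flux linkage
L_d, L_q = 100e-6, 250e-6   # saliency ratio L_q / L_d = 2.5

print(dq_torque(p, lam, L_d, L_q, 0.0, 100.0))     # PM torque alone
print(dq_torque(p, lam, L_d, L_q, -100.0, 100.0))  # PM + reluctance torque
```

Note that the same negative \(I_d\) that adds reluctance torque is also the current used for field weakening, which is why the two features work so well together on an IPM.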

Field weakening: Field weakening uses the stator inductance to generate a voltage that counters the back EMF produced by the permanent magnets. This is typically done by injecting current on the d-axis (on a surface PM motor, \(I_d\) is normally close to zero). Field weakening is typically presented as an atypical operating regime, a way to get a little extra speed out of your motor after you've run out of volts. This is because surface PM motors have very low inductance and relatively high flux linkage, necessitating a large amount of d-axis current to cancel out the PM flux. Furthermore, \(I_d\) only serves to generate heat on surface PM motors, and produces no additional torque.

In contrast, IPM's have a much higher ratio of inductance to flux linkage, which means the d-axis current needed to cancel the PM flux is much lower. Furthermore, because of reluctance torque, the d-axis current generates some torque, so it is not entirely wasted. In fact, well-designed IPM's have virtually no top speed; the top speed is not limited by available voltage, but rather by rotor mechanical integrity and hysteresis losses.
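One way to quantify "virtually no top speed" is the characteristic current, \(I_{ch} = \lambda_{pm} / L_d\): the d-axis current needed to fully cancel the magnet flux. If the drive can supply \(I_{ch}\) continuously, available voltage no longer caps speed. A sketch with assumed, illustrative parameter values:

```python
# Sketch: characteristic current I_ch = lambda_pm / L_d, the d-axis current
# that fully cancels the PM flux. A drive that can sustain I_ch is not
# voltage-limited in speed; mechanics and iron losses take over instead.

def characteristic_current(lam, L_d):
    """d-axis current (A) required to cancel the PM flux linkage."""
    return lam / L_d

# Assumed illustrative numbers:
spm = characteristic_current(0.05, 20e-6)   # low-L surface PM: impractically large
ipm = characteristic_current(0.02, 200e-6)  # high-L, lower-flux IPM: reachable
print(spm, ipm)
```

The surface PM machine's characteristic current comes out far beyond anything the drive (or the copper) can sustain, while the IPM's lands within normal operating current, which is exactly the contrast the paragraph above describes.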

Higher speed operation: The rotor iron has an obvious benefit: it mechanically constrains the PM's and prevents them from flying off at high speeds. Since power is torque times speed, being able to run a motor twice as fast at the same torque means it can make twice the power; despite their slightly lower torque density, IPM's can therefore have higher power density than their surface PM counterparts.

High Speed VCR 2018

I'm a huge enthusiast of thin desktops. I have no idea why - normally such systems are used for HTPC duties or in very space constrained labs and offices, but my desk is not particularly small and I don't even own a TV.  The low-profile cases are about as small as cases get (they have a smaller interior volume and footprint than the cubes), and fitting everything into <85mm z-height makes for an interesting challenge.

Core Component Thoughts

Most HTPC-type systems are built around the "small" platform - currently, Z370 on the Intel side, X470 on the AMD side. These platforms offer low latencies, high clock speeds, and tons of integrated connectivity, but don't offer many cores compared to the state of the art. In contrast, the "high-end" desktop platforms are derived from server hardware - the boards have loads of PCIe lanes but very little integrated functionality, and the CPU's have many cores lashed together in weird and wonderful ways (rings, grids, clusters, and in the AMD case, multiple dies).

There are currently two possible routes for a USFF high core count system - the current-generation X299e-ITX/ac, or the now-discontinued X99e-ITX/ac. The X299 offers access to the latest platform features and CPU architecture, but as LGA2066 is not shared with any Xeons, the CPU's are quite expensive - the 10c part costs $899 and prices only go up from there. X99, in comparison, is kind of long in the tooth by now, but the CPU's are more accessible; an 18c 2.3/3.6 part used to be about $500 on the used market, and will likely be again once major datacenter upgrades flood eBay with used CPU's. With current pricing, X299 is certainly the correct choice; the 2699 V3 will perform similarly to a 14-core i9, costs about the same right now, and the i9 offers a full generation of platform and core improvements.

There is also no reason to go with anything under 12 cores. Ryzen will get you to 8 cores on a very power efficient platform (trust me, you are not overclocking anything on a computer this dense), and the 10-core i9 costs much more than any of the 8-core processors since Intel charges a "PCIe tax". 

Since I had a 2699 V3 available from the $500 days, I went with a X99 build (I had also hoarded an X99e-ITX from when they were $120 on eBay; prices have since jumped up to $200-300). The final selection was:
  • Motherboard/CPU: ASRock X99e-ITX + E5-2699 V3 - really no other choices here.
  • RAM: Crucial Ballistix Sport LT DDR4-2400: I really like the Ballistix Sport LT series; the gray heatspreaders are inoffensive and functional, and the DIMMS are pretty low profile - there are no useless protrusions on the heatspreaders to run into the CPU cooler.
  • Storage: Inland Professional 256GB NVMe - these are just reference Phison PS5008-E8 + Toshiba BiCS drives. They are incredibly cheap and offer better-than-SATA performance. Being M.2 also means one less cable to route in a case that is incredibly cramped with wires. My usual choice would be a Samsung 970 PRO, but at 3.6 GHz you can't really feel the difference between a fast drive and a slow one, especially when you take into account the Windows scheduler adding extra latency by moving threads between the many cores.
  • Graphics: ...I should really get a real GPU for this thing, but based on previous experiences, anything but the really big cards (Asus STRIX line, I'm looking at you) will fit.

Everything Else

Building these things is really an arts-and-crafts project, especially when you have as many computers as I do.  As such, picking the not-computer parts of the computer is much harder than selecting the parts that do the computing.


My usual case for this type of nonsense is the Silverstone ML08, which is nicely priced and is as thin as possible (the minimum allowable clearance for an ATX case is 58mm). Unfortunately, the extra tight cooler clearance makes fastening a cooler to the board nearly impossible, since 2011/2066 heatsink mounting screws have to go in from the top. I was also interested in trying the latest crop of Silverstone cases, which add an extra inch or so of clearance in order to fit an ATX power supply. All the 83mm-clearance Silverstones are based on the same chassis, just with different trims. I went with the RAVEN RVZ03, since I am a fan of RGB lighting.

Power Supply

The RVZ03 somewhat misleadingly supports ATX power supplies. While it is true that the mounting holes are for an ATX supply, most supplies flat-out don't fit; the case really requires a 140mm or shallower power supply to leave cable clearance. Furthermore, like the ML08, the RVZ03 uses an internal right angle IEC extender to place the power jack on the case somewhere reasonable. This caused a ton of problems - the CX550M I bought had a power jack too close to the left side of the power supply, which caused the extender to collide with the side of the case, and the Seasonic Focus+ 550W had a power switch which collided with the molding on the right-angle connector, causing the switch to get stuck in the "off" position.

I eventually gave up and bought Silverstone's own 500W SFX-L supply. The power supply fit great, but as the X99e-ITX has its power connectors rotated 90 degrees from most ITX boards (the 24-pin is in the upper left corner), the stock 24-pin cable wasn't long enough. Thankfully, Silverstone makes a long cable set for this exact purpose; the kit is amazing for small builds since the 24-pin cable is only 550mm long, which is ~100mm shorter than usual.


This whole project was made possible by an obscure-and-discontinued Cooler Master GeminII S heatsink. Low-profile LGA20xx coolers are hard to find - the reference socket backplate uses studs that are tightened from the top, meaning the cooler has to leave sufficient clearance to allow the studs to be tightened. My original plan was to use a Hydro H55 with a slim fan; measurements showed that the clearance would be sufficient. Unfortunately, packing the tubing into the case was pretty much impossible - it could be made to fit, but there was no way to gauge if excessive force was being applied to vertical components on the motherboard. Silverstone claims that a slim fan + slim radiator AIO will fit in this case, but even that seems doubtful...

The stock GeminII S doesn't quite fit - the 25mm fan is about 3mm too tall. I started out by mounting a 15mm fan from a GeminII M4, but that wasn't quite enough, so some more work was required...

Stuffing It All In

This was definitely the hardest computer I've ever assembled. The 58mm Silverstone cases are pretty easy to work on - the top and the bottom both come out, the GPU mounts from the back, and there is an access hole behind the socket to install the CPU cooler. In contrast, the 83mm cases only have one removable side, and the GPU is mounted on a plastic subframe that installs from the top; this makes cable routing far less pleasant. Without the 550mm long 24-pin this would probably have been impossible - I don't think another 100mm of cable would have physically fit in the case.

Performance Tuning

The 2699 v3 has an 80C temperature limit - once it hits 80C, it slowly drops out of turbo to stabilize temperatures. It's a graceful falloff - rather than dithering between 800MHz and 2.8GHz like some processors would, it decreases the multiplier a bin at a time until it achieves thermal equilibrium.

Initial performance was poor; the processor would hit 80C and drop to about 2.2GHz, which is below even the base speed of the 2699 v3. More concerning, Intel's throttling algorithm seems to favor the core over the uncore - uncore speeds were dropping by as much as 50%, which was sure to affect performance in some applications.

Fortunately, upon further investigation it appeared I had plugged the CPU fan into the 'SYS_FAN' header on the board, which caused the CPU fan to get stuck at its lowest speed (SYS_FAN tracks the chipset temperature, not the CPU temperature). Swapping headers greatly improved performance; the CPU now stabilized at 2.5GHz, and the uncore throttling was gone.

But we can do better! Most 25mm fans have a few mm of superfluous plastic on top - by milling that plastic off I was able to get a 25mm thick Corsair fan to barely fit in the case. Installing the thicker fan bumped clock speeds up another 200 MHz, and dropping Vcore by 50 mV in XTU allowed the processor to maintain 2.8GHz steady state under full load.