Why your Creality K2's input shaper recommendation might be wrong
If you've run SHAPER_CALIBRATE on a Creality K2 and squinted at the recommendation, you're not imagining things. The K2 ships with a fork of Klipper that's a long way behind upstream master, and several of the differences are inside the input-shaper analysis itself. Most of the time the recommendation is fine. Some of the time it's biased toward zv even when mzv or ei would give you 10 to 30 percent less residual vibration on the same dataset.
This is a write-up of what I found when I diffed Creality's shaper_calibrate.py against upstream Klipper, what I patched on my own K2, and how the calibrated values changed.
Why the K2 even has its own copy
Creality didn't just rebadge stock Klipper. They added a C++ FFT helper at /usr/bin/calc_psd and a shared-memory IPC path that lets the controller's tiny RAM run a power-spectral-density calculation the stock Python code can't fit. That part is genuinely a good engineering decision — the K2's controller would otherwise OOM mid-calibration. The downside is the surrounding Python got pinned to whatever Klipper master looked like at fork time, and upstream has since fixed several maths bugs that never got pulled back.
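For context, the quantity being offloaded is an ordinary power spectral density of the accelerometer capture, which stock Klipper computes host-side in numpy. Here's a rough stand-in using scipy's Welch estimator; this is not Klipper's own windowed-FFT code, and it says nothing about calc_psd's actual shared-memory interface:

```python
import numpy as np
from scipy.signal import welch

RATE = 3200                            # typical ADXL345 sample rate, Hz
samples = np.random.randn(RATE * 5)    # stand-in for a real 5-second capture

# The PSD of the capture is the curve the shaper analysis scores: energy
# per frequency bin, with resonances showing up as peaks.
freqs, psd = welch(samples, fs=RATE, nperseg=1024)
print("dominant peak near %.1f Hz" % freqs[np.argmax(psd)])
```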
Five differences that actually matter
I'll describe each one in plain English. Unless noted otherwise, the patches target /usr/share/klipper/klippy/extras/shaper_calibrate.py.
1. is 0 vs == 0 (Python 3.8+)
In two places, the file checks if retcode is 0:. Python 3.8+ flags that with a SyntaxWarning, and on any interpreter that doesn't cache the relevant integers it's silently a bug: is checks identity, not equality, and small-int caching is an implementation detail, not a language guarantee. Upstream fixed both to == 0 years ago. This rarely bites, but it's exactly the kind of latent issue that surfaces after a Python bump.
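Here's a minimal demonstration of why the distinction matters; the variable names are mine, not the fork's:

```python
# CPython caches small ints (-5..256), so `x is 0` usually works by accident.
# Larger values are not cached, which is how identity checks eventually bite.
a = 256
b = int("256")
print(a is b)    # True on CPython, but only because of the small-int cache
c = 257
d = int("257")
print(c is d)    # False: equal values, distinct objects
print(c == d)    # True: value equality is what the code actually means
```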
2. Frequency rolloff: hard zero vs smooth exponential
The function normalize_to_frequencies decides how to weight the response curve at very low frequencies. Creality's version hard-zeros everything below twice the minimum analysis frequency. Upstream uses a smooth exponential rolloff in that region.
In practice this matters when your printer's worst resonance is genuinely in the lower band — small bedslingers, heavy toolheads, soft mounts. The hard zero throws away signal that should bias the recommendation toward a shaper that suppresses low frequencies, typically mzv or ei. The smooth rolloff keeps that signal in.
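To make the two behaviours concrete, here is an illustrative numpy sketch. The exact rolloff curve upstream uses differs; what matters is hard zero versus smooth attenuation:

```python
import numpy as np

MIN_FREQ = 5.0                        # illustrative analysis floor, Hz
freqs = np.linspace(0.0, 200.0, 4001)
psd = np.ones_like(freqs)             # stand-in for a measured response curve
low = freqs < 2.0 * MIN_FREQ

# Fork-style hard zero: a genuine resonance at, say, 8 Hz vanishes entirely.
hard = psd.copy()
hard[low] = 0.0

# Upstream-style smooth rolloff: the same 8 Hz resonance is attenuated but
# still contributes to the shaper scoring.
smooth = psd.copy()
smooth[low] *= np.exp(-(((2.0 * MIN_FREQ - freqs[low]) / MIN_FREQ) ** 2))
```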
3. ZV anti-bias fallback
This is the big one. find_best_shaper picks the shaper with the lowest "vibrations" score by default. Upstream added a sanity check: if zv wins but its margin over the runner-up is less than 10 percent, prefer the runner-up instead. The reasoning is that zv has the highest smoothing penalty and the narrowest tuning bandwidth, so it's only worth picking when it genuinely outperforms the alternatives, not when it squeaks past by a hair.
Creality's fork doesn't have the fallback. Result: zv wins close calls it shouldn't. On my Y axis, before patching, zv won by about 3 percent over mzv. After the fallback, mzv won by 11 percent on a clean re-test. mzv is also kinder to print speed because it has lower smoothing.
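In sketch form, the rule looks something like this. It's a simplification (upstream's find_best_shaper also weighs smoothing and achievable acceleration); the 10 percent margin is the load-bearing part:

```python
# Simplified sketch of the anti-bias fallback, not upstream's exact code.
# `scores` maps shaper name -> residual vibration score (lower is better).
def pick_shaper(scores, margin=0.10):
    best = min(scores, key=scores.get)
    if best != 'zv':
        return best
    others = {name: s for name, s in scores.items() if name != 'zv'}
    runner_up = min(others, key=others.get)
    # zv keeps the win only if it beats the runner-up by at least `margin`;
    # otherwise its smoothing penalty and narrow tuning bandwidth outweigh
    # a marginal vibration advantage.
    if scores['zv'] > (1.0 - margin) * others[runner_up]:
        return runner_up
    return best

# With Y-axis numbers like mine (zv ahead by ~3 percent), it flips to mzv:
print(pick_shaper({'zv': 0.97, 'mzv': 1.00, 'ei': 1.15}))   # mzv
```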
4. _bisect edge-case guard
The bisection routine that picks the maximum acceleration can return garbage if no valid acceleration exists within the search range. That typically only happens on extremely soft setups, but when it does, the bogus max_accel cap propagates into your slicer profile. Upstream returns 0 in that case so the caller can detect the failure; Creality's version doesn't.
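A minimal sketch of the guard, with a hypothetical is_valid predicate standing in for the real vibration-tolerance test:

```python
# Illustrative bisection with the upstream-style failure guard.
# `is_valid(accel)` is a made-up stand-in: True if this acceleration keeps
# residual vibration within tolerance.
def find_max_accel(is_valid, lo=0.0, hi=100000.0, iters=50):
    if not is_valid(lo):
        # Nothing in the search range works (think: extremely soft setup).
        # Return 0 so the caller can detect the failure instead of letting
        # a garbage max_accel cap propagate into slicer profiles.
        return 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if is_valid(mid):
            lo = mid    # mid is safe; search higher
        else:
            hi = mid    # mid vibrates too much; search lower
    return lo
```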
5. Shaper test list
The AUTOTUNE_SHAPERS macro in gcode_macro.cfg only tests zv, mzv, ei by default. Upstream Klipper tests all five — adding 2hump_ei and 3hump_ei. Those two are aimed at printers with multiple distinct resonance peaks, and on a CoreXY toolhead carrying both an extruder and an accelerometer, you absolutely have multiple peaks. If your data wants 2hump_ei, the stock K2 macro will never even consider it.
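Widening the list is a config edit rather than a Python patch. The sketch below assumes the stock macro keeps the shaper list in a variable_ setting; check your own gcode_macro.cfg for the exact key before copying anything:

```
[gcode_macro AUTOTUNE_SHAPERS]
variable_autotune_shapers: '["zv", "mzv", "ei", "2hump_ei", "3hump_ei"]'
gcode:
```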
What the patches did to my numbers
Same dataset, same printer, same accelerometer file — different recommendation because the analysis code is doing something different.
- X axis, stock fork: mzv at about 57 Hz, max_accel about 5800.
- X axis, upstream-patched: ei at 58.4 Hz, max_accel 6400.
- Y axis, stock fork: zv at about 50 Hz, max_accel about 5200.
- Y axis, upstream-patched: mzv at 49.4 Hz, max_accel 7200.
The Y axis flipping from zv to mzv is exactly the anti-bias fallback firing. The X axis stayed close in frequency but moved up the shaper hierarchy under the patched scoring and the broader test list.
Should you patch yours?
Honest answer: try it and see. The patches are reversible — every edit script I wrote takes a timestamped backup first (.bak-shaperupgrade-<unix-ts> next to each file), so a single cp puts the original back. The risk is low because nothing changes the C++ FFT helper or the IPC path; only the Python that interprets its output.
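The backup step is nothing exotic. In Python terms it's roughly this, a sketch of the naming convention rather than the actual script:

```python
import shutil
import time

TARGET = "/usr/share/klipper/klippy/extras/shaper_calibrate.py"

# Timestamped copy next to the original, so one cp restores everything.
backup = "%s.bak-shaperupgrade-%d" % (TARGET, int(time.time()))
shutil.copy2(TARGET, backup)
print("restore with: cp %s %s" % (backup, TARGET))
```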
If your prints look fine, don't bother. If you have ringing on the Y axis and SHAPER_CALIBRATE keeps suggesting zv, this is a reasonable rabbit hole.
A separate but related issue is the copy_TestAxis_y_to_x flag in resonance_tester.py, which silently overwrites your X-axis shaper with the Y result on every Y calibration run. That has its own write-up: Why CoreXY printers need separate X and Y input shaper calibration.
What's coming
I'm building a Resonance Graph Viewer that takes a SHAPER_CALIBRATE CSV and shows both Creality's pick and upstream's pick side-by-side, with the residual-vibration delta visualised. That makes this whole post a one-click diagnosis instead of a 30-minute Python diff. Watch the Hark Tech tools page for it.
If your K2 isn't behaving the way it should, send it in for a tune-up — half the time it's exactly this kind of firmware-vs-upstream gap.