13 abliteration methods, 116 models, 837 tests, and it gets SMARTER every time someone runs it

it's called OBLITERATUS
it finds the exact weights that make the model refuse and surgically removes them. full reasoning stays intact, only the refusal disappears
15 analysis modules map the geometry of refusal BEFORE touching a single weight. it can even fingerprint whether a model was aligned with DPO, RLHF, or CAI from subspace geometry alone
then it cuts. the model keeps its full brain but loses the artificial compulsion to say no
every time someone runs it with telemetry enabled, their anonymous benchmark data feeds a growing community dataset: refusal geometries, method comparisons, and hardware profiles at a scale no single lab could build
https://github.com/elder-plinius/OBLITERATUS
https://t.co/zv9bokcr4X
#JailbreakAi #OBLITERATUS