Model stitching, a method developed to test the functional similarity of neural representations, has shown that different neural networks trained on the same task have surprisingly similar internal representations. However, it is widely believed that adversarially robust networks differ considerably from their non-robust counterparts and therefore produce different representations. This is evidenced by the fact that the representations of robust image classifiers are invertible, whereas non-robust representations are not. Our research investigates the similarities and differences between robust and non-robust representations from a functional standpoint. We show that these representations are compatible with respect to clean accuracy, but in terms of robust accuracy this compatibility drops quickly after the first few preprocessing layers. In the later layers, however, stitching tests a different kind of similarity. We use this to show that, for any classifier, these layers depend mostly on the abstract task specification, and therefore cross-task stitching is also feasible, even across robust networks and adversarial classification tasks.
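
For readers unfamiliar with the technique, the following is a minimal sketch of model stitching: the front of one frozen network is connected to the back of another through a small trainable stitching layer, and only that layer is optimized on the task loss. The choice of ResNet-18, the split point after layer2, the 1x1 convolutional stitching layer, and all hyperparameters are illustrative assumptions, not the exact setup used in this work.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# Two independently trained networks (randomly initialized here as stand-ins;
# in practice these would be pretrained robust / non-robust classifiers).
net_a = resnet18(num_classes=10)
net_b = resnet18(num_classes=10)

# Front of network A: everything up to and including layer2.
front = nn.Sequential(net_a.conv1, net_a.bn1, net_a.relu, net_a.maxpool,
                      net_a.layer1, net_a.layer2)

# Back of network B: everything after layer2.
class Back(nn.Module):
    def __init__(self, net):
        super().__init__()
        self.net = net

    def forward(self, x):
        x = self.net.layer3(x)
        x = self.net.layer4(x)
        x = self.net.avgpool(x)
        return self.net.fc(torch.flatten(x, 1))

back = Back(net_b)

# The stitching layer: a trainable 1x1 convolution mapping A's activations
# into B's representation space (layer2 of ResNet-18 has 128 channels).
stitch = nn.Conv2d(128, 128, kernel_size=1)

# Freeze both networks; only the stitching layer is trained.
for p in front.parameters():
    p.requires_grad_(False)
for p in back.parameters():
    p.requires_grad_(False)

model = nn.Sequential(front, stitch, back)
optimizer = torch.optim.Adam(stitch.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on random data standing in for the real task.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 10, (8,))
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"stitching loss: {loss.item():.4f}")
```

The accuracy of the stitched model, relative to the original end-to-end networks, is then read as a measure of how functionally compatible the two representations are at the chosen layer.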