Our publication “Towards Logical Specification of Adversarial Examples in Machine Learning” is now available online. This is collaborative work with researchers at IRIT and CEA List. The paper proposes an approach to specifying and detecting adversarial example threats in component-based software architecture models using first-order and modal logic. The general idea is to specify the threat as a property of the modeled system such that a violation of the specified property indicates the presence of the threat. We demonstrate the applicability of the method on a classifier used in a recommendation system. See Publications for more details!
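As a rough intuition for the property-violation idea (a toy sketch, not the paper's formalism): an adversarial example can be seen as a witness that a local robustness property of a classifier, "every input within an ε-ball of x receives the same label as x", is violated. The classifier, the sampling-based search, and all names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def classify(x):
    # Toy stand-in classifier: label depends on the sign of the first feature.
    return int(x[0] > 0.0)

def find_violation(x, eps=0.1, n_samples=1000, seed=0):
    """Search for a perturbed input inside the eps-ball around x whose
    label differs from x's label, i.e. a witness that the property
    "forall x' with ||x' - x||_inf <= eps: f(x') == f(x)" is violated."""
    rng = np.random.default_rng(seed)
    base = classify(x)
    for _ in range(n_samples):
        delta = rng.uniform(-eps, eps, size=x.shape)
        x_adv = x + delta
        if classify(x_adv) != base:
            return x_adv  # property violated: adversarial example found
    return None  # no violation found within the sampled budget

x = np.array([0.05, 1.0])
adv = find_violation(x, eps=0.1)
print(adv is not None)
```

Here `x` sits close to the toy decision boundary, so a small perturbation flips its label; the returned witness plays the role of the detected threat.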