We focus on preventing failures of AI systems by building transparent, verifiable, and explainable machine learning models. We investigate flaws specific to AI software, including vulnerabilities to malicious and adversarial inputs. Increased reliability enables use in critical decision-making settings, as well as in situations that demand dependable human-machine collaboration. We also evaluate data-analysis techniques against deanonymization attacks and develop privacy-preserving, GDPR-compliant methods.