EnnCore: End-to-End Conceptual Guarding of Neural Architectures


Title
EnnCore: End-to-End Conceptual Guarding of Neural Architectures

CoPED ID
0784b5b9-e4ee-4774-b321-70365bd8c087

Status
Active


Value
£8,607,795

Start Date
Feb. 1, 2021

End Date
July 31, 2024

Description

EnnCore will address a fundamental security problem in neural-based (NB) architectures, allowing system designers to specify and verify a conceptual/behavioral hardcore for the system, which can be used to safeguard NB systems against unanticipated behavior and attacks. It will pioneer the dialogue between contemporary explainable neural models and full-stack neural software verification. We will, therefore, develop methods, algorithms and tools to achieve fully-verifiable intelligent systems, which are explainable, whose correct behavior is guaranteed, and which are robust against attacks.

EnnCore will be validated on three diverse and high-impact application scenarios: (i) securing an AI system for cancer diagnosis (health -- Cancer Research UK, The Christie); (ii) ensuring ethical and legal behavior of a conversational agent (health -- Cancer Research UK, The Christie); and (iii) securing an AI system for demand response (energy -- Urbanchain). The use cases will be co-designed and validated under real-world data conditions with the help of one clinical and one industrial partner.

As a result, EnnCore will address a fundamental research problem to ensure the security of neural-enabled components by taking into account their entire lifecycle from development to deployment. Solving this research problem will have a far-reaching impact on areas such as health care, ethically grounded AI and demand response, which heavily depend on secure and trusted software components to meet safety-critical requirements. Therefore, our overall research objective is to have a long-term impact on writing secure and trusted AI-based software components, thus contributing to a shared vision of fully-verifiable software, where underlying NB-based architectures are built with strong symbolic and mathematical guarantees.

To achieve this objective, EnnCore will design and validate a full-stack symbolic safeguarding system for NB architectures. This project will advance the state-of-the-art in the development of secure Deep Neural Network (DNN) models by mapping, using and extending explainability properties of existing neuro-symbolic DNN architectures (e.g., Graph Networks, Differentiable Inductive Logic Programming), thus safeguarding them with symbolic verification, abstract interpretation and program synthesis methods. EnnCore will pioneer the multi-disciplinary dialogue between explainable DNNs and formal verification.

In particular, EnnCore will deliver safeguarding for safety-critical NB architectures with the following novel properties:

(1) Full-stack symbolic software verification: we will develop the first bit-precise and scalable symbolic verification framework to reason over actual implementations of DNNs, thereby providing further guarantees of security properties concerning the underlying hardware and software, which are routinely ignored in the existing literature (a minimal illustration of this kind of bit-precise query appears below, after item (3)).

(2) Explainability / Interpretability: EnnCore will pioneer the integration of knowledge-based and neural explainability methods to support end-users in specifying security constraints and diagnosing security risks, in order to provide assurances about the security of NB models as they evolve. Particular attention will be given to the quantitative and qualitative characterization of semantic-drift phenomena in security scenarios.

(3) Scalability: we will systematically combine contemporary symbolic methods for explaining, interpreting and verifying neural representations. In particular, we will develop a neuro-symbolic safeguard framework that links structural knowledge-based representation elements to attentional architecture elements, achieving scalability and precision in an unprecedented manner. We will also develop new learning techniques for reusing information across different verification runs to reduce formula size and to consistently improve constraint solving (see the incremental-solving sketch below).
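
To make the bit-precise verification in (1) concrete, here is a minimal sketch (illustrative only, not the project's actual framework, and assuming the z3-solver Python package) that uses the Z3 SMT solver's IEEE 754 floating-point theory to ask whether a one-neuron ReLU model can violate an output bound over a bounded input domain; the weight, bias, domain and threshold are hypothetical.

    # Hypothetical example: bit-precise (IEEE 754 single-precision) check of a
    # one-neuron ReLU model y = relu(w*x + b) against the bound y <= 1.0 for x in [0, 1].
    from z3 import (FP, FPVal, Float32, RNE, fpAdd, fpMul, fpMax,
                    fpGEQ, fpLEQ, fpGT, Solver, sat)

    x = FP('x', Float32())
    w = FPVal(2.5, Float32())    # hypothetical weight
    b = FPVal(-1.0, Float32())   # hypothetical bias

    pre_act = fpAdd(RNE(), fpMul(RNE(), w, x), b)   # w*x + b with round-to-nearest-even
    y = fpMax(pre_act, FPVal(0.0, Float32()))       # ReLU

    s = Solver()
    s.add(fpGEQ(x, FPVal(0.0, Float32())),
          fpLEQ(x, FPVal(1.0, Float32())))          # input domain x in [0, 1]
    s.add(fpGT(y, FPVal(1.0, Float32())))           # negate the safety property y <= 1.0

    print("counterexample found" if s.check() == sat else "property holds")

Because the query is posed over the floating-point semantics the implementation actually executes, rather than over idealised reals, it can expose behaviour that real-valued analyses of the same model would miss.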
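
The reuse of information across verification runs mentioned in (3) can be illustrated, assuming an SMT-based back end, by incremental solving: the shared encoding is asserted once and each property is checked in its own scope, so the solver does not re-translate the common constraints for every query. The linear "model" below is a deliberately trivial stand-in.

    # Hypothetical example: checking several properties against one shared encoding
    # by reusing a single incremental solver instead of restarting per query.
    from z3 import Real, Solver, Not, And, sat

    x, y = Real('x'), Real('y')
    s = Solver()
    s.add(y == 3 * x + 1)         # shared encoding of the system under verification (toy)
    s.add(And(x >= 0, x <= 1))    # shared input domain

    for name, prop in [("output-upper-bound", y <= 4),
                       ("output-lower-bound", y >= 1)]:
        s.push()                  # scope only the property-specific constraint
        s.add(Not(prop))          # search for a violation of the property
        print(name, "violated" if s.check() == sat else "holds")
        s.pop()                   # drop the property, keep the shared encoding loaded

The push/pop pattern keeps the common constraints loaded between queries; a full framework would additionally cache lemmas and abstractions learned in earlier runs, which is the learning component item (3) refers to.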


More Information

Potential Impact:
EnnCore will be co-designed with the support of clinical and industrial partners in the fields of medical diagnosis and energy, respectively. The design of the supporting use-case scenarios, requirements and benchmarks will be defined in coordination with our partners, thus mapping to their actual needs. This support from our partners will allow us to achieve medium- and long-term impact on regular users and maintainers of intelligent systems. Equally important, there is an ever-growing demand in various areas of Computer Science and Engineering to build fully-verifiable intelligent systems that are secure by design. There are various applications of this approach, including autonomous vehicles, telecommunication systems, process control systems, robotics systems, and automatic pilot avionics. The EnnCore tools will push the state-of-the-art in formal verification and explainability techniques to provide assurances about the security of AI applications and to explain their security properties.

Currently, two main obstacles typically hinder further advances towards fully-verifiable intelligent systems that are secure by design. First, there is a lack of automated verification methods able to detect low-level and behavioral failures of AI applications, since such methods must incorporate knowledge about implementation aspects, system purpose and associated features. Second, due to the size and complexity of these systems, state-of-the-art formal verification techniques typically face scalability limitations. Developing a custom and scalable symbolic verification method will allow us to ensure the security of intelligent systems, which is the main impact of this research.

EnnCore will target the leading conferences and journals in AI, software verification and synthesis. The project aims to shape new sub-areas within the AI community. We want to contribute to shaping strategic emerging communities: (i) Explainable & Trustworthy AI, (ii) Debugging Neural Systems, (iii) Secure AI and (iv) Scalable Program Verification. Workshops and tutorials at the leading AI and formal verification conferences will be used as vehicles for engaging new communities.

All software and non-sensitive resources developed during the project will be made available on open-source/open-data terms. The following general-purpose software frameworks and benchmarks will be made available: (i) the neuro-symbolic safeguarding framework, including the EnnCore-EXP and EnnCore-SYMB tools; (ii) benchmarks and a supporting experimental testbench for secure AI applications; (iii) software verification and testing benchmarks for the SV-COMP and Test-Comp competitions associated with the annual TACAS conference; and (iv) program synthesis benchmarks for the SyGuS competition, which runs in conjunction with CAV and SYNT. These general-purpose software frameworks and benchmarks will impact the academic communities in two ways. First, we expect to demonstrate the efficacy of our approach in these competitions. Since we will build tools that can be used to ensure verified, trustworthy software systems, we expect that other groups will directly pick up on this. Second, the contributed benchmarks will influence the further development of software verification and synthesis tools for AI-based applications, with a particular focus on safety and security aspects.

Lastly, the PI Cordeiro directs public engagement activities within the Department of Computer Science at the University of Manchester; he has funds (available via the department) to continue this work. Understanding how to educate and enthuse the general public about formal verification, security and AI, through topics such as those related to this project, is a challenge that he enjoys and believes can be transformative for the future of computing in the UK.

Lucas Cordeiro PI_PER
Gavin Brown COI_PER
Mikel Lujan COI_PER
Mustafa Mustafa COI_PER
Xiaowei Huang COI_PER
Andre Freitas COI_PER
Andrew Paul Nisbet RESEARCH_PER
Ashkan Tousi RESEARCH_PER

Subjects by relevance
  1. Safety and security
  2. Computer programmes
  3. Artificial intelligence
  4. Cyber security
  5. Data security
  6. Verification
  7. Expert systems
  8. Neural networks (information technology)

Extracted key phrases
  1. EnnCore tool
  2. Stack symbolic software verification
  3. Stack neural software verification
  4. Stack symbolic safeguarding system
  5. Scalable symbolic verification method
  6. Trustworthy software system
  7. AI system
  8. End conceptual Guarding
  9. Verifiable intelligent system
  10. NB system
  11. Fundamental security problem
  12. Art formal verification technique
  13. Secure AI application
  14. Formal verification conference
  15. System purpose

Related Pages

UKRI project entry
